Todd Persen is the CEO and co-founder of Era Software. He was previously CTO and co-founder of InfluxData, creator of the InfluxDB database.
The pandemic has accelerated the pace of transformation across industries, and digital business initiatives remain the top priority for CIOs. With operational efficiency critical to business success, CIOs can no longer rely on legacy technologies; doing so will hamper their ability to adapt, enable new digital services, and handle growing data volumes.
Data has been and will continue to be the currency of digital transformation. Data is knowledge about your business and your customers, but only if you can extract value from it easily and cheaply.
The fundamental problem with extracting this value is that application environments are becoming increasingly complex. The highly distributed use of microservices, containers, and orchestration tools makes it difficult to understand exactly what is going on in the system.
At the same time, digital services increasingly straddle public clouds, hybrid clouds, and on-premises or private clouds. And given the volume and velocity of observability data generated by digital operations, leaders must give their teams the ability to manage and process this data in real time.
Observability could be the X factor here: a modern approach to data management. It is the evolution of traditional monitoring, designed to deliver in-depth insights from logs, metrics, and traces collected at scale across modern applications and infrastructure environments. Observability helps ensure you can deliver reliable digital services despite the growing complexity of cloud services, positively impacting both the customer experience and the bottom line.
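To make the three signal types concrete, here is a minimal sketch of what a log record, a metric sample, and a trace span might look like as structured data. The field names are illustrative assumptions, not any specific vendor's schema; the point is that the signals share context (service, operation) that lets engineers pivot between them.

```python
# Illustrative examples of the three observability signal types.
# Field names are hypothetical, not a specific vendor's schema.
log = {"timestamp": "2022-03-01T12:00:05Z", "level": "ERROR",
       "service": "checkout", "message": "payment gateway timeout"}

metric = {"name": "http_request_duration_seconds", "value": 0.42,
          "labels": {"service": "checkout", "status": "500"}}

trace_span = {"trace_id": "abc123", "span_id": "def456",
              "parent_span_id": None, "operation": "POST /pay",
              "duration_ms": 420}

# Correlation is what makes this "observability" rather than three silos:
# the metric's labels and the log's service field share context, so an
# engineer can pivot from a spiking metric to traces to raw log lines.
assert metric["labels"]["service"] == log["service"]
```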
Although observability is a simple goal, many organizations realize that existing monitoring tools cannot keep up with the massive volumes of data created by modern cloud environments. Therefore, CIOs and business leaders must rethink how to extract critical insights from high-volume observability data.
How Observability Aligns IT and Business
The IT world has introduced new operating models such as BizDevOps, DataOps, and DevSecOps. These moves signal the importance of aligning business and IT around security and around data collection and use to accelerate innovation. However, the growing complexity of our digital world often leaves IT and business teams struggling to manage high-volume, high-velocity data. Observability can serve as a bridge between business users and IT, removing complexity while providing visibility.
Across industries, CIOs and engineering leaders are responsible for the reliability of digital services. They also play a vital role in supporting key business performance metrics such as customer retention. If cloud services are unreliable and incidents take a long time to resolve, the result is a poor customer experience, which leads to churn. Engineering and IT teams therefore need to closely observe and monitor service health to avoid a negative impact on customer attraction and retention.
The gateway to observability is to collect data into a centralized searchable data store and make it easily accessible. This allows IT teams to create custom dashboards, explore data, and learn more about their systems. Eventually, IT teams will need to move from a high-level view and dig deeper into issues or further explore why their applications are behaving a certain way.
This is where the cost-effective collection and analysis of large-scale log data is essential. Digging into your system and application logs for information is crucial for root cause and forensic analyses.
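The drill-down workflow described above can be sketched in a few lines. This is a hedged illustration, not a real product's API: in practice the "store" would be a centralized searchable system, but an in-memory list is enough to show the query pattern an engineer uses when moving from a high-level dashboard to root-cause analysis.

```python
from datetime import datetime

# Stand-in for a centralized log store. In a real deployment this would be
# a searchable backend; a list of dicts illustrates the same query pattern.
logs = [
    {"ts": datetime(2022, 3, 1, 12, 0, 5), "service": "checkout",
     "level": "ERROR", "msg": "payment gateway timeout"},
    {"ts": datetime(2022, 3, 1, 12, 0, 7), "service": "checkout",
     "level": "INFO", "msg": "retrying payment"},
    {"ts": datetime(2022, 3, 1, 12, 1, 2), "service": "search",
     "level": "INFO", "msg": "query served"},
]

def search_logs(logs, service=None, level=None, since=None):
    """Filter log records the way a dashboard drill-down would."""
    results = []
    for rec in logs:
        if service and rec["service"] != service:
            continue  # wrong service
        if level and rec["level"] != level:
            continue  # wrong severity
        if since and rec["ts"] < since:
            continue  # outside the incident window
        results.append(rec)
    return results

# Drill down: checkout errors within the incident window.
errors = search_logs(logs, service="checkout", level="ERROR",
                     since=datetime(2022, 3, 1, 11, 0))
for rec in errors:
    print(rec["ts"], rec["msg"])
```

The same narrowing (service, severity, time range) is what makes log search effective for forensic analysis: each filter cuts the haystack before a human reads anything.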
How We Think About Scale and Performance
It is important to understand the true cost of operating self-managed log management systems based on open-source technologies, or cloud-hosted versions of the same open-source stack. Although modern IT teams have successfully used these log management solutions to improve mean time to resolution (MTTR), they were not designed for today's cloud-scale log volumes generated by increasingly complex distributed systems.
For performance-oriented organizations that require real-time access to large volumes of log data, architecting observability and log management solutions around object storage may be the next step. Object storage provides cost advantages and allows organizations to decouple storage from compute, so teams pay only for the infrastructure resources they actually consume rather than for dedicated processing and storage capacity.
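The decoupled pattern can be sketched as follows. This is a simulation under stated assumptions: the object store is a plain dict standing in for a real service, and the key layout is hypothetical. The idea it demonstrates is that compute is stateless and touches only the log segments a query needs, which is what makes pay-per-use possible.

```python
import json

# Simulated object store: keys are segment paths, values are newline-
# delimited JSON log records. A real deployment would use a cloud object
# store; the access pattern is the same.
object_store = {
    "logs/2022-03-01/checkout.jsonl": '{"level": "ERROR", "msg": "timeout"}\n'
                                      '{"level": "INFO", "msg": "retry"}\n',
    "logs/2022-03-01/search.jsonl":   '{"level": "INFO", "msg": "ok"}\n',
}

def query(prefix, level):
    """A stateless 'compute' step: fetch matching segments, filter, discard."""
    hits = []
    for key, body in object_store.items():
        if not key.startswith(prefix):
            continue  # skip segments this query never touches
        for line in body.splitlines():
            rec = json.loads(line)
            if rec["level"] == level:
                hits.append(rec["msg"])
    return hits

print(query("logs/2022-03-01/checkout", "ERROR"))  # → ['timeout']
```

Because the query node holds no data of its own, it can be scaled up for a heavy investigation and scaled to zero afterward, while the stored logs sit on cheap object storage.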
Upcoming Industry Trends
The pandemic has accelerated some trends that have advanced observability data management. These include moving to the cloud and multicloud, adopting modern application development and delivery practices, and using data to generate predictive models.
Organizations in all industries will be increasingly data-driven, including retail, manufacturing, gaming, and financial services. As a result, many industries are embracing DevSecOps principles and implementing industry-specific workflows with observability to help protect revenue.
Managing real-time observability data is becoming increasingly important for cybersecurity due to expanding attack surfaces, driven largely by employees working from home. But many organizations can't use huge volumes of data for proactive, rapid defense against fast-moving cyberattackers. Fighting today's threats requires a different way of using data to keep our organizations secure.
Observability data management is at the center of it all. With its help, IT and security organizations can reduce the complexity and pain of accessing data so they can collect, store, analyze and deliver real-time information where it matters most.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs, and technology executives.