Blogs

Splunk Donates eBPF Telemetry Data Collector to CNCF

Splunk Inc. announced during the KubeCon + CloudNativeCon North America conference that it donated a data collector to the OpenTelemetry project run by the Cloud Native Computing Foundation (CNCF).

Splunk’s data collector runs as a sandboxed program inside the Linux kernel, taking advantage of extended Berkeley Packet Filter (eBPF) technology to simplify the collection of networking telemetry.
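To illustrate the kind of kernel-level networking telemetry eBPF makes possible, the following bpftrace one-liner is a minimal sketch (not Splunk's collector, which is a far more complete system) that attaches a probe to the kernel's `tcp_sendmsg` function and tallies bytes sent per process — no application instrumentation required. It assumes a Linux host with bpftrace installed and root privileges:

```
# Count TCP bytes sent, keyed by process name; press Ctrl-C to print the map.
# arg2 of tcp_sendmsg(struct sock *, struct msghdr *, size_t size) is the byte count.
bpftrace -e 'kprobe:tcp_sendmsg { @bytes_sent[comm] = sum(arg2); }'
```

Because the probe observes a kernel function that every TCP sender passes through, it captures traffic from all processes by default — which is the "instrumented without agents" property the article describes.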

The OpenTelemetry initiative itself spans a range of open source tools, application programming interfaces (APIs) and software development kits (SDKs) that are used to instrument applications. Previously, Splunk contributed the SignalFx Smart Agent and Smart Gateway to the OpenTelemetry project along with more than 64,000 code contributions.

Morgan McLean, a director of product management for Splunk, said that while OpenTelemetry is officially in beta, some elements of the project are more mature than others. For example, tools for capturing metrics and traces are already being employed, while another set of tools for capturing log data will be ready sometime next year, he said.

The data collector contributed by Splunk is among the first to operate at the kernel level, added McLean. That approach makes it possible for some types of data to be captured by default, rather than requiring developers to instrument every application with agents to enable an observability platform. It will also contribute toward the eventual convergence of network operations and DevOps processes.

Going forward, DevOps teams should expect to employ a mix of open source data collectors that operate at both the eBPF and application level, said McLean. The eventual goal is to instrument every application by default by making it much simpler to collect data. Today, most DevOps teams employ a mix of proprietary and open source agent software that must first be deployed and then integrated with every application they build. Given the cost and level of effort required, the percentage of applications that are actually instrumented is, not surprisingly, fairly low.

However, as agent software becomes more readily accessible, the percentage of applications that are instrumented should increase considerably in the years ahead. For DevOps teams that depend on that data to optimize application performance, that capability should prove to be a boon for improving overall observability in IT environments.

Observability, in one form or another, has always been a core tenet of any DevOps best practice. Initially, DevOps teams focused on continuous monitoring as the most effective way to proactively manage application environments. However, it can still take days, sometimes weeks, to discover the root cause of an issue.

Monitoring focuses on predefined metrics to identify when a specific platform or application is performing within expectations. The metrics tracked generally focus on, for example, resource utilization. Observability platforms combine metrics, logs and traces—a specialized form of logging—to instrument applications in a way that makes it simpler to troubleshoot issues without relying on a limited set of predefined metrics focused on a specific process or function.

Those observability platforms then make it possible to correlate events so that it is easier to identify anomalous behavior indicative of an issue’s root cause. Armed with these insights, it becomes a lot simpler for IT teams to resolve issues faster.
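The correlation the article describes rests on a simple mechanism: spans and log records that belong to the same request share a trace identifier, so a platform can join them after the fact. The sketch below is a hypothetical, stdlib-only illustration of that idea — the record shapes and helper names are invented for this example and are not the OpenTelemetry data model:

```python
import time
import uuid

def new_trace_id() -> str:
    """A 32-hex-character trace id, similar in shape to a W3C Trace Context id."""
    return uuid.uuid4().hex

def make_span(trace_id: str, name: str, start: float, end: float) -> dict:
    """A minimal span record: an operation name, its timing, and the shared trace id."""
    return {"trace_id": trace_id, "name": name,
            "duration_ms": round((end - start) * 1000, 3)}

def make_log(trace_id: str, message: str) -> dict:
    """A log record stamped with the same trace id as the span it occurred under."""
    return {"trace_id": trace_id, "message": message}

# Simulate one request: its span and its logs all carry the same trace id.
trace_id = new_trace_id()
start = time.monotonic()
logs = [make_log(trace_id, "checkout started"),
        make_log(trace_id, "payment declined")]
span = make_span(trace_id, "POST /checkout", start, time.monotonic())

# Correlation: recover every log line that belongs to this span's trace.
related = [rec for rec in logs if rec["trace_id"] == span["trace_id"]]
```

Joining on the trace id is what lets an observability platform move from "this endpoint is slow" (a predefined metric) to "this specific request was slow, and here is exactly what it logged along the way."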

It’s unclear when OpenTelemetry tools will become more widely employed, but as more tools for collecting data become available, the impact on DevOps will be nothing short of profound.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
