Why OpenTelemetry (OTel) Matters

OpenTelemetry (OTel) is an open-source framework for collecting telemetry data—traces, metrics, and logs—with the goal of standardising observability across distributed systems. Backed by the Cloud Native Computing Foundation (CNCF), OTel is rapidly becoming the industry standard for instrumentation in modern architectures.

Why OTel Matters

The primary advantage of OTel is its vendor-neutral approach. Instead of locking into a proprietary agent or SDK, OTel provides a consistent model for instrumentation and data export. This gives organisations full control over how they implement and operationalise observability, enabling flexibility in backend choices, whether that's Prometheus, Jaeger, Honeycomb, or a managed APM solution.

OTel supports two main instrumentation strategies:


Auto-Instrumentation (Zero-Code)

For common languages and frameworks, OTel provides agents that automatically capture telemetry without requiring code changes. This is ideal for teams looking for quick wins with minimal development effort.
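
As a rough illustration, zero-code instrumentation of a Python service typically amounts to installing the OTel distribution and launching the application through the opentelemetry-instrument wrapper. The service name and collector endpoint below are assumed placeholders, not values from any particular deployment:

    pip install opentelemetry-distro opentelemetry-exporter-otlp
    opentelemetry-bootstrap -a install     # detects installed frameworks and adds matching instrumentation packages

    # Standard OTel environment variables; the endpoint assumes a local collector listening on OTLP/gRPC
    export OTEL_SERVICE_NAME=checkout-service
    export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
    opentelemetry-instrument python app.py     # app.py itself is unchanged

The application code is untouched; all telemetry wiring happens at launch time, which is what makes this approach attractive for quick wins.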


Manual Instrumentation

For complex or custom systems, OTel exposes APIs and SDKs that allow developers to instrument code directly. While powerful, this approach demands additional development and testing, making it more suitable for organisations with mature engineering practices.
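
For illustration, a minimal manual trace using the Python OTel API and SDK might look like the sketch below; the service, span, and attribute names are hypothetical:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire the SDK: a tracer provider with a batch processor exporting spans to the console
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("payments-service")

    # Wrap a unit of work in a span and attach business context as attributes
    with tracer.start_as_current_span("process-payment") as span:
        span.set_attribute("payment.amount", 42.50)
        span.set_attribute("payment.currency", "AUD")
        # ... business logic goes here ...

Swapping ConsoleSpanExporter for an OTLP exporter is all it takes to route the same spans to a collector or backend, which is where the developer effort pays off.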

The Trade-Off

While OTel offers flexibility and cost control, it introduces operational overhead. Managing collectors, exporters, and pipelines requires planning and governance. Compared to platform-based observability solutions, OTel can feel heavy for organisations without strong CI/CD workflows or centralised DevOps capabilities.
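
To make that overhead concrete, even a minimal OTel Collector pipeline involves a configuration along these lines, and every receiver, processor, and exporter must be chosen, tuned, and kept current. The Jaeger endpoint here is an assumed value:

    receivers:
      otlp:
        protocols:
          grpc:
          http:

    processors:
      batch:

    exporters:
      otlp/jaeger:
        endpoint: jaeger:4317      # assumed backend address
        tls:
          insecure: true

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/jaeger]

Multiply this by metrics and logs pipelines, multiple environments, and version upgrades, and the governance burden becomes clear.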

Managed Observability Platforms

Solutions like Dynatrace, Datadog, and New Relic have invested heavily in out-of-the-box, agent-based instrumentation. These platforms minimise complexity by providing turnkey deployment and integrated analytics. For many use cases, installing a vendor agent and configuring server-side settings is significantly easier than building and maintaining an OTel pipeline.

While these platforms can ingest OTel data, their core value proposition often lies in proprietary instrumentation and advanced features—such as AI-driven anomaly detection and automated root cause analysis.
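
In practice, sending OTel data to one of these platforms is usually just a matter of pointing the OTLP exporter at the vendor's endpoint and supplying an authentication header. The endpoint and header below are purely illustrative placeholders, not any specific vendor's values:

    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    # Hypothetical vendor OTLP endpoint and API-key header; the rest of the SDK setup is unchanged
    exporter = OTLPSpanExporter(
        endpoint="https://otlp.example-vendor.com:4317",
        headers=(("api-key", "YOUR_API_KEY"),),
    )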

OTel-Oriented Platforms

Platforms like Honeycomb, Elastic, and the Grafana stack are designed with OTel in mind. They offer strong visualisation and correlation for OTel data and often integrate directly with OTel collectors, reducing setup complexity. For organisations committed to open standards, these solutions provide a natural fit.

When to Choose What


Use an OTel-oriented solution (e.g., Grafana, Honeycomb) if:

  • Operational overhead is manageable and CI/CD pipelines are mature.
  • Your architecture primarily uses OTel-supported languages (Java, .NET, Go, Python, etc.).
  • You prioritise vendor neutrality, cost control, and flexibility.

Use a managed observability platform (e.g., Dynatrace, Datadog) if:

  • Your organisation is large, with a mix of legacy and modern systems.
  • Operational overhead is a concern (limited DevOps or testing resources).
  • Teams are siloed and need fast, low-touch observability.

Alan Wang
Principal Observability and Site Reliability Engineer

Alan Wang is a seasoned Observability and Site Reliability Engineer with over 10 years of experience delivering enterprise-level IT solutions that enhance system reliability, performance, and scalability. With deep expertise in APM platforms such as Dynatrace, New Relic, AppDynamics, and Sumo Logic, he has led large-scale implementations that enable businesses to proactively monitor and optimise their digital ecosystems.

His experience spans DevOps strategy, Google SRE practices, automation, infrastructure as code (IaC), and cloud-native technologies. As the Founder of SRE Universe, he leverages his expertise to help businesses navigate the complexities of modern Observability and site reliability, driving efficiencies, reducing downtime, and optimising digital experiences.

Alan is passionate about distributed tracing, log analytics, and metric-driven insights. He has played pivotal roles in presales, technical consultation, and project delivery, ensuring Observability is embedded as a strategic capability within organisations.
