In every case, the trace and metric data that your service or its dependencies emit are of limited use unless you can actually collect that data somewhere for analysis and alerting. The OpenTelemetry component responsible for batching and transporting telemetry data to a backend system is known as an exporter.
To understand how OpenTelemetry’s exporter model works, it is useful to know a little about how instrumentation is integrated into service code. Generally, instrumentation can live at three different points: your service, its library dependencies, and its platform dependencies. Integrating at the service level is fairly straightforward: you declare a dependency in your code on the appropriate OpenTelemetry package and deploy it with your code. Library dependencies are similar, except that your libraries would generally declare a dependency only on the OpenTelemetry API. Platform dependencies are a more unusual case. When I say ‘platform dependency’, I mean the pieces of software you run to provide services to your service, things like Envoy and Istio. These deploy their own copy of OpenTelemetry, independent of your actions, but will also generally emit trace context that your service will want to participate in.
The exporter interface is implemented by the OpenTelemetry SDKs, and uses a simple plug-in model that allows telemetry data to be translated into whatever format a backend system requires and transmitted to that system. Exporters can be composed and chained together, allowing common functionality (like tagging data before export, or providing a queue to ensure consistent performance) to be shared across multiple protocols.
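To make the plug-in model concrete, here is a minimal sketch in Python. The names here (`SpanExporter`, `ConsoleExporter`, `TaggingExporter`, and the `export` method) are illustrative stand-ins, not the actual interface of any OpenTelemetry SDK; the real types and signatures vary by language.

```python
# Illustrative sketch of the exporter plug-in model (not the real SDK API).
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)


class SpanExporter:
    """The plug-in interface: the SDK hands each exporter a batch of spans."""

    def export(self, spans):
        raise NotImplementedError


class ConsoleExporter(SpanExporter):
    """A stand-in 'backend': a real exporter would translate spans into a
    backend-specific wire format and transmit them over the network."""

    def __init__(self):
        self.exported = []

    def export(self, spans):
        for span in spans:
            self.exported.append(span)
            print(f"{span.name} {span.attributes}")


class TaggingExporter(SpanExporter):
    """Composition: wraps another exporter and tags every span on the way out."""

    def __init__(self, wrapped, tags):
        self.wrapped = wrapped
        self.tags = tags

    def export(self, spans):
        for span in spans:
            span.attributes.update(self.tags)
        self.wrapped.export(spans)


# Chain the two: tag spans with the environment, then hand them to the backend.
console = ConsoleExporter()
exporter = TaggingExporter(console, {"env": "staging"})
exporter.export([Span("GET /users"), Span("SELECT users")])
```

The point of the chain is that the tagging logic is written once and works in front of any backend exporter you plug in behind it.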
To put this in more concrete terms, let’s compare OpenTelemetry to OpenTracing. In OpenTracing, if you wanted to switch which system you were reporting data to, you’d need to replace the entire tracer component with another one, for example swapping out the Jaeger client library for the LightStep client library. In OpenTelemetry, you simply change the exporter component, or even add a new one and export to multiple backend systems simultaneously. This makes it much easier to try out new analysis tools, or to send your telemetry data to different analysis tools in different environments.
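Sketched in the same illustrative style as above (again, not the real SDK API), exporting to several backends at once amounts to fanning each batch out to a list of exporters, so adding or swapping a backend is a configuration change rather than a tracer replacement:

```python
# Illustrative sketch (not the real SDK API): exporting to multiple backends.
class InMemoryBackend:
    """Stands in for a backend-specific exporter such as Jaeger or LightStep."""

    def __init__(self, name):
        self.name = name
        self.received = []

    def export(self, spans):
        self.received.extend(spans)


class MultiExporter:
    """Fans each batch of spans out to every configured exporter."""

    def __init__(self, *exporters):
        self.exporters = exporters

    def export(self, spans):
        for exporter in self.exporters:
            exporter.export(spans)


jaeger_like = InMemoryBackend("jaeger")
lightstep_like = InMemoryBackend("lightstep")
multi = MultiExporter(jaeger_like, lightstep_like)

# Both backends receive every span; dropping one is a one-line change.
multi.export(["span-a", "span-b"])
```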
While the exporter model is very convenient, there are times when you can’t actually redeploy a service in order to add a new exporter. In some organizations, there’s a disconnect between the people writing the instrumented code and the people running the observability platform, which can slow down changes to where data goes. In addition, some teams may prefer to abstract the entire exporter model out of their code and into a separate service. This is where the OpenTelemetry Collector comes in. The Collector is a separate process designed to be a ‘sink’ for telemetry data emitted by many processes, which it can then export to backend systems. The Collector supports two deployment strategies: running as an agent alongside a service, or as a standalone remote application. You’d generally use both. The agent is deployed with your service, running as a separate local process or in a sidecar, while the standalone Collector runs as its own application in a container or virtual machine. Each agent forwards telemetry data to the standalone Collector, which can then export it to a variety of backend systems such as LightStep, Jaeger, Prometheus, and more.
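A Collector deployment is driven by a YAML configuration that wires receivers to exporters through pipelines. The fragment below is a hedged sketch of that shape: the exact component names and settings (for example, whether a `jaeger` exporter is available, and the endpoints shown) depend on your Collector version and distribution, so treat it as illustrative rather than copy-paste ready.

```yaml
receivers:
  otlp:                 # accept data pushed from agents or instrumented services
    protocols:
      grpc:

processors:
  batch:                # batch telemetry before export for consistent performance

exporters:
  jaeger:
    endpoint: jaeger-collector:14250   # illustrative endpoint
  prometheus:
    endpoint: 0.0.0.0:8889             # illustrative scrape endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

Because the backends are named only in this file, the observability team can reroute telemetry without asking service owners to rebuild or redeploy anything.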
Regardless of how you choose to instrument or deploy OpenTelemetry, exporters give you a lot of powerful options for reporting telemetry data. You can export directly from your service, proxy through an agent, or aggregate into standalone collectors, or even mix these approaches! Ultimately, what’s important is that you get that telemetry data into an observability platform that can help you analyze and understand what’s going on in your system.