This tutorial demonstrates how to use the OpenTelemetry Collector to send application telemetry to Cloud Observability.
You will run a simple containerized application locally that sends trace data to a local containerized instance of the OpenTelemetry Collector, which in turn sends the trace data to Cloud Observability.
For more on the OpenTelemetry Collector, see the official OpenTelemetry docs.
If you’re deploying the collector in Kubernetes, we recommend using the Kubernetes Operator.
For sending metrics to Cloud Observability using the OpenTelemetry Collector, see Ingest metrics using the OpenTelemetry Collector.
Although instrumented code can send data directly to an Observability back-end (e.g. Cloud Observability) without a Collector, it is considered a best practice to send all OpenTelemetry data to your back-end via the OpenTelemetry Collector.
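In practice, pointing instrumented code at a local Collector usually only requires changing the OTLP endpoint it exports to. A minimal sketch, assuming your application uses an OpenTelemetry SDK that honors the standard OTLP exporter environment variables:
# Send telemetry to a Collector on the same host instead of directly to a back-end
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"   # OTLP over HTTP
# or, for OTLP over gRPC:
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"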
Clone the OpenTelemetry Examples repo.
git clone https://github.com/lightstep/opentelemetry-examples.git
Edit the OpenTelemetry Collector Config file.
cd opentelemetry-examples
Open collector/vanilla/collector.yaml for editing using your favorite editor. It's recommended that you make a copy of collector.yaml and save it as otelcol-lightstep.yaml (the filename that the docker run command later in this tutorial mounts).
The file looks like this:
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    logLevel: debug

  otlp/ls:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"

processors:
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/ls]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, otlp/ls]
Replace ${LIGHTSTEP_ACCESS_TOKEN} with your own Cloud Observability access token, and save the file. The access token tells Cloud Observability which project to send your telemetry data to.
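Alternatively, the Collector can expand environment variables in its config at startup, so you could keep ${LIGHTSTEP_ACCESS_TOKEN} in the file and pass the token into the container instead. A hedged sketch of that approach (note that the docker run command shown later in this tutorial does not include the -e flag, so you would need to add it):
# Alternative (assumes environment-variable expansion in your Collector version):
export LIGHTSTEP_ACCESS_TOKEN="<YOUR_ACCESS_TOKEN>"
# ...and add `-e LIGHTSTEP_ACCESS_TOKEN` to the Collector's docker run command.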
A few noteworthy items:
- The receivers configuration may appear to be empty; however, it actually means that we are using the default values for the receivers config. It is the equivalent of:

  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318

- To send data to Cloud Observability in OpenTelemetry Protocol (OTLP) format, the OTLP Exporter is used. The exporter can be called either otlp or follow the naming format otlp/<something>. In this example, the otlp/<something> format is added for clarity, to indicate that we are using OTLP to send data to Cloud Observability.
- The config also enables the Logging Exporter. This is helpful, as it prints our traces to the Collector's stdout. In this case it's your terminal.
- Receivers, processors, and exporters are wired together in the service.pipelines section of the YAML config. Specifically, we need to define a pipeline for our traces. The pipeline tells the Collector to receive trace data (via the OTLP Receiver), process it (via the Batch Processor), and send it both to stdout (via the Logging Exporter) and to Cloud Observability (via the OTLP Exporter).

If you find that you can't see data in Cloud Observability, make sure that you've defined a pipeline, and that the pipeline's exporter list (for example service.pipelines.traces.exporters) includes your otlp/ls exporter.
Launch the Collector.
First, set up Docker networking. This is a one-time setup, and is done so that our sample app container can talk to the OpenTelemetry Collector container.
Open a new terminal window in the opentelemetry-examples folder, and run the following command:
docker network create --driver=bridge -o "com.docker.network.bridge.enable_icc"="true" otel-collector-demo
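If you'd like to confirm that the network was created, you can inspect it (an optional check):
# Optional: verify the bridge network exists
docker network inspect otel-collector-demo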
Now, in the same terminal window, run the Collector. Note how it is using the newly-created otel-collector-demo network.
docker run -it --rm \
-p 4317:4317 \
-p 4318:4318 \
-v $(pwd)/collector/vanilla/otelcol-lightstep.yaml:/otel-config.yaml \
--network="otel-collector-demo" \
-h otel-collector \
--name otel-collector \
otel/opentelemetry-collector-contrib:0.53.0 \
"/otelcol-contrib" \
"--config=otel-config.yaml"
A few noteworthy items about the above command. It:

- Exposes the Collector's gRPC port (4317) and HTTP port (4318) for ingesting data in OTLP format. Code instrumented with OpenTelemetry will send data in the OTLP format.
- Mounts the otelcol-lightstep.yaml file that you modified earlier to the otel-config.yaml file in the container's internal filesystem.
- Points the Collector at that mounted file via the --config flag.

Ensure that you are running the docker command from the opentelemetry-examples folder. Once the Collector is up, you can optionally verify that the container is running, as shown below.
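A quick optional check (not part of the original steps) is to list running containers from another terminal; you should see otel-collector with ports 4317 and 4318 published:
# Optional: confirm the Collector container is running and its OTLP ports are published
docker ps --filter name=otel-collector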
Launch the sample app.
Open a new terminal window and run the command below. Note how we are using the same network here as we did for the OpenTelemetry Collector, otel-collector-demo, so that the app can reach the Collector.
docker run -it --rm -h go-sample-server \
--network="otel-collector-demo" \
-p 9000:9000 ghcr.io/lightstep/go-sample-server:1.0.0
You should see output that looks something like this:
Call the service.
Open a new terminal window, and run the following:
curl http://localhost:9000
You should see output that looks something like this:
When you return to the terminal window running the OpenTelemetry Collector, you'll notice that the Logging Exporter has printed your trace to stdout:
When you return to the terminal window running the sample app, you'll notice the following new output:
See the Traces in Cloud Observability.
Log into Cloud Observability. You’ll be able to see the services listed in the Service Directory.
To view traces in your Cloud Observability project, click Explorer in the left navigation bar, and then click on any span in the Trace Analysis table.
For a more detailed trace exploration, check out Cloud Observability Notebooks.
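If you'd like more spans to analyze in Explorer, you can call the sample service a few more times; each request produces a new trace. For example:
# Optional: generate several traces by calling the service repeatedly
for i in $(seq 1 5); do curl -s http://localhost:9000; echo; done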
The default OTLP Exporter from a Collector enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable certain other gRPC features. This page contains a complete list of configuration parameters for the Collector gRPC client.
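As an illustration only, those settings would typically appear on the otlp/ls exporter in your config; compression and tls are standard OTLP exporter options, but check the page linked above for the exact parameters supported by your Collector version:
exporters:
  otlp/ls:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"
    compression: gzip   # the default; set to "none" to disable compression
    tls:
      insecure: false   # keep TLS enabled when sending to Cloud Observability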
In the event that you are unable to establish a gRPC connection to the Cloud Observability platform, you can use the grpcurl tool to ensure connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project's access token:
grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list
You should see the following output, or something similar:
grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService
If you do not see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.
For additional troubleshooting recommendations, see Troubleshooting Missing Data in Cloud Observability.