This tutorial demonstrates how to use the OpenTelemetry Collector to send application telemetry to Lightstep Observability.

You will run a simple containerized application locally that sends trace data to a local containerized instance of the OpenTelemetry Collector, which in turn sends the trace data to Lightstep Observability.

(Image: otel-collector-example)

For more on the OpenTelemetry Collector, see the official OpenTelemetry docs.

If you’re deploying the collector in Kubernetes, we recommend using the Kubernetes Operator.

For sending metrics to Lightstep using the OpenTelemetry Collector, see Ingest metrics using the OpenTelemetry Collector.

Prerequisites

Although instrumented code can send data directly to an observability backend (e.g. Lightstep) without a Collector, it is considered a Lightstep best practice to send all OpenTelemetry data to your backend via the OpenTelemetry Collector.
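
In practice, this means pointing your instrumented application's OTLP exporter at the Collector rather than at Lightstep directly. The sample app used in this tutorial is already configured to do this; for your own code, a minimal sketch using the standard OpenTelemetry SDK environment variables (the service name below is a hypothetical example):

export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"  # local Collector, OTLP over gRPC
export OTEL_SERVICE_NAME="my-service"                       # hypothetical service name shown in Lightstep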

Tutorial: Running the OpenTelemetry Collector Locally

  1. Clone the OpenTelemetry Examples repo.

     git clone https://github.com/lightstep/opentelemetry-examples.git

  2. Edit the OpenTelemetry Collector Config file.

     cd opentelemetry-examples

    Open collector/vanilla/collector.yaml for editing using your favorite editor.

    It’s recommended that you make a copy of collector.yaml and save it as otelcol-lightstep.yaml in the same folder, since that is the file name the Docker command in step 3 mounts into the Collector container.
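
    For example, from the opentelemetry-examples folder:

     cp collector/vanilla/collector.yaml collector/vanilla/otelcol-lightstep.yaml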

    The file looks like this:

     receivers:
       otlp:
         protocols:
           grpc:
           http:

     exporters:
       logging:
         logLevel: debug
       otlp/ls:
         endpoint: ingest.lightstep.com:443
         headers:
           "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"

     processors:
       batch:

     service:
       pipelines:
         traces:
           receivers: [otlp]
           processors: [batch]
           exporters: [logging, otlp/ls]

         metrics:
           receivers: [otlp]
           processors: [batch]
           exporters: [logging, otlp/ls]

    Replace ${LIGHTSTEP_ACCESS_TOKEN} with your own Lightstep Access Token, and save the file. The access token tells Lightstep which project to send your telemetry data to.
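
    Alternatively, the Collector can expand environment variables referenced in its config at startup, so you may be able to leave the ${LIGHTSTEP_ACCESS_TOKEN} placeholder in the file and supply the token via the environment instead. A minimal sketch, which assumes you also forward the variable into the Collector container when you run it in step 3:

     # Keep the real token out of the config file by exporting it in your shell.
     export LIGHTSTEP_ACCESS_TOKEN="<YOUR_ACCESS_TOKEN>"

     # In step 3, add -e LIGHTSTEP_ACCESS_TOKEN to the docker run command so the
     # Collector container can see the variable and expand the placeholder.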

    A few noteworthy items:

    • The Collector can ingest data using both HTTP and gRPC. The receivers configuration may appear to be empty; leaving the protocol settings blank means the Collector uses the default values for the receivers config. It is equivalent to:
     receivers:
       otlp:
         protocols:
           grpc:
             endpoint: 0.0.0.0:4317
           http:
             endpoint: 0.0.0.0:4318
    
    • Because Lightstep ingests data in native OpenTelemetry Protocol (OTLP) format, the OTLP Exporter is used. The exporter can be named either otlp or follow the format otlp/<something>. In this example, the otlp/<something> format is used for clarity, to indicate that we are sending data to Lightstep via OTLP.
    • Though not mandatory, we are also using a Logging Exporter. This is helpful, as it prints our traces to the Collector’s stdout (in this case, your terminal).
    • We must define a pipeline in the service.pipelines section of the YAML config. Specifically, we need to define a pipeline for our traces. The pipeline tells the Collector:
      • Where it’s getting trace data from (it’s being sent via OTLP)
      • If there’s any processing that needs to be done (this is optional)
      • Where to send the data. In our case, it’s sent to stdout (via the Logging Exporter) and to Lightstep (via the OTLP Exporter)

    If you find that you can’t see data in Lightstep, make sure that you’ve defined a pipeline, and that the pipeline’s exporters list (for example, service.pipelines.traces.exporters) includes your otlp/ls exporter.

  3. Launch the Collector.

    First, set up Docker networking. This is a one-time setup, and is done so that our sample app container can talk to the OpenTelemetry Collector container.

    Open a new terminal window in the opentelemetry-examples folder, and run the following command:

     docker network create --driver=bridge -o "com.docker.network.bridge.enable_icc"="true" otel-collector-demo

    Now, in the same terminal window, run the Collector. Note how it uses the newly created otel-collector-demo network.

     docker run -it --rm \
         -p 4317:4317 \
         -p 4318:4318 \
         -v $(pwd)/collector/vanilla/otelcol-lightstep.yaml:/otel-config.yaml \
         --network="otel-collector-demo" \
         -h otel-collector \
         --name otel-collector \
         otel/opentelemetry-collector-contrib:0.53.0 \
         "/otelcol-contrib" \
         "--config=otel-config.yaml"

    A few noteworthy items about the above command. It:

    • Exposes the OpenTelemetry Collector’s gRPC port (4317) and HTTP port (4318) for ingesting data in OTLP format, which is what code instrumented with OpenTelemetry emits.
    • Maps the local otelcol-lightstep.yaml file that you modified earlier to the otel-config.yaml file in the container’s internal filesystem.
    • Tells the OpenTelemetry Collector where to look for the config YAML file via the --config flag.

    Ensure that you are running the docker command from the opentelemetry-examples folder.
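
    If the sample app has trouble reaching the Collector in the next step, a quick first check is to confirm (in a separate terminal) that the Collector container is running and that the Docker network exists:

     docker ps --filter "name=otel-collector"
     docker network inspect otel-collector-demo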

  4. Launch the sample app.

    Open a new terminal window and run the command below. Note that we are using the same network here as we did for the OpenTelemetry Collector, otel-collector-demo, so that the app can reach the Collector.

     docker run -it --rm -h go-sample-server \
       --network="otel-collector-demo" \
       -p 9000:9000 ghcr.io/lightstep/go-sample-server:1.0.0

    You should see output that looks something like this: (screenshot: otel-collector-sample-app-output)

  5. Call the service.

    Open a new terminal window, and run the following:

     curl http://localhost:9000

    You should see output that looks something like this: (screenshot: curl-output)

    When you return to the terminal window running the OpenTelemetry Collector, you’ll notice that the Logging Exporter has printed your trace to stdout:

    (screenshot: logging-exporter-sample)

    When you return to the terminal window running the sample app, you’ll notice the following new output:

    (screenshot: registration-server-output)
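
    If you’d like more trace data to explore in the next step, you can call the endpoint a few more times, for example:

     for i in $(seq 1 5); do curl http://localhost:9000; done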

  6. See the Traces in Lightstep.

    Log into Lightstep. You’ll be able to see the services listed in the Service Directory.

    (screenshot: registration-server-lightstep)

    To view traces in your Lightstep Observability project, click Explorer in the left navigation bar, and then click on any span in the Trace Analysis table.

    For a more detailed trace exploration, check out Lightstep Observability Notebooks.

Troubleshooting

By default, the Collector’s OTLP Exporter enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable certain other gRPC features. The OpenTelemetry Collector documentation contains a complete list of configuration parameters for the Collector gRPC client.
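
For example, if a proxy in your environment has trouble with gzip-compressed gRPC payloads, one knob you might try is the exporter’s compression setting. This is only a sketch under that assumption; the exact keys and accepted values depend on your Collector version, so check the gRPC client configuration reference for the version you run:

exporters:
  otlp/ls:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LIGHTSTEP_ACCESS_TOKEN}"
    # Assumption: disable gzip compression while debugging proxy issues;
    # remove or revert this once connectivity is confirmed.
    compression: none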

In the event that you are unable to establish a gRPC connection to the Lightstep Observability platform, you can use the grpcurl tool to verify connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project’s access token:

grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list

You should see the following output, or something similar:

grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService

If you do not see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.
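
As an additional sanity check that plain TLS connectivity to the ingest endpoint works from your host (independent of gRPC), you can, for example, open a TLS session with openssl:

# Should print the certificate chain and report a completed TLS handshake.
openssl s_client -connect ingest.lightstep.com:443 -servername ingest.lightstep.com </dev/null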

For additional troubleshooting recommendations, see Troubleshooting Missing Data in Lightstep.