Send telemetry data via OTLP/HTTP

The OpenTelemetry Protocol (OTLP) specifies gRPC as the primary transport mechanism for telemetry data moving between telemetry sources, intermediary nodes such as agents and collectors, and telemetry backends. While gRPC has its benefits, it also has drawbacks in certain environments:

  • gRPC is a relatively large software dependency, whereas HTTP is a much smaller one, and many languages already ship with robust HTTP libraries built in

  • gRPC-based transport can pose issues in certain types of network infrastructure

In April 2020, the community defined and proposed an HTTP Transport Extension to OTLP, specifying how OTLP messages travel over HTTP. There are specifications for sending OTLP over HTTP with both Protocol Buffers (protobuf) encoding and JSON serialization. More information about these implementations can be found in the Further reading and resources section of this article.

Cloud Observability OTLP/HTTP endpoints

Cloud Observability’s public and private collectors accept OTLP sent via HTTP, in both protobuf encoding and JSON serialization. Cloud Observability’s public OTLP/HTTP endpoints are as follows:

Trace ingest endpoints

Endpoints accepting application/x-protobuf
  • OTLP versions v0.5 - v0.9:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/v1/traces

Public Satellites:

  https://ingest.lightstep.com:443/v1/traces

Endpoints accepting application/json
  • OTLP version v0.5:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/api/v2/otel/trace

Public Satellites:

  https://ingest.lightstep.com:443/api/v2/otel/trace

  • OTLP versions v0.6 and above:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/v1/traces

Public Satellites:

  https://ingest.lightstep.com:443/traces/otlp/v0.6
  or
  https://ingest.lightstep.com:443/v1/traces

If you are using OTLP v0.5, you must upgrade, as Cloud Observability support for v0.5 is being deprecated.
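To quickly verify connectivity to the public JSON trace endpoint before wiring up an SDK, you can POST a minimal payload directly. The following is a sketch, not an official client: it assumes Node 18+ (for the global fetch API) and a valid access token in the LS_ACCESS_TOKEN environment variable, and it reuses the span shape from the small_data.json example later in this article:

'use strict';

// Minimal OTLP/JSON trace payload; the IDs here are illustrative only.
const payload = {
  resourceSpans: [{
    resource: {
      attributes: [{ key: 'service.name', value: { stringValue: 'otlp-http-check' } }],
    },
    scopeSpans: [{
      scope: { name: 'manual-check' },
      spans: [{
        traceId: '71699b6fe85982c7c8995ea3d9c95df2',
        spanId: '3c191d03fa8be065',
        name: 'test-span',
        kind: 1,
        status: { code: 1 },
      }],
    }],
  }],
};

// POST to the public JSON trace ingest endpoint listed above.
fetch('https://ingest.lightstep.com/v1/traces', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Lightstep-Access-Token': process.env.LS_ACCESS_TOKEN, // assumed env var
  },
  body: JSON.stringify(payload),
}).then((res) => console.log(res.status));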

Metrics ingest endpoints

Endpoints accepting application/x-protobuf
  • OTLP versions v0.5 - v0.7:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.5
  or
  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.6

Public Satellites:

  https://ingest.lightstep.com:443/metrics/otlp/v0.5
  or
  https://ingest.lightstep.com:443/metrics/otlp/v0.6

  • OTLP version v0.9:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.9

Public Satellites:

  https://ingest.lightstep.com:443/metrics/otlp/v0.9

Endpoints accepting application/json
  • OTLP versions v0.5 - v0.7:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.5
  or
  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.6

Public Satellites:

  https://ingest.lightstep.com:443/metrics/otlp/v0.5
  or
  https://ingest.lightstep.com:443/metrics/otlp/v0.6

  • OTLP version v0.9:

On-Premise Satellites:

  https://<microsatellite_ip>:<microsatellite_port>/metrics/otlp/v0.9

Public Satellites:

  https://ingest.lightstep.com:443/metrics/otlp/v0.9

If you are using OTLP v0.5, you must upgrade, as Cloud Observability support for v0.5 is being deprecated.
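To send metrics from the OpenTelemetry JS SDK to one of the endpoints above, here is a minimal sketch using the @opentelemetry/exporter-metrics-otlp-http package (the URL below is the public v0.9 metrics endpoint from this section; swap in your Microsatellite address for on-premise ingest):

const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');

// OTLP/HTTP metrics exporter aimed at Cloud Observability's public ingest.
const metricExporter = new OTLPMetricExporter({
  url: 'https://ingest.lightstep.com/metrics/otlp/v0.9',
  headers: { 'Lightstep-Access-Token': '<LS_ACCESS_TOKEN>' },
});

Wire this exporter into a PeriodicExportingMetricReader on your MeterProvider as you would any other OTLP metrics exporter.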

On-premise Microsatellites follow the same endpoint scheme listed above. Replace ingest.lightstep.com:443 with your Microsatellite’s hostname and HTTP port number, giving example URLs such as:

  • Traces: http://{Microsatellite_ip}:{http_port}/traces/otlp/v0.9

  • Metrics: http://{Microsatellite_ip}:{http_port}/metrics/otlp/v0.9
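As an illustration of this URL scheme, here is a small JavaScript helper that builds both URLs from a Microsatellite’s host and HTTP port (the function name and defaults are hypothetical, not part of any SDK):

// Hypothetical helper: derive OTLP/HTTP ingest URLs for a Microsatellite.
function otlpEndpoints(host, httpPort, version = 'v0.9') {
  const base = `http://${host}:${httpPort}`;
  return {
    traces: `${base}/traces/otlp/${version}`,
    metrics: `${base}/metrics/otlp/${version}`,
  };
}

// e.g. { traces: 'http://192.168.54.10:55681/traces/otlp/v0.9', ... }
console.log(otlpEndpoints('192.168.54.10', 55681));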

Common use cases for OTLP/HTTP

The following are common use cases for using OTLP/HTTP:

  • Tracing or metrics instrumentation runs in Node.js or in the browser, where HTTP generally offers better performance than gRPC as a transport mechanism

  • Telemetry data needs to pass through an L7 HTTP load balancer that does not support gRPC in order to reach Cloud Observability

  • Internal network infrastructure doesn’t support gRPC

OpenTelemetry JavaScript Configuration Examples

Sending telemetry data to Cloud Observability by way of the OpenTelemetry JS SDK also requires configuring Cloud Observability’s OTLP/HTTP endpoint.

An example configuration using Cloud Observability’s public ingest:

'use strict';

const opentelemetry = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');

const accessToken = '<LS_ACCESS_TOKEN>';

const collectorOptions = {
  url: 'https://ingest.lightstep.com/traces/otlp/v0.9',
  headers: {
    'Lightstep-Access-Token': accessToken
  },
};

const traceExporter = new OTLPTraceExporter(collectorOptions);

const sdk = new opentelemetry.NodeSDK({
  resource: new Resource({
    [SemanticResourceAttributes.SERVICE_NAME]: 'local-ex-node',
  }),
  traceExporter,
  instrumentations: [getNodeAutoInstrumentations()]
});

// Start the SDK so spans are exported to Cloud Observability.
sdk.start();

When configuring the traceExporter to send telemetry data to a private microsatellite, be sure to replace the url in collectorOptions with the appropriate microsatellite URL, http://{satellite_ip}:{http_port}/traces/otlp/v0.9.
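For example, here is a sketch of the same exporter pointed at a hypothetical Microsatellite at 192.168.54.10 with HTTP port 55681 (the address and port are placeholders for your own deployment):

const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');

// Same exporter as above, but targeting an on-premise Microsatellite.
const traceExporter = new OTLPTraceExporter({
  url: 'http://192.168.54.10:55681/traces/otlp/v0.9', // placeholder address
  headers: { 'Lightstep-Access-Token': '<LS_ACCESS_TOKEN>' },
});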

OpenTelemetry Collector considerations

The OpenTelemetry Collector includes an OTLP receiver and an OTLP exporter that support both gRPC and HTTP transport. It’s worth noting that the HTTP Transport Extension for OTLP specifies default URL paths for a given target endpoint: v1/traces for telemetry carrying traces, v1/metrics for metrics, and v1/logs for logs. The Collector’s OTLP receiver and exporter implement their HTTP transport with these default URL paths in mind.

Additionally, the Collector’s OTLP receiver accepts telemetry data serialized as protobuf-encoded JSON.

As such, given a Collector with an IP address of 192.168.54.12 and the following default configuration, with gRPC listening on port 4317 and HTTP listening on port 4318:

receivers:
  otlp:
    protocols:
      grpc: # listens on port 4317 by default
      http: # listens on port 4318 by default

Trace data would be accepted at http://192.168.54.12:4318/v1/traces, metrics data would be accepted at http://192.168.54.12:4318/v1/metrics, and log data would be accepted at http://192.168.54.12:4318/v1/logs.

Likewise, the OTLP/HTTP exporter makes the same assumptions about the default URL paths for a given endpoint hostname. Because Cloud Observability implements a different set of URL paths for OTLP/HTTP than the OTEP document specifies, an OTLP/HTTP exporter sending telemetry data to Cloud Observability must use a configuration that specifies traces_endpoint and metrics_endpoint rather than endpoint:

OpenTelemetry Collector OTLP/HTTP exporter configuration to Cloud Observability public ingest:

exporters:
  otlphttp:
    traces_endpoint: https://ingest.lightstep.com:443/traces/otlp/v0.9
    metrics_endpoint: https://ingest.lightstep.com:443/metrics/otlp/v0.9
    headers: "lightstep-access-token": "<LS_ACCESS_TOKEN>"
    compression: gzip

OpenTelemetry Collector OTLP/HTTP exporter configuration to a microsatellite at 192.168.54.10 with an HTTP port of 55681:

exporters:
  otlphttp:
    traces_endpoint: http://192.168.54.10:55681/traces/otlp/v0.9
    metrics_endpoint: http://192.168.54.10:55681/metrics/otlp/v0.9
    headers: "lightstep-access-token": "<LS_ACCESS_TOKEN>"
    compression: gzip

If you are deploying the OpenTelemetry Collector Docker image, ensure that your headers are formatted as follows: headers: {"lightstep-access-token": "<LS_ACCESS_TOKEN>"}

Put it all together

Given an environment where the customer wishes to send telemetry data to Cloud Observability under the following conditions:

  • gRPC transport is not an option; the environment dictates that HTTP must be used as a transport mechanism

  • Instrumented code is able to export telemetry data via HTTP to a centralized OpenTelemetry Collector

  • A centralized OpenTelemetry Collector is in use at 192.168.54.12 with only its OTLP HTTP receiver and exporters configured with default options

  • The Collector’s OTLP/HTTP exporter sends telemetry data to a pool of Microsatellites behind an HTTP load balancer with address lb-ex.example.com and listening on port 55681

We may have a configuration pipeline that looks like the following:

Instrumentation configuration:

'use strict';
const { DiagLogLevel } = require('@opentelemetry/api');
const { lightstep, opentelemetry } = require('lightstep-opentelemetry-launcher-node');

// set access token or use LS_ACCESS_TOKEN environment variable
const accessToken = '<LS_ACCESS_TOKEN>';

const sdk = lightstep.configureOpenTelemetry({
  accessToken,
  spanEndpoint: 'http://192.168.54.12:4318/v1/traces',
  serviceName: 'local-ex-node',
  metricInterval: 3000,
  logLevel: DiagLogLevel.DEBUG,
});

// Start the SDK so instrumentation begins exporting through the Collector.
sdk.start();

OpenTelemetry Collector OTLP receiver configuration:

receivers:
  otlp:
    protocols:
      http:

OpenTelemetry Collector OTLP/HTTP exporter configuration:

exporters:
  otlphttp:
    traces_endpoint: http://lb-ex.example.com:55681/traces/otlp/v0.9
    metrics_endpoint: http://lb-ex.example.com:55681/metrics/otlp/v0.9
    headers: "lightstep-access-token": "<LS_ACCESS_TOKEN>"
    compression: gzip

To fully test this pipeline, run a curl command against the Collector’s OTLP receiver and verify that the trace shows up in the Cloud Observability SaaS:

curl -iv -H "Content-Type: application/json" http://192.168.54.12:4318/v1/traces -d @small_data.json

where small_data.json contains the following protobuf-encoded JSON payload:

small_data.json

{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "curl-test-otel-pipeline"
            }
          }
        ]
      },
      "scopeSpans": [
        {
          "spans": [
            {
              "traceId": "71699b6fe85982c7c8995ea3d9c95df2",
              "spanId": "3c191d03fa8be065",
              "name": "test-span",
              "kind": 1,
              "droppedAttributesCount": 0,
              "events": [],
              "droppedEventsCount": 0,
              "status": {
                "code": 1
              }
            }
          ],
          "scope": {
            "name": "local-curl-example"
          }
        }
      ]
    }
  ]
}
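If you prefer to run the same test from Node instead of curl, here is a minimal sketch (assuming Node 18+ so the global fetch API is available, and that small_data.json sits in the working directory):

'use strict';

const { readFile } = require('node:fs/promises');

async function main() {
  // Read the same payload used by the curl command above.
  const body = await readFile('small_data.json', 'utf8');

  // POST it to the Collector's OTLP/HTTP receiver.
  const res = await fetch('http://192.168.54.12:4318/v1/traces', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  });

  console.log(res.status, await res.text());
}

main().catch(console.error);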

Further reading and resources

See also

Quickstart: Collector for application data using Docker

Load balance Cloud Observability

Updated Mar 3, 2022