This topic walks you through installing an OpenTelemetry Collector with a sample configuration for both metrics and traces, which serves as a base for other guides.

While there are many ways to deploy a Collector on Kubernetes, we recommend using the OpenTelemetry Operator. The Operator installs custom resources into your cluster, allowing you to create OpenTelemetry Collectors and have the operator handle their coordination.
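For illustration, here is a minimal sketch of the `OpenTelemetryCollector` custom resource that the Operator watches. The name and the logging-only pipeline are placeholders, not the configuration used later in this guide:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: example-collector   # illustrative name
spec:
  mode: deployment          # single-replica "standalone" mode
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
```

When you apply a resource like this, the Operator creates and manages the corresponding Deployment and its Collector configuration for you.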

If you’re interested in learning more about the trade-offs between different deployment modes, see the OpenTelemetry documentation on Collector deployment.

Prefer tutorials? Follow our Learning Path for deploying an OpenTelemetry Collector.

To install the Collector, you need to (in this order):

  1. Install the OpenTelemetry Operator and Cert Manager
  2. Install the OpenTelemetry Collector

These instructions install the Collector to a Kubernetes cluster as a single-replica Kubernetes Deployment (also called “standalone” mode) using the Operator. For any questions, please contact your Customer Success representative.

Prerequisites

You must be able to run ValidatingWebhookConfigurations and MutatingWebhookConfigurations within your Kubernetes cluster; the Operator uses these to verify the Collector configuration.

Install the OpenTelemetry Operator and Cert Manager

To install and configure a Collector, you need to add the OpenTelemetry Operator to your cluster. The Operator requires an existing Cert Manager installation.

You can learn more about the Operator pattern in the Kubernetes documentation.

  1. Configure Helm for installation.
    
    % helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    % helm repo add jetstack https://charts.jetstack.io
    % helm repo update
    
  2. Install the Cert Manager.
    
    % helm install \
      cert-manager jetstack/cert-manager \
      --namespace cert-manager \
      --create-namespace \
      --version v1.8.0 \
      --set installCRDs=true
    
  3. Install the OpenTelemetry Operator.
    
    % helm install \
      opentelemetry-operator open-telemetry/opentelemetry-operator \
      -n opentelemetry-operator \
      --create-namespace
    
  4. Verify the components have been correctly installed.
    
    ## this should show “cert-manager” and “opentelemetry-operator” installed
    % helm list -A
    ## this command returns when the opentelemetry-operator pod is ready
    % kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=opentelemetry-operator -n opentelemetry-operator
    

Configure the OpenTelemetry Collector

Now that you’ve installed the OpenTelemetry Operator for Kubernetes, you can configure the Collector with Lightstep’s example Helm chart for a single-replica deployment.

  1. Clone the Lightstep otel-collector-charts repository; it contains the charts/collector-k8s folder used in the following steps.
    
    git clone https://github.com/lightstep/otel-collector-charts
    
  2. Set the shell variable LS_TOKEN to your Lightstep Access Token.
    
    export LS_TOKEN="<ACCESS_TOKEN>"
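
A mistyped or placeholder token is a common cause of missing data later. As a sketch, you can validate the variable before creating the secret (`check_token` is a hypothetical helper, not part of the chart):

```shell
#!/bin/sh
# Hypothetical guard: reject an empty token or the unchanged placeholder.
check_token() {
  if [ -z "$1" ] || [ "$1" = "<ACCESS_TOKEN>" ]; then
    echo "invalid"
    return 1
  fi
  echo "ok"
}

check_token "$LS_TOKEN" || echo "Set LS_TOKEN before continuing" >&2
```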
    
  3. Review the default configuration.
    
    cat ./charts/collector-k8s/values.yaml
    

    By default, the single-deployment Collector is configured to accept OTLP traces and metrics, and to scrape its own internal metrics using the Prometheus receiver. It uses the memory_limiter processor to prevent out-of-memory (OOM) kills, and the batch processor to improve performance.

    Be sure to change the Collector’s name in the values file to something more descriptive.
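
The configuration described above corresponds roughly to a Collector config along these lines. This is a simplified sketch, not the chart’s exact values.yaml; the component names are the standard Collector receiver, processor, and exporter names, and the memory_limiter thresholds are illustrative:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector     # scrape the Collector's own metrics
          scrape_interval: 30s
          static_configs:
            - targets: ["0.0.0.0:8888"]
processors:
  memory_limiter:
    check_interval: 1s
    limit_percentage: 75
    spike_limit_percentage: 20
  batch:
exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    headers:
      lightstep-access-token: "${LS_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlp]
    metrics:
      receivers: [otlp, prometheus]
      processors: [memory_limiter, batch]
      exporters: [otlp]
```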

  4. Configure a single-replica collector using Lightstep’s example Helm chart for the OpenTelemetry Operator for Kubernetes.
    
    kubectl create namespace opentelemetry
    kubectl create secret generic otel-collector-secret -n opentelemetry --from-literal=LS_TOKEN=$LS_TOKEN
    helm upgrade lightstep ./charts/collector-k8s -f ./charts/collector-k8s/values.yaml -n opentelemetry --install
    
  5. In Lightstep Observability, use a Notebook to verify that the metric otelcol_process_uptime is reporting to your Lightstep project. For more detail, see Verifying OpenTelemetry Installation.

If you don’t see this metric, you might not have set your token correctly. Check the logs of your Collector pod for access token not found errors:

    % kubectl logs -n opentelemetry <collector pod name>

If you see these errors, make sure that the token saved in your otel-collector-secret is correct and has write metrics permissions.

Collector troubleshooting

The default OTLP exporter in a Collector enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable other gRPC features. See the OpenTelemetry Collector documentation for a complete list of configuration parameters for the gRPC client.
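For example, the relevant settings can be adjusted directly on the exporter. This is a sketch; the endpoint and access-token header follow the Lightstep convention used earlier in this guide, and the values shown are the defaults you might change:

```yaml
exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    compression: gzip    # default; set to "none" to disable compression
    tls:
      insecure: false    # TLS stays enabled for Lightstep ingest
    headers:
      lightstep-access-token: "${LS_TOKEN}"
```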

In the event that you are unable to establish a gRPC connection to the Lightstep Observability platform, you can use the grpcurl tool to ensure connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project’s access token:

grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list

You should see the following output, or something similar:

grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService

If you do not see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.

For additional troubleshooting recommendations, see Troubleshooting Missing Data in Lightstep.

Next steps

Now that you’ve successfully installed an OpenTelemetry Operator and Collector in your cluster, there are many ways to tune your setup to your needs.