If you want to use Lightstep to collect infrastructure or application metrics in Prometheus format within a Kubernetes cluster, we recommend using the OpenTelemetry Collector with the Prometheus receiver. You can run the Collector alongside your existing Prometheus server infrastructure, or use the OpenTelemetry Collector with Lightstep to replace Prometheus entirely.

This topic covers using the OpenTelemetry Collector in your Kubernetes cluster to ingest metrics in Prometheus format (also known as OpenMetrics). It assumes that you are running a single-pod Prometheus server in a Kubernetes cluster.

Prefer tutorials? Follow our Learning Path on how to deploy a Collector to ingest Prometheus metrics.

You can also deploy the Collector using a Kubernetes DaemonSet or a StatefulSet.
Read Plan an OpenTelemetry Collector deployment to determine which method to use.

You need to complete these steps, in this order:

  1. Install the OpenTelemetry Operator and Cert Manager
  2. Install the OpenTelemetry Collector
  3. Configure the Collector to scrape the metrics you need

These instructions install the OpenTelemetry Collector to a Kubernetes cluster as a single replica Kubernetes Deployment (also called “standalone” mode) using the OpenTelemetry Operator. If you’re interested in running a high-availability Collector (multiple replicas), please contact your Customer Success representative.

You must be able to run ValidatingWebhookConfigurations and MutatingWebhookConfigurations within your Kubernetes cluster; these are used to verify the Collector configuration.

Prerequisites

Install the OpenTelemetry Operator and Cert Manager

To install and configure a collector, you need to add the Kubernetes Operator for OpenTelemetry to your cluster. The Operator requires a Cert Manager installation to be present.

You can learn more about the Operator pattern in the Kubernetes documentation.

  1. Configure Helm for installation.
    % helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
    % helm repo add jetstack https://charts.jetstack.io
    % helm repo update
    
  2. Install the Cert Manager.
    % helm install \
      cert-manager jetstack/cert-manager \
      --namespace cert-manager \
      --create-namespace \
      --version v1.8.0 \
      --set installCRDs=true
    
  3. Install the OpenTelemetry Operator.
    % helm install \
      opentelemetry-operator open-telemetry/opentelemetry-operator \
      -n opentelemetry-operator \
      --create-namespace
    
  4. Verify the components have been correctly installed.
    ## this should show “cert-manager” and “opentelemetry-operator” installed
    % helm list -A
    ## this will complete when the opentelemetry operator pod is finished
    % kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=opentelemetry-operator -n opentelemetry-operator
    

Configure the OpenTelemetry Collector to Ingest Prometheus Metrics

Now that you’ve installed the OpenTelemetry Operator for Kubernetes, you can configure the collector with Lightstep’s example Helm chart for a single-replica deployment.
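Under the hood, the chart asks the Operator to create an OpenTelemetryCollector resource. As a rough sketch of what that resource looks like (the field values here are illustrative assumptions; the chart’s values-deployment.yaml is the source of truth):

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: lightstep
  namespace: opentelemetry
spec:
  mode: deployment              # single-replica "standalone" mode
  env:
    - name: LS_TOKEN            # access token, mounted from the secret created in step 2 below
      valueFrom:
        secretKeyRef:
          name: otel-collector-secret
          key: LS_TOKEN
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs: []    # populated from scrape_configs.yaml later
    exporters:
      otlp:
        endpoint: ingest.lightstep.com:443
        headers:
          "lightstep-access-token": "${LS_TOKEN}"
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [otlp]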

  1. From the Lightstep prometheus-k8s-opentelemetry-collector repository, copy the collector_k8s folder to your existing directory.
    git clone https://github.com/lightstep/prometheus-k8s-opentelemetry-collector
    
  2. Set the shell variable LS_TOKEN to your Lightstep Access Token.
    export LS_TOKEN="<ACCESS_TOKEN>"
    
  3. Configure a single-replica collector using Lightstep’s example Helm chart for the OpenTelemetry Operator for Kubernetes.
    kubectl create namespace opentelemetry
    kubectl create secret generic otel-collector-secret -n opentelemetry --from-literal=LS_TOKEN=$LS_TOKEN
    helm upgrade lightstep ./collector_k8s -f ./collector_k8s/values-deployment.yaml -n opentelemetry --install
    
  4. In Lightstep Observability, use a Notebook to verify that the metric otelcol_process_uptime is reporting to your Lightstep project.

If you don’t see this metric, you might not have set your token correctly. Check the logs of your Collector pod for access token not found errors:

% kubectl logs -n opentelemetry <collector pod name>

If you see these errors, make sure that the token saved in your otel-collector-secret is correct and has write metrics permissions.

Configure the Collector to scrape a subset of metrics

Now that the Collector is available, you can run a Lightstep Docker image in the namespace of your Prometheus server to extract its scrape configuration and save it to a scrape_configs.yaml file. An existing Prometheus server is required for this step.
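For illustration, a hypothetical extracted scrape_configs.yaml might look like the following; the actual job names, intervals, and service discovery settings come from your own Prometheus configuration:

scrape_configs:
  - job_name: kubernetes-nodes
    scrape_interval: 30s
    kubernetes_sd_configs:
      - role: node
  # - job_name: unwanted-job          # comment out a whole job to stop scraping it
  #   static_configs:
  #     - targets: ["unwanted.default.svc:9090"]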

  1. Check that your Prometheus pod is healthy and all containers are running.
    Replace <namespace> and <prometheus pod name> with your Prometheus server’s namespace and pod name.
    % kubectl get pod -n <namespace> <prometheus pod name>
    
  2. Run the following command to find the IP address of the pod where your Prometheus server is running.
    % kubectl get pods -n <namespace> <prometheus pod name> -o jsonpath='{.status.podIP}'
    
  3. Extract and save your Prometheus configuration into scrape_configs.yaml.
    Replace <namespace> and <pod IP address> with your Prometheus server’s namespace and pod IP address.
    % kubectl run --rm --quiet -i -n <namespace> --image=lightstep/prometheus-config-helper:latest --env="PROMETHEUS_ADDR=<pod IP address>:9090" --restart=Never get-prometheus-scrape-configs > collector_k8s/scrape_configs.yaml
    

    Depending on the state of the Prometheus server, this may fail and leave the scrape_configs.yaml file empty. If it does, you may safely rerun the command.

  4. (Optional) Edit scrape_configs.yaml to exclude any scrape targets you want to omit.
    Comment out individual lines with # (as in the example excerpt above).
    Once complete, upgrade the Collector’s chart provided by Lightstep’s example repository to incorporate the new changes.
    # collector_k8s/values-deployment.yaml comes from cloned example repository
    % helm upgrade lightstep ./collector_k8s -f ./collector_k8s/values-deployment.yaml -n opentelemetry --install
    
  5. Verify that your scrape targets are appearing, using a Notebook.

Collector Troubleshooting

The default OTLP exporter in a Collector enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable other gRPC features. The OpenTelemetry Collector documentation contains a complete list of configuration parameters for the gRPC client.
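For example, compression and TLS on the exporter are controlled with settings like the following (a sketch using the standard otlp exporter options; adjust the values to match your network):

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    compression: gzip           # enabled by default; set to "none" to disable
    tls:
      insecure: false           # TLS is enabled by default
    headers:
      "lightstep-access-token": "${LS_TOKEN}"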

In the event that you are unable to establish a gRPC connection to the Lightstep Observability platform, you can use the grpcurl tool to ensure connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project’s access token:

grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list

You should see the following output, or something similar:

grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService

If you do not see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.

For additional troubleshooting recommendations, see Troubleshooting Missing Data in Lightstep.