Running the OpenTelemetry Collector as a DaemonSet

This topic covers how to deploy the OpenTelemetry Collector as a Kubernetes DaemonSet to replace Prometheus. A DaemonSet is a type of workload that ensures every node in a Kubernetes cluster runs an instance of a pod.

To scale the Collector with a DaemonSet, you configure each Collector instance to scrape application metrics only from the pods on its own node. Optionally, you can run another single Collector in Deployment mode to scrape static targets and infrastructure metrics if needed.

Prerequisites

This topic covers the steps to deploy two OpenTelemetry Collectors in a Kubernetes cluster: one in DaemonSet mode and, optionally, a second in Deployment mode. You will need to (in this order):

  1. Install the OpenTelemetry Collector DaemonSet
  2. Configure the OpenTelemetry Collector deployment to scrape infrastructure metrics and static targets, if needed

Install the OpenTelemetry Collector DaemonSet

  1. From the Lightstep prometheus-k8s-opentelemetry-collector repository, copy the collector_k8s folder into your working directory.

  2. Set the shell variable LS_TOKEN to your Lightstep Access Token.
    
    export LS_TOKEN="<ACCESS_TOKEN>"
    
  3. Install the OpenTelemetry Collector using the collector_k8s/values-daemonset.yaml values.
    
    kubectl create namespace opentelemetry
    kubectl create secret generic otel-collector-secret -n opentelemetry --from-literal=LS_TOKEN=$LS_TOKEN
    helm upgrade lightstep ./collector_k8s -f ./collector_k8s/values-daemonset.yaml -n opentelemetry --install
    
  4. Verify that the DaemonSet Collector is up and running. You should see one pod in the "ready" state for each node in your cluster.
    
    kubectl get daemonset -n opentelemetry
    

    This Collector scrapes all pods annotated with prometheus.io/scrape: true; the annotation is applied per pod. You can also set the prometheus.io/port annotation to scrape a port of your choice instead of the default.
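
    For reference, here is a minimal, hypothetical example of these annotations on an application's pod template. The app name, labels, image, and port are placeholders, not values from the chart:

    # Hypothetical Deployment showing the annotations the DaemonSet Collector
    # looks for; only the two prometheus.io/* annotations matter here.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
    spec:
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
          annotations:
            prometheus.io/scrape: "true"  # opt this pod in to scraping
            prometheus.io/port: "8080"    # scrape this port instead of the default
        spec:
          containers:
            - name: example-app
              image: example-app:1.0.0
              ports:
                - containerPort: 8080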

  5. In Lightstep Observability, use a Notebook to verify that the metric otelcol_process_uptime is reporting to your Lightstep project. You can group this metric by k8s.pod.name to see all pods that were created. You should expect one pod for each node in your Kubernetes cluster.

    [Image: Verifying OpenTelemetry Installation]

Additionally, verify that your applications are being scraped by the Collector with the metric scrape_samples_scraped grouped by service.name. You should see the number of samples scraped from each application. At this point, you can start querying your app metrics.

[Image: Verifying Targets Scraped]

If you don't see this metric, you might not have set your token correctly. Check the logs of your Collector pod for access token not found errors using kubectl logs -n opentelemetry <collector pod name>.
If you see these errors, make sure that the correct token is saved in the otel-collector-secret secret and that it has permissions to write metrics.

Next, you can configure the deployment Collector to scrape your infrastructure metrics.

(Optional) Configure the Deployment Collector to scrape your infrastructure metrics

The DaemonSet Collector has been configured to scrape application metrics from the pods on each node. To scrape static targets and infrastructure metrics, run a second OpenTelemetry Collector as a single-replica Deployment.

  1. Add your additional scrape targets to scrape_configs.yaml. This file should contain static targets that are not discovered by the Kubernetes service discovery in the DaemonSet Collector.
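
    As a rough sketch, a static scrape job in scrape_configs.yaml follows standard Prometheus scrape_config syntax. The job name and target address below are placeholders, and the exact layout of the file may differ in your copy of the chart:

    # Hypothetical static scrape job; replace the job name and target
    # with your own endpoints.
    - job_name: example-static-target
      scrape_interval: 60s
      static_configs:
        - targets:
            - example-service.example-namespace.svc.cluster.local:9090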

  2. Enable the secondary Collector Deployment by setting enabled to true in the collectors array element named deployment in the values-daemonset.yaml file.
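
    The change in values-daemonset.yaml might look roughly like the following; the surrounding keys are an assumption about the chart's collectors array and may differ in your chart version:

    # Assumed shape of the collectors array in values-daemonset.yaml;
    # only the enabled flag on the deployment element needs to change.
    collectors:
      - name: daemonset
        enabled: true
      - name: deployment
        enabled: true  # was false; enables the secondary Collector Deployment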
    Once complete, upgrade the Collector’s chart to incorporate the new changes.
    
    helm upgrade lightstep ./collector_k8s -f ./collector_k8s/values-daemonset.yaml -n opentelemetry --install
    
  3. Using Notebooks, verify that your applications are being scraped by the Collector with the metric scrape_samples_scraped grouped by service.name.