This tutorial demonstrates how to use the Kubernetes Operator for OpenTelemetry Collector to send infrastructure metrics, and optionally application traces, to Cloud Observability using a Helm chart already configured for Collector best practices. Cloud Observability recommends using the Kubernetes Operator when deploying the OpenTelemetry Collector in Kubernetes environments.
A prerequisite of this quickstart is a running Kubernetes cluster. It can be either a standard Kubernetes distribution or a managed Kubernetes distribution like Azure AKS, Google GKE, or AWS EKS. If you’d just like to test locally, we recommend using minikube.
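If you're starting from scratch locally, a minimal sketch of bringing up a cluster with minikube (assuming minikube is already installed):

# Start a local single-node Kubernetes cluster
minikube start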
For more on the Kubernetes Operator for OpenTelemetry Collector, see the official OpenTelemetry docs.
Run the following command to verify you are connected to a Kubernetes cluster.
kubectl cluster-info
If you see errors or cannot connect, follow the instructions from minikube or your cloud provider on authenticating with your cluster.
Next, verify Helm is installed.
helm version
Verify you are on Helm v3.
We recommend using Helm to manage dependencies and upgrades. However, if you cannot deploy Helm charts, you can use the helm template command to automatically generate Kubernetes manifests from an existing chart.
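For example, a minimal sketch of rendering the chart used later in this tutorial to plain manifests (the release name and clusterName value are placeholders):

# Render the chart to Kubernetes manifests without installing it
helm template kube-otel-stack lightstep/kube-otel-stack \
  -n default --set metricsCollector.clusterName=your-cluster-name \
  > kube-otel-stack-manifests.yaml

# Review, then apply the generated manifests
kubectl apply -f kube-otel-stack-manifests.yaml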
Run the following commands to add the required Helm repositories and pull the latest charts:
helm repo add jetstack https://charts.jetstack.io
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo add prometheus https://prometheus-community.github.io/helm-charts
helm repo add lightstep https://lightstep.github.io/otel-collector-charts
helm repo update
Next, install the cert-manager charts on your cluster. cert-manager provisions the TLS certificates the Operator's admission webhooks require to receive requests from the Kubernetes API server.
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.8.0 \
--set installCRDs=true
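Before continuing, you can confirm the cert-manager pods are healthy:

# All pods in the cert-manager namespace should reach Running
kubectl get pods -n cert-manager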
Next, install the OpenTelemetry Operator chart on your cluster. Version 0.35.1 of the Operator has been validated with the Cloud Observability Helm chart.
helm install \
opentelemetry-operator open-telemetry/opentelemetry-operator \
-n default --version 0.35.1
Verify that the charts you installed have a status of deployed:
helm list -A
Kubernetes has built-in support for hundreds of useful metrics that help teams understand the health of their containers, pods, nodes, workloads, and internal system components. Cloud Observability provides a Helm chart to automatically configure collectors to send these metrics to Cloud Observability.
Create a secret that holds your Cloud Observability Access Token.
export LS_TOKEN='<your-token>'
kubectl create secret generic otel-collector-secret -n default --from-literal="LS_TOKEN=$LS_TOKEN"
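To confirm the secret was created correctly:

# The secret should be listed in the default namespace
kubectl get secret otel-collector-secret -n default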
Install the kube-otel-stack chart. This chart automatically creates collectors that pull Kubernetes metrics and send them to your Cloud Observability project. We recommend also specifying the name of your cluster when installing the chart, which you can do by setting the clusterName variable:
helm install kube-otel-stack lightstep/kube-otel-stack -n default --set metricsCollector.clusterName=your-cluster-name
Verify the pods from the charts have been deployed with no errors:
kubectl get pods
You should see pods for a node exporter, the operator, kube-state-metrics, and multiple collectors.
In Cloud Observability, you can view your metrics in either a notebook or dashboard.
When using notebooks, you can click on any Kubernetes metric in the all telemetry dropdown.
Check the scrape_series_added metric first, which lets you know how many Kubernetes metrics are being ingested.
For dashboards, there are several pre-built dashboards that display Kubernetes metrics. For example, to see Pod metrics, in the Dashboard view, click Create a pre-built dashboard, and choose “K8S Pod Resources”.
You can also use the Operator to deploy a collector configured to send trace data to Cloud Observability. The chart configures a collector for tracing using best practices.
Run the following command to deploy a new Collector configured for trace data into the cluster. Replace your-cluster-name with the name of the cluster you are connected to.
helm upgrade kube-otel-stack lightstep/kube-otel-stack \
-n default --set tracesCollector.enabled=true \
--set tracesCollector.clusterName=your-cluster-name
Next, verify that the Collector configured for tracing has been deployed:
kubectl get services
You should see a new service named kube-otel-stack-traces-collector with ports 4317/TCP and 8888/TCP open.
Configure your OpenTelemetry-instrumented applications running in the cluster to export traces to the OTLP/gRPC endpoint kube-otel-stack-traces-collector:4317. More information on instrumenting applications is available in the Quickstart: Instrumentation documentation, or follow the instructions below to deploy the demo application.
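For example, a minimal sketch of a Deployment that points an already-instrumented application at the in-cluster collector using the standard OpenTelemetry environment variables (the application name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest # placeholder image
          env:
            # Standard OpenTelemetry SDK variables for OTLP/gRPC export
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://kube-otel-stack-traces-collector:4317"
            - name: OTEL_SERVICE_NAME
              value: "my-app"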
For languages like Java, .NET, Node, and Python, the Operator supports auto-instrumenting code running in the cluster. This lets you deploy SDKs automatically without any code changes. More details are available in the OpenTelemetry Community Docs.
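As a sketch, auto-instrumentation works by creating an Instrumentation resource and annotating workloads to opt in (the resource name here is illustrative):

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation     # illustrative name
spec:
  exporter:
    # Send telemetry from injected SDKs to the in-cluster collector
    endpoint: http://kube-otel-stack-traces-collector:4317

Pods then opt in with an annotation on their template, such as instrumentation.opentelemetry.io/inject-java: "true" for Java workloads.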
If you don’t have existing services that are instrumented, you can deploy a demo microservice environment maintained by the OpenTelemetry Community to your cluster that uses the collectors and configuration you deployed with the kube-otel-stack Helm chart.
Before proceeding, we recommend creating a separate sandbox or development project for testing with non-production data. If you create a new project, you will need to update the access token value you set in previous steps.
Create a new values.yaml file with the following content. This configures the OpenTelemetry Demo Helm chart to send metrics and traces to the collectors deployed by the kube-otel-stack chart:
opentelemetry-collector:
  config:
    exporters:
      otlp/traces:
        endpoint: kube-otel-stack-traces-collector:4317
        tls:
          insecure: true
      otlp/metrics:
        endpoint: kube-otel-stack-metrics-collector:4317
        tls:
          insecure: true
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlp/metrics]
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlp/traces]
Deploy the demo environment with your values.yaml file.
helm upgrade my-otel-demo open-telemetry/opentelemetry-demo --install -f values.yaml
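To browse the demo's web store locally, you can port-forward its frontend proxy; the service name below assumes the chart's default naming for a release called my-otel-demo:

# Expose the demo UI at http://localhost:8080
kubectl port-forward svc/my-otel-demo-frontendproxy 8080:8080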
After a few minutes, you should see new services, spans, and metrics in your Cloud Observability project.
You can also use the Operator to deploy a collector configured to send log data to Cloud Observability. The chart configures a collector for logging using best practices and will forward Kubernetes events and pod logs by default.
Run the following command to deploy a new Collector configured for log data into the cluster.
helm upgrade kube-otel-stack lightstep/kube-otel-stack \
-n default --set logsCollector.enabled=true
Next, verify that the Collector configured for logging has been deployed:
kubectl get services
You should see a new service named kube-otel-stack-logs-collector.
After a few minutes, you should see logs in Cloud Observability.
The default OTLP Exporter from a Collector enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable certain other gRPC features. This page contains a complete list of configuration parameters for the Collector gRPC client.
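For reference, a minimal sketch of these settings in a Collector configuration, using the defaults described above (the access token is read from the LS_TOKEN environment variable):

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    # gzip compression and TLS are the gRPC defaults
    compression: gzip
    headers:
      "lightstep-access-token": "${LS_TOKEN}"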
If you are unable to establish a gRPC connection to the Cloud Observability platform, you can use the grpcurl tool to ensure connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project’s access token:
grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list
You should see the following output, or something similar:
grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService
If you do not see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.
For additional troubleshooting recommendations, see Troubleshooting Missing Data in Cloud Observability.