This quickstart shows you how to get infrastructure metrics and logs, as well as application metrics and traces, from an application running in a Kubernetes environment. Kubernetes has built-in support for hundreds of useful metrics that help teams understand the health of their containers, pods, nodes, workloads, and internal system components. You can use your own app to generate application metrics and traces, or you can use the OpenTelemetry demo, a microservice environment maintained by the OpenTelemetry Community.

Log data is available as an early access feature. Contact your account representative to learn more.

If you use your own app, it must be instrumented with OpenTelemetry before application metrics and traces appear in Cloud Observability (metrics from Kubernetes can be ingested without any instrumentation).

We recommend creating a separate sandbox or development project for testing with non-production data.

Overview

You’ll use the OpenTelemetry Collector and the OpenTelemetry Operator for Kubernetes to send data to Cloud Observability. The Collector is a vendor-agnostic agent that receives, processes, and exports telemetry data. The Operator is an implementation of the Kubernetes Operator pattern that manages Collector instances for you. You install and configure both of these using the provided Helm charts.
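
Under the hood, a Collector pipeline simply wires receivers, processors, and exporters together. The Helm charts in this quickstart generate the full configuration for you, but as a rough sketch (the access token comes from the secret you create in Step 2), a pipeline that forwards traces to Cloud Observability looks something like this:

receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LS_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]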

In this quickstart, you’ll:

  • Step 1: Install Helm charts
  • Step 2: Install a pre-configured Collector to send trace, log, and metric data to Cloud Observability
  • Step 3: View metric, log, and trace data in Cloud Observability to diagnose an issue

Prerequisites

  • An application (if you’re not using the demo app) running in a Kubernetes cluster. It can be either a standard Kubernetes distribution or a managed Kubernetes distribution like Azure AKS, Google GKE, or AWS EKS. If you’d just like to test locally, we recommend using minikube.

  • Helm version 3 or later.

    We recommend using Helm to manage dependencies and upgrades. However, if you cannot deploy Helm charts, you can use the helm template command to generate Kubernetes manifests from an existing chart and apply them directly (see the example after this list).

  • A Cloud Observability account
  • A Cloud Observability access token for the Cloud Observability project you would like to use.
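
If you need the helm template route mentioned above, one possible approach is to render a chart’s manifests to a file and apply them with kubectl once the repositories from Step 1 have been added. The release name and output file below are placeholders:

helm template opentelemetry-operator open-telemetry/opentelemetry-operator -n default > operator-manifests.yaml
kubectl apply -n default -f operator-manifests.yaml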

Step 1: Install Helm charts

  1. Run the following command to verify you are connected to a Kubernetes cluster.

     kubectl cluster-info
    

    If you see errors or cannot connect, follow the instructions from minikube or your cloud provider on authenticating with your cluster.

  2. Verify Helm is installed and that you’re on version 3 or later.

     helm version
    
  3. Add the following Helm repositories and pull the latest charts.

     helm repo add jetstack https://charts.jetstack.io
     helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
     helm repo add prometheus https://prometheus-community.github.io/helm-charts
     helm repo add lightstep https://lightstep.github.io/otel-collector-charts
     helm repo update
    
  4. Install the cert-manager charts on your cluster. The Cert Manager manages certificates needed by the Operator to subscribe to in-cluster Kubernetes events.

     helm install \
         cert-manager jetstack/cert-manager \
         --namespace cert-manager \
         --create-namespace \
         --version v1.8.0 \
         --set installCRDs=true
    
  5. Install the OpenTelemetry Operator chart. The Operator automates the creation and management of collectors, autoscaling, code instrumentation, scraping metrics endpoints, and more.
     helm install \
         opentelemetry-operator open-telemetry/opentelemetry-operator \
         -n default
    
  6. Run the following command to verify that both charts deployed successfully, with a STATUS of deployed:
     helm list -A
    

    The output should look similar to the following:

     NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
     cert-manager            cert-manager    1               2023-04-26 16:30:31.994524008 +0000 UTC deployed        cert-manager-v1.8.0             v1.8.0     
     opentelemetry-operator  default         1               2023-04-26 16:30:59.478981048 +0000 UTC deployed        opentelemetry-operator-0.27.0   0.75.0
    

You’ve installed the prerequisites needed to successfully run Collectors and you’ve installed the OpenTelemetry Operator to Kubernetes.
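
If you want a second check beyond helm list, you can also confirm that the pods created by both charts are running:

kubectl get pods -n cert-manager
kubectl get pods -n default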

Step 2: Send telemetry data to Cloud Observability

You can use your own applications running in Kubernetes or you can install the OpenTelemetry demo.

Use an existing application

You use the Operator to deploy Collectors configured to send trace, log, and metric data to Cloud Observability. The Helm chart configures the Collectors using best practices.

The Operator, for languages like Java, .NET, Node, and Python, supports auto-instrumenting code running in clusters. This lets you deploy SDKs automatically without any code changes. More details are available in the OpenTelemetry Community Docs.
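
As a rough sketch of what auto-instrumentation involves (the resource name below is illustrative, and the exporter endpoint assumes the traces Collector deployed in the steps that follow), you create an Instrumentation resource and then annotate the workloads you want instrumented:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation    # illustrative name
  namespace: default
spec:
  exporter:
    # send SDK telemetry to the in-cluster traces Collector
    endpoint: http://kube-otel-stack-traces-collector.default.svc.cluster.local:4317

A workload then opts in with a pod annotation such as instrumentation.opentelemetry.io/inject-java: "true", with similar annotations available for .NET, Node, and Python.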

  1. Create a Kubernetes secret that holds your Cloud Observability access token.

     export LS_TOKEN='<your-token>'
     kubectl create secret generic otel-collector-secret -n default --from-literal="LS_TOKEN=$LS_TOKEN"
    
  2. Deploy the Collectors into the cluster. Replace your-cluster-name with the name of the cluster you are connected to.

     helm install kube-otel-stack lightstep/kube-otel-stack \
       -n default \
       --set tracesCollector.enabled=true \
       --set tracesCollector.clusterName=your-cluster-name \
       --set logsCollector.enabled=true \
       --set metricsCollector.enabled=true \
       --set metricsCollector.clusterName=your-cluster-name
    
  3. Verify that the Collectors are deployed:

     kubectl get services
    

    The output should look similar to the following, with the metric and trace collectors using ports 4317/TCP and 8888/TCP.

     NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
     kube-otel-stack-kube-state-metrics             ClusterIP   10.101.88.226    <none>        8080/TCP            50s
     kube-otel-stack-metrics-collector              ClusterIP   10.110.169.147   <none>        8888/TCP,4317/TCP   46s
     kube-otel-stack-metrics-collector-headless     ClusterIP   None             <none>        8888/TCP,4317/TCP   46s
     kube-otel-stack-metrics-collector-monitoring   ClusterIP   10.111.105.46    <none>        8888/TCP            46s
     kube-otel-stack-metrics-targetallocator        ClusterIP   10.100.174.219   <none>        80/TCP              46s
     kube-otel-stack-prometheus-node-exporter       ClusterIP   10.102.127.57    <none>        9100/TCP            50s
     kube-otel-stack-traces-collector               ClusterIP   10.104.137.13    <none>        8888/TCP,4317/TCP   47s
     kube-otel-stack-traces-collector-headless      ClusterIP   None             <none>        8888/TCP,4317/TCP   47s
     kube-otel-stack-traces-collector-monitoring    ClusterIP   10.107.94.27     <none>        8888/TCP            47s
     kube-otel-stack-logs-collector-monitoring      ClusterIP   10.102.66.68     <none>        8888/TCP            31s
     kubernetes                                     ClusterIP   10.96.0.1        <none>        443/TCP             13d
     opentelemetry-operator                         ClusterIP   10.102.57.178    <none>        8443/TCP,8080/TCP   16m
     opentelemetry-operator-webhook                 ClusterIP   10.104.31.140    <none>        443/TCP             16m
    
  4. Configure your OpenTelemetry-instrumented applications running in the cluster to export traces to the OTLP/gRPC endpoint kube-otel-stack-traces-collector:4317, as shown in the example below. For more information on instrumenting applications, see the Quickstart: Instrumentation documentation.
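
    For example, an SDK-instrumented service can usually be pointed at the in-cluster Collector through the standard OpenTelemetry environment variables. The following sketch for a Deployment’s container spec assumes the default namespace used above; the service name is illustrative:

     env:
       - name: OTEL_EXPORTER_OTLP_ENDPOINT
         value: "http://kube-otel-stack-traces-collector.default.svc.cluster.local:4317"
       - name: OTEL_EXPORTER_OTLP_PROTOCOL
         value: "grpc"
       - name: OTEL_SERVICE_NAME
         value: "my-service"    # illustrative service name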

Use the OpenTelemetry demo

The demo is a microservice environment maintained by the OpenTelemetry Community. You install and configure the Collector for the demo using a Helm chart.

We recommend creating a separate sandbox or development project for testing with non-production data. If you create a new project, you will need to update the access token value you set in previous steps.

You use the Operator to deploy a Collector configured to send trace and metric data to Cloud Observability. The Helm chart configures a collector using best practices.

  1. Create a Kubernetes secret that holds your Cloud Observability access token. Replace '<your-token>' with the access token copied from Cloud Observability.

     export LS_TOKEN='<your-token>'
     kubectl create secret generic otel-collector-secret -n default --from-literal="LS_TOKEN=$LS_TOKEN"
    
  2. Deploy the Collectors into the cluster. Replace your-cluster-name with the name of the cluster you are connected to.

     helm install kube-otel-stack lightstep/kube-otel-stack \
       -n default --set tracesCollector.enabled=true \
       --set tracesCollector.clusterName=your-cluster-name \
       --set metricsCollector.enabled=true \
       --set metricsCollector.clusterName=your-cluster-name
    
  3. Verify that the Collectors are deployed:

     kubectl get services
    

    The output should look similar to the following, with the metrics and trace collectors using ports 4317/TCP and 8888/TCP.

     NAME                                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
     kube-otel-stack-kube-state-metrics             ClusterIP   10.101.88.226    <none>        8080/TCP            50s
     kube-otel-stack-metrics-collector              ClusterIP   10.110.169.147   <none>        8888/TCP,4317/TCP   46s
     kube-otel-stack-metrics-collector-headless     ClusterIP   None             <none>        8888/TCP,4317/TCP   46s
     kube-otel-stack-metrics-collector-monitoring   ClusterIP   10.111.105.46    <none>        8888/TCP            46s
     kube-otel-stack-metrics-targetallocator        ClusterIP   10.100.174.219   <none>        80/TCP              46s
     kube-otel-stack-logs-collector-monitoring      ClusterIP   10.102.66.68     <none>        8888/TCP            31s
     kube-otel-stack-prometheus-node-exporter       ClusterIP   10.102.127.57    <none>        9100/TCP            50s
     kube-otel-stack-traces-collector               ClusterIP   10.104.137.13    <none>        8888/TCP,4317/TCP   47s
     kube-otel-stack-traces-collector-headless      ClusterIP   None             <none>        8888/TCP,4317/TCP   47s
     kube-otel-stack-traces-collector-monitoring    ClusterIP   10.107.94.27     <none>        8888/TCP            47s
     kubernetes                                     ClusterIP   10.96.0.1        <none>        443/TCP             13d
     opentelemetry-operator                         ClusterIP   10.102.57.178    <none>        8443/TCP,8080/TCP   16m
     opentelemetry-operator-webhook                 ClusterIP   10.104.31.140    <none>        443/TCP             16m
    
  4. Create a new values.yaml with the following content and save it to a local directory. This configures the OpenTelemetry Demo Helm chart to send metrics and traces to the collectors deployed by the kube-otel-stack chart:

     opentelemetry-collector:
       config:
         exporters:
           otlp/traces:
             endpoint: kube-otel-stack-traces-collector:4317
             tls:
               insecure: true
           otlp/metrics:
             endpoint: kube-otel-stack-metrics-collector:4317
             tls:
               insecure: true
         service:
           pipelines:
             metrics:
               receivers: [otlp]
               processors: [batch]
               exporters: [logging, otlp/metrics]
             traces:
               receivers: [otlp]
               processors: [batch]
               exporters: [logging, otlp/traces]
    
  5. Deploy the demo using the path to the saved values.yaml file.

     helm upgrade my-otel-demo open-telemetry/opentelemetry-demo --install -f values.yaml
    
  6. Expose the frontend proxy at port 8080.

     kubectl port-forward svc/my-otel-demo-frontendproxy 8080:8080
    
  7. Expose the OTLP port on the Collector at port 4317.

     kubectl port-forward svc/my-otel-demo-otelcol 4317:4317
    
  8. Verify the demo is running by visiting http://localhost:8080 in your browser.

    It may take a few minutes for the app to start running.
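
    If the demo doesn’t respond, check that its pods have started. List the pods in the namespace and wait for the ones whose names begin with my-otel-demo to report Running:

     kubectl get pods -n default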

View telemetry data in Cloud Observability

Cloud Observability offers pre-built dashboards you can use to start viewing your telemetry data. The Global Service Dashboard lets you view the health of all your services and their associated metrics.

  1. In Cloud Observability, click the Dashboard icon to open the Dashboard view.

  2. Click Create a pre-built dashboard.

  3. Select the Services tab and add the OOTB Generic Service Dashboard.

The Global Service dashboard uses template variables that allow you to view overall health, as well as health of a particular service. Choose a service (or services) from the $service dropdown to see data from a service. Scroll down the dashboard to see associated Kubernetes metrics.

Click the expand icon to view a chart in detail, including the query used to create it. In charts with span data, click a dot in the chart to see a full trace from the exemplar span.

Now that you have telemetry reporting to a dashboard, explore the other features in Cloud Observability that help you monitor and investigate your system.

Troubleshooting

The default OTLP exporter in a Collector enables gzip compression and TLS. Depending on your network configuration, you may need to enable or disable certain other gRPC features. The OpenTelemetry Collector documentation contains a complete list of configuration parameters for the Collector gRPC client.
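
For example, here is a sketch of the relevant exporter settings in a Collector configuration; the field names follow the Collector’s gRPC client settings, and the values should be adjusted to match your network:

exporters:
  otlp:
    endpoint: ingest.lightstep.com:443
    headers:
      "lightstep-access-token": "${LS_TOKEN}"
    compression: gzip      # set to "none" to disable compression
    tls:
      insecure: false      # keep TLS enabled for the public endpoint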

In the event that you are unable to establish a gRPC connection to the Cloud Observability platform, you can use the grpcurl tool to ensure connectivity from your network to our public satellites. Run the following command, replacing <YOUR_ACCESS_TOKEN> with your project’s access token:

grpcurl -H 'lightstep-access-token:<YOUR_ACCESS_TOKEN>' ingest.lightstep.com:443 list

You should see the following output, or something similar:

grpc.reflection.v1alpha.ServerReflection
jaeger.api_v2.CollectorService
lightstep.collector.CollectorService
lightstep.egress.CollectorService
opentelemetry.proto.collector.trace.v1.TraceService

If you don’t see this output, or the request hangs, then something is blocking gRPC traffic from transiting your network to ours. Please ensure that any proxies are passing through the lightstep-access-token header.

For additional troubleshooting recommendations, see Troubleshooting Missing Data in Cloud Observability.

See also

Recommended Collector configuration

Already using OpenTelemetry and the Collector?

Updated Apr 27, 2023