Cloud Observability can ingest traces from the Datadog Agent natively. You can choose to “tee” your traces to both Cloud Observability and Datadog, or send them only to Cloud Observability. This document covers the configuration for each use case.
Cloud Observability supports both the version 6 and version 7 tracks of the Datadog Agent. Major version 5 of the Datadog Agent is not supported.
Configure required and recommended attributes for Datadog tracing clients.
Cloud Observability's Datadog trace ingest requires the `lightstep.service_name` attribute to be set on all incoming spans. We also recommend setting the `lightstep.service_version` attribute. Cloud Observability does not currently support setting these tags on the Datadog Agent; instead, they must be configured at the tracing client level (e.g. in dd-trace-py). Each tracing client has various in-code configuration options, but the most consistent and straightforward way to set these tags is with the `DD_TRACE_GLOBAL_TAGS` environment variable:
```shell
DD_TRACE_GLOBAL_TAGS="lightstep.service_name:<service_name>,lightstep.service_version:<service_version>"
```
Note that `DD_TRACE_GLOBAL_TAGS` must be set on the Datadog tracing clients, not on the Agent.
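The value of `DD_TRACE_GLOBAL_TAGS` is a comma-separated list of `key:value` pairs. As a sketch, a small helper can assemble it from a service name and version; the function below is hypothetical (not part of any Datadog library), shown only to make the expected format concrete:

```python
# Hypothetical helper (not part of the Datadog client libraries) that builds
# the DD_TRACE_GLOBAL_TAGS value from a service name and version.
def build_global_tags(service_name: str, service_version: str) -> str:
    """Return a comma-separated key:value list for DD_TRACE_GLOBAL_TAGS."""
    tags = {
        "lightstep.service_name": service_name,
        "lightstep.service_version": service_version,
    }
    return ",".join(f"{key}:{value}" for key, value in tags.items())

print(build_global_tags("inventory", "1.4.2"))
# lightstep.service_name:inventory,lightstep.service_version:1.4.2
```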
To send tracing data to both Cloud Observability and Datadog, you need to add or update the `additional_endpoints` property. It takes a key-value pair, where the key is the endpoint of your Microsatellite environment (where the data is collected) and the value is the Cloud Observability access token. You can configure this using either YAML or environment variables.

In your `datadog.yaml` file, add the `additional_endpoints` property in the Basic Configuration section, and configure it with the URL for Cloud Observability trace ingest and your Cloud Observability access token. If you are currently using v2 of the Datadog API, you must also set `use_v2_api.series` to `false`.

If your environment uses public Microsatellites, use the values shown in the example for the trace endpoint. For on-premise Microsatellites, use the Microsatellite's IP address.
Public Microsatellites

```yaml
#########################
## Basic Configuration ##
#########################
api_key: <EXISTING_DATADOG_API_KEY>
use_v2_api.series: false

####################################
## Trace Collection Configuration ##
####################################
apm_config:
  additional_endpoints:
    'https://ingest.lightstep.com': # US data center
    #'https://ingest.eu.lightstep.com': # EU data center
      - <LIGHTSTEP_ACCESS_TOKEN>
```
On-Premise Microsatellites

```yaml
#########################
## Basic Configuration ##
#########################
api_key: <EXISTING_DATADOG_API_KEY>
use_v2_api.series: false

####################################
## Trace Collection Configuration ##
####################################
apm_config:
  additional_endpoints:
    '<MICROSATELLITE_ENDPOINT>':
      - <LIGHTSTEP_ACCESS_TOKEN>
```
To verify the configuration, check the Agent logs for a line containing `Forwarder started`; it lists the destinations the Agent is sending to, which should include both Datadog and Cloud Observability.

The Datadog Docker Agent, the containerized version of the Datadog Agent, supports configuration via environment variables.
If you are currently using v2 of the Datadog API, you must also set the `DD_USE_V2_API_SERIES` environment variable to `false`.

Set the `DD_APM_ADDITIONAL_ENDPOINTS` variable with the appropriate endpoint for your Microsatellite environment.
If your environment uses public Microsatellites, use the values shown in the example below. For on-premise Microsatellites, use the Microsatellite's IP address. Replace `LIGHTSTEP_ACCESS_TOKEN` with your value.
Docker Run

```shell
-e DD_APM_ADDITIONAL_ENDPOINTS='{"https://ingest.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # US data center
#-e DD_APM_ADDITIONAL_ENDPOINTS='{"https://ingest.eu.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # EU data center
```
Kubernetes

```yaml
env:
  - name: DD_APM_ADDITIONAL_ENDPOINTS
    value: '{"https://ingest.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # US data center
    #value: '{"https://ingest.eu.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # EU data center
```
Docker Compose

```yaml
services:
  datadog:
    environment:
      - 'DD_APM_ADDITIONAL_ENDPOINTS={"https://ingest.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # US data center
      #- 'DD_APM_ADDITIONAL_ENDPOINTS={"https://ingest.eu.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}' # EU data center
```
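In each variant above, `DD_APM_ADDITIONAL_ENDPOINTS` takes a JSON object mapping each ingest endpoint to a list of access tokens. As a sketch, a hypothetical helper (the function name is ours, not part of any Datadog tooling) can render that value:

```python
import json

# Hypothetical helper that renders the DD_APM_ADDITIONAL_ENDPOINTS value:
# a JSON object mapping a Microsatellite endpoint to a list of access tokens.
def additional_endpoints(endpoint: str, access_token: str) -> str:
    return json.dumps({endpoint: [access_token]})

value = additional_endpoints("https://ingest.lightstep.com", "LIGHTSTEP_ACCESS_TOKEN")
print(value)  # {"https://ingest.lightstep.com": ["LIGHTSTEP_ACCESS_TOKEN"]}
```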
To verify the configuration, check the Agent logs for a line containing `Forwarder started`; it lists the destinations the Agent is sending to, which should include both Datadog and Cloud Observability.

If you want to send traces only to Cloud Observability (and not Datadog), set the `api_key` property to your Cloud Observability access token, update `dd_url` and `apm_config.apm_dd_url` based on your Microsatellite environment, and disable the process agent. If your environment uses public Microsatellites, use the values shown in the example. For on-premise Microsatellites, use the Microsatellite's IP address.
Setting `dd_url` to Cloud Observability sends metric data to Cloud Observability instead of Datadog. If `dd_url` is not set to a Microsatellite endpoint, that data, along with your Cloud Observability access token, is sent to Datadog, which is strongly discouraged.
Set `api_key`, `dd_url`, and `apm_config.apm_dd_url` as follows:
Public Microsatellites

```yaml
api_key: <YOUR LIGHTSTEP_ACCESS_TOKEN>
dd_url: 'https://metricingest.lightstep.com' # US data center
# dd_url: 'https://metricingest.eu.lightstep.com' # EU data center
apm_config:
  apm_dd_url: 'https://ingest.lightstep.com' # US data center
  # apm_dd_url: 'https://ingest.eu.lightstep.com' # EU data center
process_config:
  enabled: disabled
```
On-Premise Microsatellites

```yaml
api_key: <YOUR LIGHTSTEP_ACCESS_TOKEN>
dd_url: '<MICROSATELLITE_ENDPOINT>'
apm_config:
  apm_dd_url: '<MICROSATELLITE_ENDPOINT>'
process_config:
  enabled: disabled
```
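Because a misconfigured `dd_url` would send data and your access token to Datadog, it can be worth sanity-checking the rendered configuration before deploying. The sketch below is hypothetical (the function and the `datadoghq.com` substring check are our assumptions, not part of the Datadog Agent):

```python
# Hypothetical sanity check for a Cloud-Observability-only configuration:
# every ingest URL should point at a Microsatellite, never at Datadog,
# since api_key now holds a Cloud Observability access token.
def endpoints_safe(config: dict) -> bool:
    urls = [
        config.get("dd_url", ""),
        config.get("apm_config", {}).get("apm_dd_url", ""),
    ]
    return all(url and "datadoghq.com" not in url for url in urls)

config = {
    "api_key": "<YOUR LIGHTSTEP_ACCESS_TOKEN>",
    "dd_url": "https://metricingest.lightstep.com",
    "apm_config": {"apm_dd_url": "https://ingest.lightstep.com"},
    "process_config": {"enabled": "disabled"},
}
print(endpoints_safe(config))  # True
```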
Updated Nov 9, 2021