This document describes UQL itself. Check out our cheat sheet, as well as the topic on building UQL queries specifically for spans.
The Lightstep Unified Query Language (UQL) allows you to:
- Retrieve and manipulate time series, which may be derived from metrics or spans (see time series queries). Metrics telemetry must be analyzed using time series queries, since metrics telemetry is time series data. Spans can be interpreted as time series, for example by counting them by start time (e.g. `spans count`).
- Fetch whole spans (see spans queries) and their metadata (for example, attributes and logs). Use span syntax when creating a dependency map.
UQL has the same structure whether it’s used to query for spans or time series. UQL queries are written as a pipeline made up of stages: `stage1 | stage2 | stage3`. Stages operate sequentially and independently: `stage1` produces some data, then `stage2` modifies that, and finally `stage3` operates on the output of `stage2`.
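The pipeline model can be sketched as left-to-right function composition. This is a minimal Python sketch of the composition idea, not UQL's implementation; the stage names and toy data are hypothetical.

```python
from functools import reduce

# Compose stages left-to-right: each stage consumes the previous stage's output.
def pipeline(*stages):
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Hypothetical stand-ins for stage1 | stage2 | stage3.
stage1 = lambda _: [1, 2, 3, 4]            # produces some data
stage2 = lambda xs: [2 * x for x in xs]    # modifies it
stage3 = sum                               # operates on stage2's output

query = pipeline(stage1, stage2, stage3)
assert query(None) == 20
```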
Time Series Queries
Syntax
The first stage of the pipeline produces a table, and subsequent stages operate on that table using table operations. Table operations may produce rows, combine rows, change the number of columns in a table, or even combine tables.
Tables
Each pipeline stage takes some number of tables as input and produces tables as output. A table has a column for each label on the data, a pair of timestamp columns, and a value column. For example, the query `metric cpu.usage | delta 1s | group_by [service-name], mean` produces a table with a column for the `service-name` label.
service-name | start-timestamp | limit-timestamp | cpu.usage |
---|---|---|---|
checkout | 779000 | 780000 | 23.1 |
checkout | 780000 | 781000 | 22.3 |
product | 779000 | 780000 | 5.5 |
product | 780000 | 781000 | 7.2 |
Table groups
Table Groups are sets of UQL tables that proceed through a pipeline together. Table Groups are what allow you to join tables and produce outputs like error ratios.
Here’s an example of a query that produces request error ratios per-service:
Query with a Table Group
```
(
  metric requests;
  metric errors
)
| delta 1m
| group_by [service], sum
| join right / (left + right)
```
The Table Group is created by the parallel fetch stages, `(metric requests; metric errors)`, and consists of two tables.
Distributing operations
Operations that can work with table groups may be written after the closing parenthesis of the group; they distribute into the group. The term distribute is used as in arithmetic: `(a + b) * c == a * c + b * c`. You can also think of this as operating on each stage concurrently. These two pipelines are equivalent because the `delta` and `group_by` stages are distributed into the table group.
Distributing stages
```
(
  metric requests | delta 1m | group_by [hostname], sum;
  metric errors | delta 1m | group_by [hostname], sum
) | join right / left

(
  metric requests;
  metric errors
) | delta 1m | group_by [hostname], sum | join right / left
```
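Distributing a stage into a table group is equivalent to applying the stage to each member table. A minimal Python sketch of that equivalence (the toy `delta` here just differences successive points; it is not UQL's aligner):

```python
# Applying a stage to each member of a table group...
def distribute(stage, table_group):
    return [stage(table) for table in table_group]

# ...is the same as writing the stage inside each pipeline.
def delta(series):
    """Toy delta: difference between successive points."""
    return [b - a for a, b in zip(series, series[1:])]

requests, errors = [10, 11, 13], [5, 9, 9]
assert distribute(delta, [requests, errors]) == [delta(requests), delta(errors)]
assert distribute(delta, [requests, errors]) == [[1, 2], [4, 0]]
```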
Data types
The values in a table’s columns are always one of these types:
- string: a UTF-8 string
- integer: a 64-bit integer
- float: a 64-bit IEEE-754 float
- boolean: either `true` or `false`
- distribution: a statistical distribution. For example, the latency of requests over a given time period may be represented as a distribution.
- timestamp: a Unix epoch timestamp, in microseconds
Arguments to table operations may be any of the table column types, as well as:
- duration: an integer followed by a duration specifier. For example: 10m. Valid duration specifiers are:
  - s for seconds
  - m for minutes
  - h for hours
  - d for days
  - w for weeks
- table identifiers: in `join`s, the tables are called `left` and `right`.
- column identifiers: the label of a table’s column, for example: `region` or `hostname`.
- value expressions: combinations of identifiers, operators, and literals that are evaluated to produce new values in a table. For example, in a `join` operation: `join left / (1 + left + right)`.
- filter expressions: expressions that operate on column names and values to produce a boolean result. For example: `region == "us-east-1" || region == "us-west-1"`.
- reducers: mathematical functions that are applied to time series values. Examples include `mean`, `max`, and `count`.
- fetch type: specifies what span-derived time series should be fetched. Valid fetch types are:
  - count: produces a delta float time series counting the number of spans
  - latency: produces a delta distribution of the latency of spans
  - custom-numeric-attribute: produces a delta distribution of the values of custom-numeric-attribute
Data literals
Numeric literals are written like 110 or 110.33.
Boolean literals are `true` and `false`.
String literals can be written in three ways to help with escaping symbols. The string literal test \ "'123'" can be written:
- With double quotes: `"test \\ \"'123'\""`. If you want to write a `\` or `"` literal in the string, you’ll need to escape it with a preceding `\`.
- With single quotes: `'test \\ "\'123\'"'`. If you want to write a `\` or `'` literal in the string, you’ll need to escape it with a preceding `\`.
- With backticks (`` ` ``): `` `test \ "'123'"` ``. All characters that appear in a raw string are interpreted literally, so there’s no need to escape anything. The one caveat is that there’s no way to put a backtick character in these strings.
Metric kinds
Metrics may be one of the following kinds. Kinds describe the relationship between successive points in the time series.
- Delta: a measurement of the change in the metric from point to point. For example, the number of requests is usually a delta metric. Over a 5-second period, you might see a point every second like 1, 11, 2, 5, 6, which would indicate that 25 requests occurred in that timeframe.
- Gauge: an instantaneous measurement. There is no relationship between successive points. For example, the setting of a rate limit or other configuration value is often a gauge.
Certain reducers are only valid for certain kinds of metrics, as indicated below.
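The delta/gauge distinction can be checked with simple arithmetic (a sketch using the toy numbers above, not real telemetry):

```python
# Delta points record per-interval change, so summing them over a window
# gives a meaningful total.
delta_points = [1, 11, 2, 5, 6]     # requests counted each second
assert sum(delta_points) == 25      # 25 requests over the 5-second window

# Gauge points are instantaneous; successive points are unrelated, so
# summing them is meaningless -- the latest value is usually what you want.
gauge_points = [100, 100, 150]      # e.g. a rate-limit setting over time
assert gauge_points[-1] == 150
```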
Pipeline stages
Documentation format
Each pipeline stage is described using the following structure:
Syntax
```
stage_name <required-arg1>, <[required-list-arg2]> [, optional-arg3 [, optional-arg4]]
```
stage_name performs some operation on the input tables
Tables | |
---|---|
Input | A single aligned table. |
Output | A single aligned table. |
Arguments
Name | Type | Required | Description | Default |
---|---|---|---|---|
required-arg1 | string | required | It’s an argument to the stage! | n/a |
required-list-arg2 | string… | required | This is a list of arguments, formatted like so: ["item0", "item1"] |
n/a |
optional-arg3 | integer | optional | This third argument is an optional argument. | If there’s a default, it’ll be provided in this column |
optional-arg4 | boolean | optional | This is also an optional argument, but for it to be set, arg3 must also be set. | If there’s a default, it’ll be provided in this column |
Fetch operations
Fetch operations take zero tables as input and produce a single table as output. They must come first in a pipeline.
metric
Syntax
```
metric <metric-name>
```
metric fetches the metric named `metric-name`.
Tables | |
---|---|
Input Tables | None |
Output Tables | A single, unaligned table. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
metric-name | string | required | The name of the stored metric |
constant
Syntax
```
constant <literal-value>
```
constant generates a gauge time series where all points have value `literal-value`. The generated points are aligned to the global output period, and the time series has no labels. This can be used to plot horizontal lines on charts.
Tables | |
---|---|
Input Tables | None |
Output Tables | A single table, aligned to the global output period. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
literal-value | literal | required | The value to plot. |
spans
Syntax
```
spans <fetch-type>
```
spans fetches a scalar time series of the count of spans, or a distribution of either the spans’ latency or the values of a numeric attribute.
Tables | |
---|---|
Input Tables | None |
Output Tables | A single, unaligned table. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
fetch-type | string | required | The name of the span-derived time series |
Aligners
Aligners take unaligned data and group it into a time series with points at regular intervals. They perform time aggregation, also called temporal aggregation. Aligners should generally follow immediately after a fetch operation. Aligners take two arguments: the input window and the output period.
If there is more than one aligner stage in a query, all but the final aligner must provide both input window and output period arguments.
Lightstep supports these aligners:
delta
Syntax
```
delta [input-window [, output-period]]
```
delta calculates the change in the metric across the input period, outputting a point every output period. The input-window functions as a smoothing argument. If the output-period is not provided, the Lightstep-calculated global output period is used.
Tables | |
---|---|
Input Tables | Any number of tables |
Output Tables | The same number of tables, aligned to the output period. |
Arguments
Name | Type | Required | Description | Default |
---|---|---|---|---|
input-window | duration | optional | The duration to incorporate into a single point calculation. | The global output period |
output-period | duration | optional | The period between output points. | The global output period |
rate
Syntax
```
rate [input-window [, output-period]]
```
rate calculates the change per-second of the input tables, outputting a point every output period. The input-window functions as a smoothing argument: each output point is computed by taking the mean of the points within the input window. If the output-period is not provided, the Lightstep-calculated global output period is used.
Tables | |
---|---|
Input Tables | Any number of tables |
Output Tables | The same number of tables, aligned to the output period. |
Arguments
Name | Type | Required | Description | Default |
---|---|---|---|---|
input-window | duration | optional | The duration to incorporate into a single point calculation | The global output period |
output-period | duration | optional | The period between output points. | The global output period |
latest
Syntax
```
latest [input-window [, output-period]]
```
latest aligns gauge metrics to the output period by using the nearest point before an output point. The input-window tells `latest` how far to look back for input points. If the output-period is not provided, the Lightstep-calculated global output period is used.
Tables | |
---|---|
Input Tables | Any number of tables with kind gauge |
Output Tables | The same number of tables, aligned to the output period. |
Arguments
Name | Type | Required | Description | Default |
---|---|---|---|---|
input-window | duration | optional | The “lookback” window for input points | The global output period |
output-period | duration | optional | The period between output points | The global output period |
Example
Given this input data:
start-timestamp | limit-timestamp | rate-limit |
---|---|---|
779000 | 779000 | 101 |
780500 | 780500 | 102 |
782100 | 782100 | 103 |
783000 | 783000 | 104 |
784000 | 784000 | 105 |
`latest 1s, 1s` will produce this output:
start-timestamp | limit-timestamp | rate-limit |
---|---|---|
779000 | 779000 | 101 |
780000 | 780000 | 101 |
781000 | 781000 | 102 |
782000 | 782000 | 102 |
783000 | 783000 | 104 |
784000 | 784000 | 105 |
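The table above can be reproduced with a toy version of the `latest` rule: for each output timestamp, take the nearest input point at or before it, within the lookback window. This is an illustrative sketch, not Lightstep's implementation; the timestamps and lookback are simplified to match the toy table.

```python
def latest_align(points, out_timestamps, lookback):
    """Toy `latest` aligner: nearest prior point within the lookback window."""
    out = []
    for t in out_timestamps:
        window = [(ts, v) for ts, v in points if t - lookback <= ts <= t]
        if window:
            out.append((t, max(window)[1]))  # max by timestamp = nearest prior
    return out

points = [(779000, 101), (780500, 102), (782100, 103), (783000, 104), (784000, 105)]
aligned = latest_align(points, range(779000, 785000, 1000), lookback=1500)
assert [v for _, v in aligned] == [101, 101, 102, 102, 104, 105]
```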
reduce
Syntax
```
reduce [input-window [, output-period],] <reducer>
```
reduce performs temporal aggregation for a metric using the supplied reducer.
Tables | |
---|---|
Input Tables | Any number of tables of any kind |
Output Tables | The same number of tables, aligned to the output period. |
Arguments
Name | Type | Required | Description | Default |
---|---|---|---|---|
input-window | duration | optional | The duration to incorporate into a single point calculation | The global output period |
output-period | duration | optional | The period between output points. | The global output period |
reducer | reducer | required | The reducer to apply to points when computing the output | n/a, required |
Example
`reduce 1h, max` would produce the maximum value of the metric over the hour before the point.
Global output period
The global output period controls how many data points a UQL query produces per time series. The global output period isn’t something that’s set from within the language. It’s automatically set by the Lightstep UI or whatever interface is making the query. To show why the UI needs to have control over the time aggregation of a query, here’s an example.
In the Lightstep frontend, if you make a UQL query over the past 60 minutes, the UI sets the global output period to 30 seconds. This means that there will be 120 data points per time series. Now let’s say you want to make that same query, but this time over the last day. When you change the query interval, the UI will adjust the global output period to 10 minutes. There will now be 144 data points in your chart per time series. This increase is necessary because if a query spanning the last day produced a data point every 30 seconds, madness would ensue! There would be 2,880 data points per time series, which couldn’t be plotted easily.
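The point counts follow directly from dividing the query range by the output period:

```python
# Points per series = query range / global output period.
MINUTE, DAY = 60, 86_400  # seconds

assert (60 * MINUTE) // 30 == 120          # 60-minute query at a 30 s period
assert DAY // (10 * MINUTE) == 144         # 1-day query at a 10-minute period
assert DAY // 30 == 2880                   # 1-day query at 30 s: far too many points
```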
The output of a UQL query must always be aligned to the global output period. In other words, neighboring points must always be global-output-period-apart. In order to meet this requirement, you must end a query with an aligner without an output period set, causing this aligner to default to the global output period.
For example, the query below isn’t allowed, because the trailing `rate` uses an output period of 1 minute.
Illegal Query
```
metric requests | rate 1m, 1m
```
It’s important to note that even if the global output period were set to 1 minute, the above query would still be considered illegal. Although its output period matches the global output period, if the global output period were to change to 2 minutes the query would produce incorrectly aligned data. Legal queries must produce data aligned to the global output period regardless of what it is set to.
The query below is legal, because the trailing `reduce` doesn’t specify an output period, which causes the global output period to be used instead:
Legal Query
```
metric requests | reduce 1m, sum
```
Series operations
Series operations change tables by dropping or combining rows, or by joining tables together. UQL has the following series operations:
filter
Syntax
```
filter <predicate>
```
filter drops rows from the table that do not match the predicate. The predicate is generally based on the label columns so that a complete series is either kept or filtered out.
Tables | |
---|---|
Input Tables | Any number of tables of any kind |
Output Tables | The same number of tables, potentially with fewer rows. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
predicate | filter expression | required | The expression that’s evaluated to decide whether to drop a row. A row is kept when the expression evaluates to true. |
Examples
All `requests` time series where `service` is `checkout` and `zone` is `us-east-1`:

```
metric requests | delta | filter service == "checkout" && zone == "us-east-1"
```
All `requests` time series where the host name isn’t `i-12345`:

```
metric requests | delta | filter host != "i-12345"
```
All `requests` time series where the host name starts with `i-123`:

```
metric requests | delta | filter host =~ "i-123.*"
```
All `responses` time series where the integer-valued `response_code` label is in the 400s:

```
metric responses | delta | filter response_code >= 400 && response_code < 500
```
Count the number of spans where the `host` label is defined:

```
spans count | filter defined(host) | delta | group_by [], sum
```
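The filter rule is simply "keep the row when the predicate evaluates to true against its labels". A minimal Python sketch mirroring the first example above (toy rows, not real telemetry):

```python
# Toy `filter`: evaluate the predicate against each row's labels; keep matches.
rows = [
    {"service": "checkout", "zone": "us-east-1"},
    {"service": "checkout", "zone": "us-west-1"},
    {"service": "product",  "zone": "us-east-1"},
]

# Mirrors: filter service == "checkout" && zone == "us-east-1"
keep = lambda r: r["service"] == "checkout" and r["zone"] == "us-east-1"

assert [r for r in rows if keep(r)] == [{"service": "checkout", "zone": "us-east-1"}]
```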
group_by
Syntax
```
group_by [<label-list>], <reducer>
```
group_by combines rows with the same timestamps and the same values in the label columns within the `[]`, using the provided reducer.
Tables | |
---|---|
Input Tables | Any number of tables of any kind |
Output Tables | The same number of tables, potentially with fewer rows. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
label-list | identifier… | required | 0 or more label keys to group metric time series by. An empty set means to group all rows together. |
reducer | reducer | required | The reducer used to combine the values of the rows. |
Examples
Given this input data, produced by `metric rate-limit | latest`:
region | zone | hostname | start-timestamp | limit-timestamp | rate-limit |
---|---|---|---|---|---|
us-east-1 | us-east-1-bos-1a | abc.def | 779000 | 779000 | 100 |
us-east-1 | us-east-1-bos-1a | ghi.jkl | 779000 | 779000 | 101 |
us-east-1 | us-east-1-mia-1a | abc.def | 779000 | 779000 | 99 |
us-west-2 | us-west-2-lax-1a | abc.def | 779000 | 779000 | 102 |
us-west-2 | us-west-2-lax-1a | ghi.jkl | 779000 | 779000 | 100 |
`metric rate-limit | latest | group_by [region, zone], max` produces:
region | zone | start-timestamp | limit-timestamp | rate-limit |
---|---|---|---|---|
us-east-1 | us-east-1-bos-1a | 779000 | 779000 | 101 |
us-east-1 | us-east-1-mia-1a | 779000 | 779000 | 99 |
us-west-2 | us-west-2-lax-1a | 779000 | 779000 | 102 |
An empty list of group_by fields means group all rows together, stripping off all attributes and producing a single time series. `metric rate-limit | latest | group_by [], max` produces:
start-timestamp | limit-timestamp | rate-limit |
---|---|---|
779000 | 779000 | 102 |
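The grouping rule can be sketched in a few lines of Python: rows sharing the listed label values are combined with the reducer, and labels outside the list are dropped. This is a toy model of the semantics (single timestamp, toy values from the table above), not Lightstep's implementation.

```python
from collections import defaultdict

def group_by(rows, labels, reducer):
    """Toy group_by: bucket rows by the listed label values, then reduce."""
    groups = defaultdict(list)
    for tags, value in rows:
        groups[tuple(tags[l] for l in labels)].append(value)
    return {key: reducer(vals) for key, vals in groups.items()}

rows = [  # the rate-limit table above, at one timestamp
    ({"region": "us-east-1", "zone": "us-east-1-bos-1a", "hostname": "abc.def"}, 100),
    ({"region": "us-east-1", "zone": "us-east-1-bos-1a", "hostname": "ghi.jkl"}, 101),
    ({"region": "us-east-1", "zone": "us-east-1-mia-1a", "hostname": "abc.def"}, 99),
    ({"region": "us-west-2", "zone": "us-west-2-lax-1a", "hostname": "abc.def"}, 102),
    ({"region": "us-west-2", "zone": "us-west-2-lax-1a", "hostname": "ghi.jkl"}, 100),
]

assert group_by(rows, ["region", "zone"], max) == {
    ("us-east-1", "us-east-1-bos-1a"): 101,
    ("us-east-1", "us-east-1-mia-1a"): 99,
    ("us-west-2", "us-west-2-lax-1a"): 102,
}
assert group_by(rows, [], max) == {(): 102}  # empty list: one series
```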
join
Syntax
```
join <join-expression> [left-default [, right-default]]
```
join merges two tables into a single table using the `join-expression`. A join expression has two identifiers, `left` and `right`, that represent the values of rows in the input tables.
Join uses columns in the left and right tables with identical names for joining, and join matches all columns with identical names. There is a row for the cross-product of all the combinations of values in the join columns. This can result in a very large table. For this reason, it’s usually useful to reduce the number of columns in tables using `group_by` before a `join`. In general, using one or two columns for joining is a good rule of thumb.
Group-By before Join
```
(metric cpu.usage; metric cpu.limit) | delta | group_by [service], mean | join left / right * 100
```
The output tables of the group_by stage will have only one label column: `service`.
If there is a combination in one of the tables that is not present in the other, the default value argument is used.
Default values for Join
```
(metric cpu.usage; metric cpu.limit) | latest | group_by [service], mean | join left / right * 100, 0, 1
```
Tables | |
---|---|
Input Tables | Two tables |
Output Tables | One table |
Arguments
Name | Type | Required | Description |
---|---|---|---|
join-expression | value expression | Required | An expression that describes how to combine the points in the left and right tables. |
left-default | literal | Optional | The value to use for left if there are no rows in the left table for this join. |
right-default | literal | Optional | The value to use for right if there are no rows in the right table for this join. |
Example
Given these two input tables:
hostname | start-timestamp | limit-timestamp | request-count |
---|---|---|---|
abc.def | 779000 | 781000 | 10 |
abc.def | 781000 | 783000 | 8 |
abc.def | 783000 | 785000 | 6 |
ghi.jk | 779000 | 781000 | 2 |
ghi.jk | 781000 | 783000 | 4 |
ghi.jk | 783000 | 785000 | 8 |

hostname | start-timestamp | limit-timestamp | error-count |
---|---|---|---|
abc.def | 779000 | 781000 | 1 |
abc.def | 781000 | 783000 | 0 |
abc.def | 783000 | 785000 | 3 |
ghi.jk | 779000 | 781000 | 0 |
ghi.jk | 781000 | 783000 | 0 |
ghi.jk | 783000 | 785000 | 2 |
`join right / left` produces:
hostname | start-timestamp | limit-timestamp | value |
---|---|---|---|
abc.def | 779000 | 781000 | 0.1 |
abc.def | 781000 | 783000 | 0 |
abc.def | 783000 | 785000 | 0.5 |
ghi.jk | 779000 | 781000 | 0 |
ghi.jk | 781000 | 783000 | 0 |
ghi.jk | 783000 | 785000 | 0.25 |
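The matching rule can be sketched as a dictionary lookup keyed by the shared columns (hostname plus timestamps here), with the join expression evaluated per match. A toy sketch with a few of the rows above, not Lightstep's implementation:

```python
# Toy `join right / left`: match rows on identically named columns, then
# evaluate the join expression for each matched pair.
left = {("abc.def", 779000): 10, ("abc.def", 781000): 8,
        ("ghi.jk", 779000): 2,  ("ghi.jk", 783000): 8}
right = {("abc.def", 779000): 1, ("abc.def", 781000): 0,
         ("ghi.jk", 779000): 0, ("ghi.jk", 783000): 2}

joined = {k: right[k] / left[k] for k in left.keys() & right.keys()}
assert joined[("abc.def", 779000)] == 0.1
assert joined[("ghi.jk", 783000)] == 0.25
```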
Joining more than two tables
Using the `with` keyword and naming your join inputs allows joining two or more tables into a single table.
Percentage of cpu limits currently being used, broken down by container and pod
```
with
  usage = metric cpu.usage | latest | group_by [container, pod], mean;
  limit = metric cpu.limit | latest | group_by [container, pod], mean;
join usage / limit * 100, usage=0, limit=1
```
Mean error response percentage by service
```
with
  client_errors = metric responses | delta | filter response_class == "4xx" | group_by [service], mean;
  server_errors = metric responses | delta | filter response_class == "5xx" | group_by [service], mean;
  total = metric responses | delta | group_by [service], mean;
join ((client_errors + server_errors) / total) * 100
```
top
Syntax
```
top [[label-list],] <k>, <reducer> [, window]
```
top returns the k time series with the highest value as computed by the supplied reducer and time window. All other time series are filtered out. If labels are specified, top returns k time series for each combination of label values. As an example:
Grouped Top-K
```
metric cpu.usage | top [service, region], 10, mean, 10m
```

This returns up to ten time series for each combination of service and region in the table; the ranking is based on the mean over the last ten minutes of the query.
The inverse of top is bottom.
Tables | |
---|---|
Input Tables | One table |
Output Tables | One table |
Arguments
Name | Type | Required | Description |
---|---|---|---|
label-list | identifier… | Optional | A list of label keys to group by when performing the reduction. |
k | integer | Required | The number of time series to place in the output table. |
reducer | reducer | Required | The reducer to use for ranking time series. |
window | duration | Optional | The window ending at the query’s limit to perform the reduction on. If this argument is omitted, the entire time range of the query is included. For example, a value of 10m means “compute over the last ten minutes of the query”. |
Examples
Top five time series where the CPU usage was the highest on average over the duration of the query:
```
metric kubernetes.cpu.usage.total | latest | top 5, mean
```
For each region and environment, the five time series where the CPU usage was the highest on average over the last hour of the query:
```
metric kubernetes.cpu.usage.total | latest | top [region, environment], 5, mean, 1h
```
bottom
Syntax
```
bottom [[label-list],] <k>, <reducer> [, window]
```

Bottom is the exact inverse of top. It returns the k time series with the lowest values according to the provided reducer. For details about ranking by label combinations and time windows, see `top`.
Tables | |
---|---|
Input Tables | One table |
Output Tables | One table |
Arguments
Name | Type | Required | Description |
---|---|---|---|
label-list | identifier… | Optional | A list of label keys to group by when performing the reduction. |
k | integer | Required | The number of time series to place in the output table. |
reducer | reducer | Required | The reducer to use for ranking time series. |
window | duration | Optional | The window ending at the query’s limit to perform the reduction on. If this argument is omitted, the entire time range of the query is included. For example, a value of 10m means “compute over the last ten minutes of the query”. |
Example
Bottom five time series where the CPU usage was lowest on average over the duration of the query.
```
metric kubernetes.cpu.usage.total | latest | bottom 5, mean
```
point
Syntax
```
point <value-expression>
```
Point evaluates the `value-expression` for each point in the input table and stores the result in a row in the output table. The value expression can make use of one identifier: `value`.
Tables | |
---|---|
Input Tables | One table |
Output Tables | One table |
Arguments
Name | Type | Required | Description |
---|---|---|---|
value-expression | value expression | Required | An expression describing how to transform the points in the input table. |
Examples
Time series of the 95th percentile of span latency for the `warehouse` service:

```
spans latency | delta | group_by [], sum | point percentile(value, 95)
```
Boolean-valued time series where a point is true if the value of `my.metric` is greater than 5:

```
metric my.metric | latest | point value > 5
```
Squared value of each point in `my.metric`:

```
metric my.metric | latest | point pow(value, 2)
```
Determine how long until SSL certificates expire:
```
metric ssl_expiration_unix_secs | point value - timestamp(value) | latest
```
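The `point` stage is a per-point map over values. A toy Python sketch of the first two examples above (illustrative only):

```python
def point(values, expr):
    """Toy `point`: evaluate the value expression against each point's value."""
    return [expr(v) for v in values]

# point pow(value, 2)
assert point([1.0, 2.0, 3.0], lambda v: pow(v, 2)) == [1.0, 4.0, 9.0]
# point value > 5
assert point([4.0, 6.0], lambda v: v > 5) == [False, True]
```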
time_shift
Syntax
```
time_shift <shift-duration>
```
time_shift moves each point in the time series forwards by the amount of time specified in the duration. This is equivalent to adding the duration to the timestamp of each point. time_shift does not have to immediately follow a fetch stage, but it must come before an aligner or `group_by` in the UQL query.
Tables | |
---|---|
Input Tables | Any number of unaligned and unaggregated tables. |
Output Tables | The same number of tables, with modified timestamps. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
shift-duration | duration | required | The amount of time to shift the time series forwards by. This must be positive. |
Example
Rate of requests to the `warehouse` service, where each point in the time series is a point from the prior day, shifted forward:

```
spans count | time_shift 1d | rate | filter service == "warehouse" | group_by [], sum
```
Difference in request rate from one week prior:
```
(
  metric requests | rate | group_by [customer, method], sum;
  metric requests | time_shift 7d | rate | group_by [customer, method], sum
)
| join left - right
```
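The shift itself is plain timestamp arithmetic: add the duration to every point's timestamp so last week's data lines up with this week's when joined. A toy sketch (timestamps in microseconds, per the Data types section; the point values are hypothetical):

```python
# Toy `time_shift 7d`: add the shift duration to every point's timestamp.
WEEK_US = 7 * 86_400 * 1_000_000  # one week, in microseconds

points = [(1_619_000_000_000_000, 10.0), (1_619_000_030_000_000, 12.0)]
shifted = [(ts + WEEK_US, v) for ts, v in points]

assert all(s - p == WEEK_US for (s, _), (p, _) in zip(shifted, points))
```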
point_filter
Syntax
```
point_filter <predicate>
```
point_filter evaluates the provided predicate against the value of each point. If the predicate returns false, the point is removed. Otherwise, the point remains unchanged.
Tables | |
---|---|
Input Tables | Any number of tables. |
Output Tables | The same number of tables, possibly with fewer points. |
Arguments
Name | Type | Required | Description |
---|---|---|---|
predicate | filter expression | required | An expression that operates on a point’s value (called `value`) and returns a boolean. |
Example
Remove all negative gauge points:
```
metric cpu.utilization | point_filter value >= 0 | latest
```
Keep only distributions where the p99 latency is above 10 seconds:
```
spans latency
| delta
| point_filter percentile(value, 99) > 10000
| group_by [], sum
```
Spans Queries
This section describes queries that fetch and manipulate whole spans. Spans queries fetch whole spans with all of their labels and logs. This is not to be confused with queries like `spans count` or `spans latency`, which build time series from spans. For help with those queries, see this topic.
spans_sample
Syntax
```
spans_sample [filter]
```
spans_sample must be the first stage in every spans query. If a predicate is specified, spans_sample fetches only spans that match the predicate. If not, it may fetch any span. The stage will try to fetch a large sample of spans, but may not fetch that many if only a few spans match the predicate.
Arguments
Name | Type | Required | Description |
---|---|---|---|
filter | filter expression | optional | An expression that operates on a span’s labels and returns a boolean. |
assemble
Syntax
```
assemble
```
assemble takes a set of sampled spans as input, assembles a trace for each of those spans, and returns a set of traces. It takes no arguments.
Pipeline arguments
This section explains all of the types that are referenced in the stages’ Arguments tables.
Identifiers
Identifiers are unquoted strings that are used in several pipeline stages. In `join` the identifiers are `left` and `right`. In `point` the identifier is `value`. `group_by` and `filter` use identifiers as label keys. For example, `group_by [service]` uses the identifier `service`.
Value expressions
Value expressions are supported in the `join` and `point` table operations. They are made up of identifiers, literals, and arithmetic operators. Value expressions produce a single output value.
Examples
```
left + right
2 * value - 3
percentile(value, 99)
dist_sum(value)
```
Filter expressions
Filter expressions are used in the `filter` table operation. Filter expressions take as input a set of labels and return true or false depending on whether those labels satisfy the predicate. A filter expression can be composed of many different operators:
- regular expressions (`=~`, `!~`)
- comparison operators (`==`, `!=`, `<`, `<=`, `>`, `>=`)
- presence operators (`defined(some_key)`, `undefined(some_key)`)
- logical operators (`&&`, `||`)
Examples
```
region == "us-east-1"
region =~ "us-.*"
response_code != 200
response_code >= 400 && response_code < 500
undefined(response_code) || response_code != 200
```
Arithmetic operators
Arithmetic operators are listed in order from high to low precedence. They take and return numeric arguments on the `value` column of a table.
Symbol | Name | Description |
---|---|---|
* | mul | returns the product of the left and right hand sides |
/ | div | returns the value of the left hand side divided by the right hand side |
+ | sum | returns the sum of the left and right hand sides |
- | diff | returns the difference between the left and right hand sides |
pow(a,b) | pow | returns a raised to the power of b |
percentile(a, b) | percentile | returns the bth percentile of the distribution a |
dist_sum(a) | dist_sum | returns the sum of the values in distribution a |
dist_count(a) | dist_count | returns the total count of the values in distribution a |
abs(a) | abs | returns the absolute value of a |
timestamp(a) | timestamp | returns the start timestamp (in seconds since the epoch) of the point a |
Boolean operators
Boolean operators have the lowest precedence and are applied left-to-right.
Symbol | Name | Description |
---|---|---|
&& | and | true if the left and right hand sides are both true |
|| | or | true if either the left or right hand side is true |
!= | neq | true if the left and right hand sides aren’t equal |
== | eq | true if the left and right hand sides are equal |
< | less | true if the left hand side is strictly less than the right |
> | greater | true if the left hand side is strictly greater than the right |
<= | leq | true if the left hand side is less than or equal to the right |
>= | geq | true if the left hand side is greater than or equal to the right |
=~ | regex | true if the left hand side matches the regular expression on the right hand side |
!~ | nregex | true if the left hand side does not match the regular expression on the right hand side |
Regular expressions
Regular expressions use the RE2 regular expression syntax.
By default, regular expressions:
- are case-sensitive
- are greedy
- allow `.` to match new lines
- are not multi-line aware (i.e. `^` will only match the start of text, not the start of a line)

You can modify this behavior with flags in the regular expression string. For example, if you want `^` / `$` to match the start / end of lines, you can use `(?m)` to enable multi-line mode.
To match spans whose `db.statement` attribute has a WHERE clause at the start of a line, you could write:
Example multi-line db.statement attribute
```
SELECT *
FROM table
WHERE id = 2;
```
Query with flags
```
spans count
| delta
| filter db.statement =~ "(?mi)^where"
| group_by [service, operation], sum
```
The `m` flag lets `^` match the start of each line, not just the start of the string. The `i` flag allows a case-insensitive search, so the regex will match all variations of “where”.
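You can try the same flags in Python's `re` module, which accepts the `(?mi)` inline-flag syntax shown above (Lightstep itself uses RE2, which shares this syntax):

```python
import re

statement = "SELECT *\nFROM table\nWHERE id = 2;"

# (?m): ^ matches at each line start; (?i): case-insensitive.
assert re.search(r"(?mi)^where", statement) is not None
# Without the flags, ^ only matches the start of the whole string.
assert re.search(r"^where", statement) is None
```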
Reducers
Reducers are required in the `reduce`, `group_by`, `bottom`, and `top` stages. Reducers perform calculations on groups of points and produce a single output.
Name | Supported Types | Description |
---|---|---|
min | integer, float | the minimum value |
mean | integer, float | the arithmetic mean of the values |
max | integer, float | the maximum value |
sum | integer, float | the sum of input values |
distribution | integer, float | computes a summary of the values provided, producing a distribution |
count | any type | counts the number of time series in the input table |
count_nonzero | integer, float | counts the number of time series in the input table where the value does not equal zero |
std_dev | integer, float | the standard deviation of the values |
Comments
All text on a line after the # character is treated as a comment (of course, a # inside a string won’t start a comment). Here is an example query with comments:
```
# This query plots the rate of requests to each service.
metric requests
| rate  # this must be before the group_by
| group_by [service], sum
```