UQL reference

This document describes UQL itself. Check out our cheat sheet and guides specifically for span queries and log queries.

The Cloud Observability Unified Query Language (UQL) allows you to:

  • Retrieve and manipulate time series, which may be derived from logs, metrics, or spans (see time series queries). Metrics telemetry must be analyzed using time series queries, since metrics telemetry is time series data. Logs and spans can be interpreted as time series, for example, by counting how many start in each interval (e.g., spans count).
  • Fetch whole spans (see Trace queries) and their metadata (for example, attributes and logs). Use span syntax when creating a dependency map.

UQL has the same structure whether it’s used to query for spans or time series. UQL queries are written as a pipeline made up of stages: stage1 | stage2 | stage3. Stages operate sequentially and independently; stage1 produces some data, then stage2 modifies that, and finally stage3 operates on the output of stage2.
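
For example, here is a minimal three-stage pipeline built from stages described later in this reference (a sketch, assuming a metric named requests exists):

metric requests | rate | group_by [service], sum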

Time series: Query overview

Data Model

Time series queries can be created from logs, metrics, and spans.

The first stage of the pipeline produces a table, and subsequent stages operate on that table using table operations. Table operations may produce rows, combine rows, change the number of columns in a table, or even combine tables.

Each pipeline stage takes some number of tables as input and produces tables as output. A table has a column for each label on the data, a pair of timestamp columns, and a value column. For example, the following query produces a table with a column for the service-name label:

metric cpu.usage
| delta 1s
| group_by [service-name], mean

service-name  start-timestamp  limit-timestamp  cpu.usage
checkout      779000           780000           23.1
checkout      780000           781000           22.3
product       779000           780000           5.5
product       780000           781000           7.2

Data Types

The values in a table’s columns are always one of these types:

  • string: UTF-8 string.
  • integer: 64-bit integer.
  • float: 64-bit IEEE-754 float.
  • boolean: either true or false.
  • distribution: a statistical distribution. For example, the latency of requests over a given time period may be represented as a distribution.
  • timestamp: a Unix Epoch timestamp, in microseconds.

Arguments to table operations may be any of the table column types, as well as:

  • duration: an integer followed by a duration specifier. For example: 10m. Valid duration specifiers are:
    • s for seconds
    • m for minutes
    • h for hours
    • d for days
    • w for weeks
  • table identifiers: in join, the tables are called left and right.
  • column identifiers: the label of a table’s column, for example: region or hostname.
  • value expressions: combinations of identifiers, operators, and literals that are evaluated to produce new values in a table. For example, in a join operation: join left / (1 + left + right).
  • filter expressions: expressions that operate on column names and values to produce a boolean result. For example: region == "us-east-1" || region == "us-west-1"
  • reducers: mathematical functions that are applied to time series values. Examples include mean, max, and count.
  • fetch type: specifies what log- or span-derived time series should be fetched. Valid fetch types are:
    • count [logs and spans]: produces a delta float time series counting the number of spans or logs.
    • latency [spans only]: produces a delta distribution of the latency of spans.
    • custom-numeric-attribute [spans only]: produces a delta distribution of the values of custom-numeric-attribute.

Data Literals

Numeric literals are written like: 110 or 110.33

Boolean literals are true and false

String literals can be written in three ways to help with escaping symbols. The string literal test \ "'123'" can be written:

  • With double quotes: "test \\ \"'123'\"". If you want to write a \ or " literal in the string, you’ll need to escape it with a preceding \.
  • With single quotes: 'test \\ "\'123\'"'. If you want to write a \ or ' literal in the string, you’ll need to escape it with a preceding \.
  • With backticks (`): `test \ "'123'"`. All characters that appear in a raw string are interpreted literally, so there’s no need to escape anything. The one caveat is that there’s no way to put a ` character in these strings.
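
For example, the following three filter stages are equivalent ways of writing the same predicate (a sketch; body is the log body column used in the log query examples later in this document):

filter body == "can't \"stop\""
filter body == 'can\'t "stop"'
filter body == `can't "stop"`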

Metric kinds

Metrics may be one of the following kinds. Kinds describe the relationship between successive points in the time series.

  • Delta: A measurement of the change in the metric from point to point. For example, the number of requests is usually a delta metric. Over a 5 second period, you might see a point every second like 1, 11, 2, 5, 6, which would indicate that 25 requests occurred in that timeframe.
  • Gauge: An instantaneous measurement. There is no relationship between successive points. For example, the setting of a rate limit or other configuration value is often a gauge.

Certain reducers are only valid for certain kinds of metrics, as indicated below.

Time series: Fetch stages

Fetch stages take zero tables as input and produce a single table as output. They must come first in a pipeline.

logs

Syntax

logs <fetch-type>

logs fetches a scalar time series of the count of logs.

Tables

Input Tables   None
Output Tables  A single, unaligned table.

Arguments

Name        Type    Required  Description
fetch-type  string  required  The name of the log-derived time series
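
For example, a sketch that charts the total count of logs from a hypothetical web service (the filter and group_by stages are described later in this document):

logs count
| filter service.name == "web"
| group_by [], sum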

metric

Syntax

metric <metric-name>

metric fetches the metric named metric-name.

Tables

Input Tables   None
Output Tables  A single, unaligned table.

Arguments

Name         Type    Required  Description
metric-name  string  required  The name of the stored metric
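
For example, fetching a stored metric and aligning it with rate (a sketch, assuming a metric named requests exists):

metric requests | rate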

spans

Syntax

spans <fetch-type>

spans fetches a scalar time series of the count of spans, or a distribution of either the spans’ latency or the values of a numeric span attribute.

Tables

Input Tables   None
Output Tables  A single, unaligned table.

Arguments

Name        Type    Required  Description
fetch-type  string  required  The name of the span-derived time series
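
For example, a sketch that charts the distribution of span latency per service (the delta and group_by stages are described later in this document):

spans latency
| delta
| group_by [service], sum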

constant

Syntax

constant <literal-value>

constant generates a gauge time series where all points have value literal-value. The generated points are aligned to the global output period, and the time series has no labels. This can be used to plot horizontal lines on charts.

Tables

Input Tables   None
Output Tables  A single table, aligned to the global output period.

Arguments

Name           Type     Required  Description
literal-value  literal  required  The value to plot.
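
For example, to draw a horizontal line at a hypothetical threshold of 99.9:

constant 99.9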

Time series: Aligners

Aligners take unaligned data and group it into a time series with points at regular intervals. They perform time aggregation, also called temporal aggregation. Aligners should generally follow immediately after a fetch operation. Aligners take two arguments: the input window and the output period. Check out the alignment deep dive topic to learn more.

If there is more than one aligner stage in a query, all but the final aligner must provide both input window and output period arguments.

Cloud Observability supports these aligners:

delta

Syntax

delta [input-window [, output-period]]

delta calculates the change in the metric across the input window, outputting a point every output period. The input-window functions as a smoothing argument. If the output-period is not provided, the Cloud Observability-calculated global output period is used.

Tables

Input Tables   Any number of tables
Output Tables  The same number of tables, aligned to the output period.

Arguments

Name           Type      Required  Description                                                   Default
input-window   duration  optional  The duration to incorporate into a single point calculation  The global output period
output-period  duration  optional  The period between output points                              The global output period
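
For example, a sketch that smooths a metric over an hour-long input window while letting the output period default to the global output period (assuming a metric named requests exists):

metric requests | delta 1h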

rate

Syntax

rate [input-window [, output-period]]

rate calculates the change per-second of the input tables, outputting a point every output period. The input-window functions as a smoothing argument: each output point is computed by taking the mean of the points within the input window. If the output-period is not provided, the Cloud Observability-calculated global output period is used.

Tables

Input Tables   Any number of tables
Output Tables  The same number of tables, aligned to the output period.

Arguments

Name           Type      Required  Description                                                   Default
input-window   duration  optional  The duration to incorporate into a single point calculation  The global output period
output-period  duration  optional  The period between output points                              The global output period
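
For example, a sketch of a per-second rate smoothed over a ten-minute input window (again assuming a metric named requests):

metric requests | rate 10m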

latest

Syntax

latest [input-window [, output-period]]

latest aligns gauge metrics to the output period by using the nearest point before an output point. The input-window tells latest how far to look back for input points. If the output-period is not provided, the Cloud Observability-calculated global output period is used.

Tables

Input Tables   Any number of tables with kind gauge
Output Tables  The same number of tables, aligned to the output period.

Arguments

Name           Type      Required  Description                             Default
input-window   duration  optional  The "lookback" window for input points  The global output period
output-period  duration  optional  The period between output points        The global output period

Example

Given this input data:

start-timestamp  limit-timestamp  rate-limit
779000           779000           101
780500           780500           102
782100           782100           103
783000           783000           104
784000           784000           105

latest 1s, 1s will produce this output:

start-timestamp  limit-timestamp  rate-limit
779000           779000           101
780000           780000           101
781000           781000           102
782000           782000           102
783000           783000           104
784000           784000           105

reduce

Syntax

reduce [input-window [, output-period],] <reducer>

reduce performs temporal aggregation for a metric using the supplied reducer.

Tables

Input Tables   Any number of tables of any kind
Output Tables  The same number of tables, aligned to the output period.

Arguments

Name           Type      Required  Description                                                   Default
input-window   duration  optional  The duration to incorporate into a single point calculation  The global output period
output-period  duration  optional  The period between output points                              The global output period
reducer        reducer   required  The reducer to apply to points when computing the output     n/a

Example

reduce 1h, max would produce the maximum value of the metric over the hour before the point.

Global output period

The global output period controls how many data points a UQL query produces per time series. The global output period isn’t something that’s set from within the language. It’s automatically set by the Cloud Observability UI or whatever interface is making the query. To show why the UI needs to have control over the time aggregation of a query, here’s an example.

In the Cloud Observability frontend, if you make a UQL query over the past 60 minutes, the UI sets the global output period to 30 seconds. This means that there will be 120 data points per time series. Now let’s say you want to make that same query, but this time over the last day. When you change the query interval, the UI will adjust the global output period to 10 minutes. There will now be 144 data points in your chart per time series. This increase is necessary because if a query spanning the last day produced a data point every 30 seconds, madness would ensue! There would be a total of 2880 data points per time series, which couldn’t be plotted easily.

The output of a UQL query must always be aligned to the global output period. In other words, neighboring points must always be global-output-period-apart. In order to meet this requirement, you must end a query with an aligner without an output period set, causing this aligner to default to the global output period.

For example, the query below isn’t allowed, because the trailing rate uses an output period of 1 minute.

Illegal Query

metric requests | rate 1m, 1m

It’s important to note that even if the global output period were set to 1 minute, the above query would still be considered illegal. Although its output period matches the global output period, if the global output period were to change to 2 minutes the query would produce incorrectly aligned data. Legal queries must produce data aligned to the global output period regardless of what it is set to.

The query below is legal, because the trailing reduce doesn’t specify an output period, which causes the global output period to be used instead:

Legal Query

metric requests | reduce 1m, sum

Time series: Modifiers

These stages change tables by dropping or combining rows, or by joining tables together. UQL has the following modifier stages:

fill

Syntax

fill <scalar>

fill sets the value of any empty output intervals in each input series to the provided scalar. If there are no input series, nothing changes.

It only operates on previously-aligned time series, and never on raw log or span data. It must come after the group_by stage in spans count and logs count queries.

Tables

Input Tables   Any number of tables containing aligned time series
Output Tables  The same number of tables, potentially with more rows filled with the scalar value

Arguments

Name    Type             Required  Description
scalar  numeric literal  required  The scalar value with which to fill gaps in input series.

Examples

Fill gaps in a metric time series:

metric example | fill 42

Fill gaps in the count of logs containing the keyword “error”:

logs count | filter phrase_match(body, "error") | group_by [], sum | fill 0

Fill gaps in a percentage produced by a join query:

with
  payments = metric requests | delta | filter method == "/api/submit-payment" | group_by [], sum;
  total = metric requests | delta | group_by [], sum;
join payments / total, payments=0, total=1 | fill 0

filter

Syntax

filter <predicate>

filter drops rows from the table that do not match the predicate. The predicate is generally based on the label columns so that a complete series is either kept or filtered out.

Tables

Input Tables   Any number of tables of any kind
Output Tables  The same number of tables, potentially with fewer rows.

Arguments

Name       Type               Required  Description
predicate  filter expression  required  The expression that’s evaluated to decide whether to drop a row. A row is kept when the expression evaluates to true.

Examples

All requests time series where service is checkout and zone is us-east-1:

metric requests | delta | filter service == "checkout" && zone == "us-east-1"

All requests time series where the host name isn’t i-12345:

metric requests | delta | filter host != "i-12345"

All requests time series where the host name doesn’t start with i-123:

metric requests | delta | filter host =~ "i-123.*"

All responses time series where the integer-valued response_code label is in the 400s:

metric responses | delta | filter response_code >= 400 && response_code < 500

Count the number of spans where the host label is defined:

spans count | filter defined(host) | delta | group_by [], sum

group_by

Syntax

group_by [<label-list>], <reducer>

group_by combines rows with the same timestamps and the same values in the label columns within the [] using the provided reducer.

Tables

Input Tables   Any number of tables of any kind
Output Tables  The same number of tables, potentially with fewer rows.

Arguments

Name        Type         Required  Description
label-list  identifier…  required  0 or more label keys to group metric time series by. An empty set means to group all rows together.
reducer     reducer      required  The reducer used to combine the values of the rows.

Examples

Given this input data, produced by metric rate-limit | latest:

region     zone              hostname  start-timestamp  limit-timestamp  rate-limit
us-east-1  us-east-1-bos-1a  abc.def   779000           779000           100
us-east-1  us-east-1-bos-1a  ghi.jkl   779000           779000           101
us-east-1  us-east-1-mia-1a  abc.def   779000           779000           99
us-west-2  us-west-2-lax-1a  abc.def   779000           779000           102
us-west-2  us-west-2-lax-1a  ghi.jkl   779000           779000           100

metric rate-limit | latest | group_by [region, zone], max produces:

region     zone              start-timestamp  limit-timestamp  rate-limit
us-east-1  us-east-1-bos-1a  779000           779000           101
us-east-1  us-east-1-mia-1a  779000           779000           99
us-west-2  us-west-2-lax-1a  779000           779000           102

An empty list of group_by fields means group all rows together, stripping off all labels and producing a single time series. metric rate-limit | latest | group_by [], max produces:

start-timestamp  limit-timestamp  rate-limit
779000           779000           102

join

Named Join Syntax

with
  <bound-input-query>...
join <join-expression> <default>...

Unnamed Join Syntax

(<left-query>; <right-query>) 
| join <join-expression> [left-default [, right-default]]

The UQL join stage merges two tables into a single table. It then combines time series values together using an arithmetic expression. There are two different syntaxes for joining: named join and unnamed join. Regardless of which syntax you use, the underlying join semantics are the same.

Please read the join topic to learn details about the join stage.
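
For example, here is a sketch of an unnamed join that computes the percentage of spans that errored, per service (assuming spans carry a boolean error attribute; the defaults 0 and 1 guard against series that are missing on one side):

(
  spans count | filter error == true | delta | group_by [service], sum;
  spans count | delta | group_by [service], sum
)
| join left / right * 100, 0, 1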

top

Syntax

top [[label-list],] <k>, <reducer> [, window]

top returns the k time series with the highest value as computed by the supplied reducer and time window. All other time series are filtered out. If labels are specified, top returns k time series for each combination of label values. As an example:

Grouped Top-K

metric cpu.usage | top [service, region], 10, mean, 10m

This will return up to ten time series for each combination of service and region in the table, ranked by each series’ mean over the last ten minutes of the query.

The inverse of top is bottom.

Tables

Input Tables   One table
Output Tables  One table

Arguments

Name        Type         Required  Description
label-list  identifier…  optional  A list of label keys to group by when performing the reduction.
k           integer      required  The number of time series to place in the output table.
reducer     reducer      required  The reducer to use for ranking time series.
window      duration     optional  The window ending at the query’s limit to perform the reduction on. If this argument is omitted, the entire time range of the query is included. For example, a value of 10m means "compute over the last ten minutes of the query".

Examples

Top five time series where the CPU usage was the highest on average over the duration of the query:

metric kubernetes.cpu.usage.total | latest | top 5, mean

For each region and environment, the five time series where the CPU usage was the highest on average over the last hour of the query:

metric kubernetes.cpu.usage.total | latest | top ["region", "environment"], 5, mean, 1h

bottom

Syntax

bottom [[label-list],] <k>, <reducer> [, window]

bottom is the exact inverse of top. It returns the k time series with the lowest values according to the provided reducer. For details about ranking by label combinations and time windows, see top.

Tables

Input Tables   One table
Output Tables  One table

Arguments

Name        Type         Required  Description
label-list  identifier…  optional  A list of label keys to group by when performing the reduction.
k           integer      required  The number of time series to place in the output table.
reducer     reducer      required  The reducer to use for ranking time series.
window      duration     optional  The window ending at the query’s limit to perform the reduction on. If this argument is omitted, the entire time range of the query is included. For example, a value of 10m means "compute over the last ten minutes of the query".

Example

Bottom five time series where the CPU usage was lowest on average over the duration of the query:

metric kubernetes.cpu.usage.total | latest | bottom 5, mean

point

Syntax

point <value-expression>

point evaluates the value-expression for each point in the input table and stores the result in a row in the output table. The value expression can make use of one identifier: value.

Tables

Input Tables   One table
Output Tables  One table

Arguments

Name              Type              Required  Description
value-expression  value expression  required  An expression describing how to transform the points in the input table.

Examples

Time series of the 95th percentile of span latency:

spans latency | delta | group_by [], sum | point percentile(value, 95)

Boolean-valued time series where a point is true if the value of my.metric is greater than 5:

metric my.metric | latest | point value > 5

Squared value of each point in my.metric:

metric my.metric | latest | point pow(value, 2)

Determine how long until SSL certificates expire:

metric ssl_expiration_unix_secs | point value - timestamp(value) | latest

point_filter

Syntax

point_filter <predicate>

point_filter evaluates the provided predicate against the value of each point. If the predicate returns false, the point is removed. Otherwise, the point remains unchanged.

Tables

Input Tables   Any number of tables.
Output Tables  The same number of tables, possibly with fewer points.

Arguments

Name       Type               Required  Description
predicate  filter expression  required  An expression that operates on a point’s value (called value) and returns a boolean.

Examples

Remove all negative gauge points:

metric cpu.utilization | point_filter value >= 0 | latest

Keep only distributions where the p99 latency is above 10 seconds:

spans latency
| delta
| point_filter percentile(value, 99) > 10000
| group_by [], sum

time_shift

Syntax

time_shift <shift-duration>

time_shift moves each point in the time series forwards by the amount of time specified in the duration.

This is equivalent to adding duration to the timestamp of each point.

time_shift does not have to immediately follow a fetch stage, but it must come before an aligner or group_by in the UQL query.

Tables

Input Tables   Any number of unaligned and unaggregated tables.
Output Tables  The same number of tables, with modified timestamps.

Arguments

Name            Type      Required  Description
shift-duration  duration  required  The amount of time to shift the time series forwards by. This must be positive.

Examples

Rate of requests to the warehouse service, where each point in the time series is a point from the prior day, shifted forward:

spans count | time_shift 1d | rate | filter service == "warehouse" | group_by [], sum

Difference in request rate from one week prior:

(
  metric requests | rate | group_by [customer, method], sum;
  metric requests | time_shift 7d | rate | group_by [customer, method], sum
)
| join left - right

Trace queries

This section describes queries that fetch and manipulate whole traces. Trace queries fetch spans with all of their labels and logs. This is not to be confused with queries like spans count or spans latency, which build time series from spans. For help with those queries, see this topic.

spans_sample

Syntax

spans_sample [filter]

spans_sample must be the first stage in every trace query. It returns a set of spans.

The spans sampled at this stage, known as seed spans in later stages, are the basis for assembling traces. If a predicate is specified, spans_sample fetches only spans that match the predicate; if not, it may fetch any span. The stage will try to fetch a large sample of spans, but may return fewer if only a few spans match the predicate.

Arguments

Name    Type               Required  Description
filter  filter expression  optional  An expression that operates on a span’s labels and returns a boolean.
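
For example, a sketch of a complete trace query that samples spans from a hypothetical warehouse service, assembles their traces, and summarizes them (assemble and summarize_by are described below):

spans_sample service == "warehouse"
| assemble
| summarize_by [seed_span.operation], count()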

assemble

Syntax

assemble

assemble takes a set of spans as input, assembles a trace for each of those spans, and returns a set of traces. It takes no arguments.

trace_filter

Syntax

trace_filter <predicate>

trace_filter takes a set of traces, drops some of them based on the specified predicate, and returns a smaller set of traces.

If predicate returns true for the set of span labels, the trace will be kept. Otherwise, it will be discarded. For example, trace_filter operation == foo && service == bar will keep only traces which have operation == foo on some span and service == bar on some span. The labels may be on different spans or the same span.

Arguments

Name       Type               Required  Description
predicate  filter expression  required  The expression that’s evaluated to decide whether to drop a trace. A trace is kept when the expression evaluates to true when applied to all span labels in the trace.

summarize_by

Syntax

summarize_by [<group-attribute-list>], <reducer>([<to-reduce-attribute>])...

summarize_by takes as input a set of traces and returns a “trace table” where each row summarizes a sub-group of those traces.

summarize_by is almost identical to a SQL GROUP BY, with the notable difference that it also keeps the input traces that were used to build each summary row. [<group-attribute-list>] determines how traces will be grouped. The remaining arguments determine how each group will be translated into a row in the output table.

Arguments

Name                  Type                 Required  Description
group-attribute-list  identifier…          optional  The attributes to group traces by
reducer               whole-trace reducer  optional  The function used to combine attribute values
to-reduce-attribute   identifier           optional  The attribute to aggregate using the reducer

[<group-attribute-list>] is a comma-separated list of whole-trace attributes enclosed by brackets. summarize_by will group traces based on the values of the specified attributes. Two types of attributes can be used:

Field                  Description
[attribute]            A trace-wide attribute; resolves to the set {span.[attribute] for span in spans_in_trace}
seed_span.[attribute]  A seed span attribute; resolves to the value of [attribute] on the seed span

Note that a “seed span” is the span that the trace was assembled from.

<reducer>([to-reduce-attribute])... is a comma-separated list of reducers operating on whole-trace attributes. For each group of traces, summarize_by will produce a row. The values of the columns in that row will be computed by the reducers in the summarize_by query. The allowed reducers are mean, count, and fraction_true. Only a few patterns are supported:

Reducer + Attribute Pattern     Meaning
mean(seed_span.latency)         mean latency of traces’ seed spans
count()                         number of traces
fraction_true(seed_span.error)  fraction of traces’ seed spans that errored

Applying reducers in other ways is not yet supported. If you try to write something else, such as mean(byte_size), you’ll see an error.

Example

Now, let’s walk through an example. Consider the query:

summarize_by [host, seed_span.response_code], mean(seed_span.latency), count(), fraction_true(seed_span.error)

First, the [host, seed_span.response_code] portion of the query indicates that input traces should be separated into groups based on:

  • The set of values of the host attribute on all spans in each trace
  • The value of the response_code attribute on the seed span of each trace

Next, a row will be produced for each group. The first cell of that row will contain an average of the latency of the seed spans in the group. The second cell will contain the number of traces in the group. And the third cell will contain the fraction of seed spans in the group that contain errors.

Logs queries

This section describes queries that fetch log objects.

logs

Syntax

logs

logs fetches log objects with all their attributes. Omitting the fetch-type (count) fetches log objects rather than a time series derived from logs.

Fetch all logs

logs

The above query will fetch all logs within a given time range. It is possible to narrow the search space with the filter stage. The following query will find all logs where the log body contains "error" and the service.name attribute equals "web".

Fetch all logs with filters

logs
| filter phrase_match(body, "error") && service.name == "web"

Pipeline arguments

This section explains all of the types that are referenced as arguments in the pipeline stages above.

Identifiers

Identifiers are unquoted strings that are used in several pipeline stages. In join the identifiers are left and right. In point the identifier is value. group_by and filter use identifiers as the first part of a label. For example, group_by [service] uses the identifier service.

Value expressions

Value expressions are supported in the join and point table operations. They are made up of identifiers, literals, and arithmetic operators. Value expressions produce a single output value.

Examples

  • left + right
  • 2 * value - 3
  • percentile(value, 99)
  • dist_sum(value)

Filter expressions

Filter expressions are used in the filter table operation. Filter expressions take as input a set of labels and return true or false depending on whether those labels satisfy the predicate. A filter expression can be composed of many different operators:

  • regular expressions (=~, !~)
  • comparison operators (==, !=, <, <=, >, >=)
  • presence operators (defined(some_key), undefined(some_key))
  • contains (contains(some_key, some_substring))
  • logical operators (&&, ||, !)
  • phrase match (phrase_match(body, "error"))

phrase_match

phrase_match is a filter predicate for logs and logs count queries. Use phrase_match to find specific letters, words, or numbers in logs.

phrase_match accepts two parameters: body and a search phrase.

For search phrases with more than one token, the last phrase token only needs to match a prefix of the corresponding body token. For example, the search phrase an err matches an error and san error but not an terror.

Examples

  • phrase_match(body, "err")
  • region == "us-east-1"
  • region =~ "us-.*"
  • response_code != 200
  • response_code >= 400 && response_code < 500
  • undefined(response_code) || response_code != 200
  • contains(error_message, "us-east-1")
  • !(service == foo && operation == bar)

Arithmetic operators

Arithmetic operators are listed below in order from high to low precedence. They take and return numeric arguments, operating on the value column of a table.

Symbol            Name        Description
*                 mul         returns the product of the left and right hand sides
/                 div         returns the value of the left hand side divided by the right hand side
+                 sum         returns the sum of the left and right hand sides
-                 diff        returns the difference between the left and right hand sides
pow(a, b)         pow         returns a raised to the power of b
percentile(a, b)  percentile  returns the bth percentile of the distribution a
dist_sum(a)       dist_sum    returns the sum of the values in distribution a
dist_count(a)     dist_count  returns the total count of the values in distribution a
abs(a)            abs         returns the absolute value of a
timestamp(a)      timestamp   returns the start timestamp (in seconds since the epoch) of the point a
floor(a)          floor       returns the greatest whole number value less than or equal to a
ceil(a)           ceil        returns the least whole number value greater than or equal to a
round(a)          round       returns the nearest whole number value to a, rounding half away from zero
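
For example, distribution functions can be combined within a single point expression. This sketch charts the spread between the 99th and 50th percentiles of span latency:

spans latency
| delta
| group_by [], sum
| point percentile(value, 99) - percentile(value, 50)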

Boolean operators

Boolean operators have the lowest precedence and are applied left-to-right.

Symbol  Name     Description
&&      and      true if the left and right hand sides are both true
||      or       true if at least one of the left and right hand sides is true
!       not      true if the input is false; false if the input is true
!=      neq      true if the left and right hand sides aren’t equal
==      eq       true if the left and right hand sides are equal
<       less     true if the left hand side is strictly less than the right
>       greater  true if the left hand side is strictly greater than the right
<=      leq      true if the left hand side is less than or equal to the right
>=      geq      true if the left hand side is greater than or equal to the right
=~      regex    true if the left hand side matches the regular expression on the right hand side
!~      nregex   true if the left hand side does not match the regular expression on the right hand side

Regular expressions

Regular expressions use the RE2 regular expression syntax.

By default, regular expressions:

  • are case-sensitive
  • must match the entire string
  • allow . to match new lines
  • are not multi-line aware (i.e. ^ will only match the start of text, not the start of a line)

You can modify this behavior with flags in the regular expression string. For example, if you want ^ / $ to match the start / end of lines, you can use (?m) to enable multi-line mode.

To match spans whose db.statement attribute has a WHERE clause at the start of a line, you could write:

Example multi-line db.statement attribute

SELECT *
FROM table
WHERE id = 2;

Query with flags

spans count
| delta
| filter db.statement =~ "(?mi).*^where.*"
| group_by [service, operation], sum

The m flag lets ^ match the start of each line, not just the start of the string. The i flag allows a case-insensitive search, so the regex will match all variations of “where”.

If you want to match the WHERE clause with the exact case anywhere in the string, you can also write:

spans count
| delta
| filter db.statement =~ ".*WHERE.*"
| group_by [service, operation], sum

Reducers

Reducers are required in the reduce, group_by, bottom, and top stages. Reducers perform calculations on groups of points and produce a single output.

Name           Supported Types  Description
min            integer, float   the minimum value
mean           integer, float   the arithmetic mean of the values
max            integer, float   the maximum value
sum            integer, float   the sum of input values
distribution   integer, float   computes a summary of the values provided, producing a distribution
count          any type         counts the number of time series in the input table
count_nonzero  integer, float   counts the number of time series in the input table where the value does not equal zero
std_dev        integer, float   the standard deviation of the values
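
For example, a sketch that uses the std_dev reducer for temporal aggregation (assuming a metric named cpu.usage, as in earlier examples):

metric cpu.usage | reduce 1h, std_dev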

Comments

All text on each line after the # character is treated as a comment. (Of course, a # inside a string won’t start a comment.) Here is an example query with comments:

# This query plots the rate of requests to each service.
metric requests
| rate # this must be before the group_by
| group_by [service], sum

Template Variables

Use template variables in filter expressions, including equality (==, !=) and regular expression matches (=~, !~), as well as filters using contains or phrase_match. Template variable values are always strings and may be bound to one, many, or all possible values.

Learn how to get started using template variables in the Cloud Observability UI here.

For example, $var and ${var} reference a template variable named var. If the value of var is "foo" and a query includes the expression filter service == $var, it evaluates as filter service == "foo".

A template variable may also be bound to many values. The expression expands to consider all combinations of values. For example, if var is bound to two values "foo" and "bar" then the expression filter service == $var evaluates as:

filter service == "foo" || service == "bar"

The inverse filter service != $var expands to:

filter service != "foo" && service != "bar"

If a template variable is bound to all possible values, the entire predicate is skipped.

The table below summarizes the behavior of each binary operator depending on the template variable values provided.

Operator             One value               Many values                                                All values
equal (==)           x == "val"              x == "val1" || ... || x == "valN"                          x =~ ".*" (skipped)
not equal (!=)       x != "val"              x != "val1" && ... && x != "valN"                          x !~ ".*" (skipped)
regex match (=~)     x =~ "val"              x =~ "val1" || ... || x =~ "valN"                          x =~ ".*" (skipped)
regex no match (!~)  x !~ "val"              x !~ "val1" && ... && x !~ "valN"                          x !~ ".*" (skipped)
contains             contains(x, "val")      contains(x, "val1") || ... || contains(x, "valN")          contains(x, "")
phrase_match         phrase_match(x, "val")  phrase_match(x, "val1") || ... || phrase_match(x, "valN")  phrase_match(x, "")

Interpolated Template Variables

Template variables forming part of a string literal interpolate their values into the string. Write interpolated template variables using the ${var} form; $var is not parsed as a template variable when inside a string literal.

Like plain template variables, interpolated template variables may be bound to one, many, or all possible values. A string on the right-hand side of a filter expression may also include several template variables, in which case it expands to include all combinations of values. Predicates matching interpolated wildcard template variables are not skipped unless the template variable comprises the entirety of the string (e.g. "${var}").

For example, consider the filter expression filter service == "${foo}-${bar}-${baz}". This string contains three template variables.

  • foo has values "a" and "b"
  • bar has an empty set of values
  • baz has values "x" and "y"

The expression expands to include all combinations of concrete values and becomes a regular expression match to accommodate the wildcard template variable bar:

filter service =~ "a-.*-x" || service =~ "a-.*-y" ||
       service =~ "b-.*-x" || service =~ "b-.*-y"

In phrase_match, any wildcard value must be the final subexpression of a string. For example, phrase_match(body, "foo-${var}") is acceptable if var is bound to all values, but phrase_match(body, "${var}-foo") is not. If var is bound to one or many values, these limitations do not apply.

See also

UQL cheat sheet

Use UQL to create advanced queries

Get started with spans queries in UQL

Updated Mar 7, 2024