Cloud Observability offers a way to quickly see how all your services and their operations are performing in one place: the Service Directory view.
You can also use our pre-built service dashboards or the Service health panel to view service health.
From here, you can:
When you first open Cloud Observability, you’re taken to the Service Directory. You can also access it from the navigation bar.
Your services are listed in alphabetical order. To make finding services easier, you can “favorite” a service so it always appears at the top of the list.
To find a service:
To favorite a service:
We will be introducing new workflows to replace the Deployments tab and RCA view; as a result, they will soon no longer be supported. Instead, use notebooks for your investigations, where you can run ad hoc queries, view data over a longer time period, and use Cloud Observability’s correlation feature.
The Service Health view on the Deployments tab shows you the latency, error rate, and operation rate of your Key Operations (operations whose performance is strategic to the health of your system) on the selected service.
Key Operations are displayed in order of magnitude of change in performance (you can change the order). You can quickly see the top performance changes and compare performance during latency or error regression spikes with normal performance.
By default, the operations are sorted by the amount of detected change (largest to smallest). Use the dropdown to change the sort.
Also by default, only the latency percentile with the largest amount of change displays. You can change the sparkline charts to show more percentiles using the More ( ⋮ ) icon.
You can search for an operation using the Search field.
By default, the data shown is from the last 60 minutes. You can customize that time period using the dropdown. Use the < > controls to move backwards and forwards through time. You can view data from your retention window (default is three days).
You can also zoom in on a time period by clicking and dragging over the time period you want a closer look at. The charts redraw to report on just the period you selected.
Cloud Observability displays sparkline charts for Key Operations on the service. They show recent performance for Service Level Indicators (SLIs): latency, error rate, and operation rate. Shaded yellow bars to the left of the chart indicate the magnitude of the change.
The operations are initially sorted by the largest change during the visible time period; you can change both the time period and the sort order.
If there’s a deployment marker visible, that change is measured as the difference between the selected version and all other versions. If there is no marker, the change is measured as the difference between the first and second half of the time period shown in the chart.
When determining what has changed in your services, Cloud Observability compares the baseline SLI time series to the comparison SLI time series. Those time periods are determined from the data currently visible in the charts.
You can change the amount of time displayed using the time period dropdown at the top right of the page.
The baseline and comparison time periods are determined as follows:
If one or more deployment markers are visible: the change is measured as the difference between the selected version and all other versions.
If no deployment markers are visible: Cloud Observability compares the performance of the first half of the time period to the second half.
Change is measured in relative terms: a change from 10ms to 500ms (a 50x increase) ranks higher than a change from 1s to 2s (a 2x increase), even though the second change is larger in absolute terms. The yellow bars on the sparkline chart indicate the amount of change. Cloud Observability measures two aspects of change: size and continuity. A full bar indicates that a large, sustained change has happened. Smaller bars indicate either a smaller change or one that did not last for the full time period.
A yellow bar means that an SLI had an objectively large change, regardless of service or operation. Cloud Observability’s algorithm runs on each SLI independently. For example, when the bar displays for an operation’s latency, that means the latency itself changed significantly, not that its change was larger than the changes in the other SLIs.
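Cloud Observability’s actual change-detection algorithm is not published, but the behavior described above can be sketched. The following Python is a minimal, hypothetical illustration (the function names, window splitting, and operation names are assumptions, not the product’s implementation): it splits each operation’s visible latency series into baseline and comparison halves and ranks operations by relative change.

```python
# Illustrative sketch only; not Cloud Observability's actual algorithm.
# Ranks operations by *relative* change between a baseline window and a
# comparison window (here: the first and second half of the visible data).

def relative_change(baseline: list[float], comparison: list[float]) -> float:
    """Ratio of the comparison mean to the baseline mean."""
    base = sum(baseline) / len(baseline)
    comp = sum(comparison) / len(comparison)
    if base == 0:
        return float("inf") if comp > 0 else 1.0
    return comp / base

def rank_by_change(latency_by_op: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Split each series in half (the no-deployment-marker case) and
    sort operations by relative change, largest first."""
    scores = {
        op: relative_change(series[: len(series) // 2], series[len(series) // 2 :])
        for op, series in latency_by_op.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A 10ms -> 500ms change (~50x) outranks a 1s -> 2s change (2x), even
# though the second change is larger in absolute terms.
ops = {
    "checkout": [0.010] * 30 + [0.500] * 30,  # hypothetical operation names
    "search": [1.0] * 30 + [2.0] * 30,
}
print(rank_by_change(ops))  # checkout (~50x) ranks above search (2x)
```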
Select an operation’s sparkline charts to view larger charts for the latency, error rate, and operation rate. You use these larger charts to start your investigation.
Span samples for the selected operation are shown in a table below the charts (click View spans). Click a span’s row to view the span in its trace.
The Service Health view on the Deployments tab lets you see how deployments affect the performance of your services. When you implement an attribute that reports your service’s version, a deployment marker displays at the time the deployment occurred on all charts in Cloud Observability.
These markers allow you to quickly correlate a deployment with a possible regression.
When multiple versions are deployed in the visible time window, you can view the performance of each deployed version in the Service Health view. Hover over the chart to see the percentage of traffic served by each version.
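For example, if you instrument with OpenTelemetry, the `service.version` resource attribute is the semantic-convention key commonly used to report a service’s version. The sketch below assumes the OpenTelemetry Python SDK and a hypothetical service name; check the Cloud Observability instrumentation docs for the exact attribute your project expects for deployment markers.

```python
# A minimal sketch, assuming the OpenTelemetry Python SDK. The key
# "service.version" follows the OpenTelemetry semantic conventions;
# verify the attribute Cloud Observability expects for deployment markers.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "service.name": "checkout",   # hypothetical service name
    "service.version": "1.4.2",   # update on every deployment
})
trace.set_tracer_provider(TracerProvider(resource=resource))
```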
Learn more about how to use this view to monitor the health of your deployments.
When you spot a latency or error rate regression, you can start an investigation by clicking the corresponding time series chart during the regression. You can choose to compare performance from before the previous deploy, an hour ago, a day ago, or select a custom baseline.
Choose a time in the middle of the regression to avoid including data from before the spike.
This view provides the following tools to help you with root cause analysis for latency:
Learn how to use these tools here.
Cloud Observability offers these tools for analyzing spikes in error rate:
Learn how to use these tools here.
You can add any of the time series charts to a notebook when, during an investigation, you want to run ad hoc queries, take notes, and save your analysis for use in postmortems or runbooks. Notebooks let you view logs, metrics, and traces from different places in Cloud Observability in one place.
To add a chart to a notebook, click Add to notebook, then search for an existing notebook or create a new one.
When you add a chart to a notebook, a panel is created using the same query. You can see the latency for multiple percentiles and view exemplar traces. An annotation links back to the original chart, so you can quickly return to the origin of your investigation.
Learn more about notebooks.
To view the relationships of the selected service and operation to upstream and downstream services and operations, you can create a dependency map and add it to a notebook.
The Operations tab on the Service Directory view lists, in alphabetical order, the selected service’s operations currently reporting to Cloud Observability, along with performance metrics aggregated over the selected time period.
The table provides several useful performance metrics for each operation:
To see if other services are affecting an operation, view the operation in a notebook or dashboard and use the dependency map to view upstream and downstream services and their performance.
Streams are retained span queries that continuously collect latency, error rate, and operation rate data. By default, data from span queries is persisted for three days. When you save a query as a Stream, its data is collected and persisted for a longer period of time.
To view all Streams for a service, click the Streams tab. The number on the tab tells you how many Streams exist for this service.
Create a Stream from the Operations tab by clicking Create Stream for an operation.
You can add charts that show the Stream’s performance to either a notebook or a dashboard. When you add a Stream, three charts are created: one for latency, one for error rate, and one for operation rate.
Add a Stream’s query to a notebook when, during an investigation, you want to run ad hoc queries, take notes, and save your analysis for use in postmortems or runbooks. Notebooks allow you to view metric and trace data from different places in Cloud Observability in one place.
Add the query to a dashboard when you want to monitor the performance over a period of time.
Click the Dashboards tab to view dashboards that include charts or a Stream for this service. The number on the tab tells you how many dashboards exist for this service.
Only dashboards that have charts that contain a filter for the service are shown.
Click a dashboard to view it.
Read Create and manage dashboards to learn more.
The data you can view and use in Cloud Observability depends on the quality of your tracing instrumentation. The better and more comprehensive your instrumentation is, the better Cloud Observability can collect and analyze your data to provide highly actionable information.
Cloud Observability analyzes the instrumentation on your services and determines how you can improve it to make your Cloud Observability experience even better. For example, it can determine whether your instrumentation includes attributes, such as hostname, that help you find performance issues in different environments.
Click the Instrumentation Quality tab to learn how well your instrumentation measures up. The number on the tab gives your score (out of 100%).
Learn more about what your score means and how to improve it.
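For instance, one way to supply a hostname attribute with the OpenTelemetry Python SDK is as a resource attribute, as in the hedged sketch below. The keys `host.name` and `deployment.environment` follow OpenTelemetry semantic conventions; the service name and environment value are hypothetical, and Cloud Observability’s quality checks may look for different attributes, so verify against your score details.

```python
# A minimal sketch, assuming the OpenTelemetry Python SDK: attach hostname
# and environment attributes so spans from different environments can be
# distinguished. Keys follow OpenTelemetry semantic conventions; the
# values here are hypothetical.
import socket

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

resource = Resource.create({
    "service.name": "checkout",           # hypothetical service name
    "host.name": socket.gethostname(),    # hostname of this process
    "deployment.environment": "staging",  # hypothetical environment label
})
trace.set_tracer_provider(TracerProvider(resource=resource))
```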
Updated Mar 15, 2024