Let’s take a look at a trace that includes the `update-inventory` operation to dig deeper into the issue.
Back in the Compare Operations table, click on the `update-inventory` operation to see some example traces. Then click on the longest trace to open it in the Trace view.
Learn more about Trace View
You use the Trace view to see a full trace of a request from beginning to end. The Trace view shows you a flame graph of the full trace (each service in a different color) and, below that, each span in a hierarchy, allowing you to see the parent-child relationships between all the spans in the trace. Errors are shown in red.
Clicking a span shows details in the right panel, based on the span’s metadata. Whenever you view a trace in Lightstep, it’s persisted for the length of your Data Retention policy so it can be bookmarked, shared, and reviewed by your team at any time. The selected span is part of the URL, so it will remain selected when the URL is visited.
You can learn more about the Trace view in the Lightstep documentation.
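The parent-child span hierarchy the Trace view renders can be thought of as a simple tree. Here's a minimal sketch of that structure (the span names, durations, and `print_tree` helper are hypothetical, for illustration only):

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One operation in a trace; children are the operations it called."""
    name: str
    duration_ms: float
    error: bool = False
    children: list["Span"] = field(default_factory=list)

def print_tree(span: Span, depth: int = 0) -> None:
    """Render the parent-child hierarchy, marking errored spans."""
    marker = " [ERROR]" if span.error else ""
    print(f"{'  ' * depth}{span.name} ({span.duration_ms} ms){marker}")
    for child in span.children:
        print_tree(child, depth + 1)

# A hypothetical trace shaped like the one in this walkthrough:
# a parent request whose child update-inventory span errored.
trace = Span("checkout", 1200.0, children=[
    Span("update-inventory", 1150.0, error=True),
    Span("charge-card", 50.0),
])
print_tree(trace)
```

Indentation encodes the parent-child relationship, just as the nested rows below the flame graph do.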
Note that the `update-inventory` operation is indeed an issue, as it’s taking up most of the critical path’s time and is contributing 99.7% of the latency. Also note that the tag `customer` has a value of `trendible` and the tag `region` has a value of `us-west-1`, both of which were discovered by Correlations, so we know we’ve likely found the problem.
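That latency-contribution percentage is just the share of the trace's critical-path time spent in one span. A quick sketch of the arithmetic (the durations here are hypothetical, chosen to match the ~99.7% figure above):

```python
def latency_contribution(span_ms: float, total_ms: float) -> float:
    """Percentage of the trace's critical-path time spent in one span."""
    return 100.0 * span_ms / total_ms

# Hypothetical numbers: in a 1,200 ms critical path, a span taking
# 1,196.4 ms contributes 99.7% of the latency.
print(round(latency_contribution(1196.4, 1200.0), 1))
```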
Let’s expand the log section to see if there’s any good information there.
In the sidebar, expand Logs. It seems there’s an issue with a network connection. But what caused it? Let’s investigate the `inventory` service.
Click back on the Service Directory and choose the `inventory` service that the `update-inventory` operation belongs to.
Sure enough, there’s a deployment marker right around the time we saw the spike in the
iOS service. Looks like that deploy likely caused the network issues leading to latency!
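The reasoning here is a before/after comparison: latency samples before the deployment marker look normal, while samples after it spike. A sketch of that check (timestamps and latency values are made up for illustration; the 3x threshold is an arbitrary assumption):

```python
from datetime import datetime, timedelta

# Hypothetical deploy marker and p99 latency samples (ms) around it.
deploy_at = datetime(2024, 1, 15, 14, 0)
samples = [
    (deploy_at - timedelta(minutes=10), 80.0),
    (deploy_at - timedelta(minutes=5), 85.0),
    (deploy_at + timedelta(minutes=5), 900.0),
    (deploy_at + timedelta(minutes=10), 1100.0),
]

before = [ms for t, ms in samples if t < deploy_at]
after = [ms for t, ms in samples if t >= deploy_at]

# Flag a spike if average latency after the deploy is more than
# 3x the average before it (threshold is an assumption).
spiked = sum(after) / len(after) > 3 * (sum(before) / len(before))
print(spiked)
```

Lightstep draws the deployment marker on the chart for you, so this comparison is done visually rather than in code.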
What Did We Learn?
- You can see example traces and then click on one to open it in the Trace view to see the details.
- The Trace view contains tons of information about each span in the trace, making it easy to verify your hypothesis.