# Performance monitoring
Performance of web applications is not only a key aspect of the user experience; it has also been shown to improve user retention and conversion [1].
For business owners, however, quantitatively measuring performance is difficult. Traditionally, performance or load testing refers to determining how a system behaves under load [2]. However, this view ignores the general user experience under normal loading conditions, especially in the age of modern web applications, where the cost of rendering and loading an application can depend far less on the load on the application server.
User-centric performance metrics measure the performance of a web application from the user's point of view, rather than the application server's, because ultimately the user's experience is what matters. For instance, two applications can have exactly the same server response times, but one can render sooner than the other and deliver a better user experience.
Virtuoso makes measuring and monitoring performance easy by automatically capturing key user-centric performance metrics, as well as network response timings, every time it captures your checkpoints. This way, you can monitor not only the user experience but also network traffic performance, which can help detect performance regressions (or improvements) in the application servers.
# User-centric performance metrics
Web vitals, an initiative by Google, provides a standardized set of key user-centric metrics that can be measured on web applications. In particular, there are five web vitals that are recommended to monitor and aim to improve:
- Largest Contentful Paint (LCP): rendering time of the largest visible block within the viewport.
- First Input Delay (FID): the time from a user's first interaction with your page to when the browser is able to respond to it.
- First Contentful Paint (FCP): the time from when the page starts loading to when any part of the page's content is rendered on the screen.
- Time to First Byte (TTFB): the time it takes the user's browser to receive the first byte of the page content.
- Cumulative Layout Shift (CLS): sum of all shifts in the page layout during the user's experience on the page.
  - Because this metric requires a long user session to observe layout shifts, and automated user journeys typically move quickly through your application, our internal testing has shown that CLS is best measured in the field rather than captured as part of the performance metrics we report. As a result, Virtuoso does not currently report the CLS metric.
Other metrics also exist, such as Time to Interactive (TTI) and Total Blocking Time (TBT), but these are considered lab metrics and can show large variance; per Google's guidance, FID is generally considered a better metric for measuring page interactivity.
**Measuring in the field**
Although Virtuoso enables you to test and monitor your application's performance, and in particular to detect performance regressions before they reach your users, we also advise measuring the core web vitals in the field; that is, from your real users' sessions.
# How to monitor for performance
When you run an execution with capture of checkpoint snapshots enabled, Virtuoso monitors and reports on key user-centric performance metrics, as well as metrics from all network requests.
This is available both as an overall report on the goal's main summary screen, as well as in a detailed view for each checkpoint, in the per-checkpoint side panel, an example of which is shown below:
**Monitoring and alerting**
All information shown on the views above can be retrieved using the Virtuoso API. You can use this in your CI pipelines or monitoring scripts to collect and report on deviations against your thresholds.
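As an illustration, such a CI check can be as simple as comparing the captured metrics against the thresholds your team has agreed on. The sketch below is a minimal example: the threshold values follow Google's published "good" cut-offs but should be adjusted to your own policy, and the hard-coded sample values stand in for a response you would fetch from the Virtuoso API with an authenticated request (the API's exact response shape is not shown here).

```python
# Example thresholds in milliseconds; these follow Google's "good"
# cut-offs, but you should pick limits suited to your application.
THRESHOLDS_MS = {"LCP": 2500, "FID": 100, "FCP": 1800, "TTFB": 800}

def find_regressions(metrics, thresholds=THRESHOLDS_MS):
    """Return the subset of metrics that exceed their threshold."""
    return {name: value for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]}

# Sample values standing in for an API response; a real script would
# retrieve them from the Virtuoso API inside your CI pipeline.
captured = {"LCP": 3100, "FID": 80, "FCP": 1200, "TTFB": 950}
failures = find_regressions(captured)
print(failures)
```

A non-empty result can then be used to fail the build or trigger an alert.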
# Meaningful colors for user-centric metrics
The captured metrics are reported using a color-coded scheme:
- Green: indicates that your checkpoint is performing well for the given metric;
- Orange: indicates that the metric needs improvement;
- Red: indicates that the performance is poor for the given metric.
You can view the thresholds for each metric by hovering your mouse over the specific value, or find more details by clicking the link beside each metric.
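For intuition, the color bands work like a simple two-boundary classification. The boundaries below are Google's published web-vitals thresholds at the time of writing; Virtuoso's exact cut-offs may differ, so treat this as an illustrative sketch rather than the product's implementation.

```python
# Illustrative good / needs-improvement boundaries per metric,
# taken from Google's published web-vitals guidance (milliseconds).
BOUNDARIES_MS = {
    "LCP": (2500, 4000),
    "FID": (100, 300),
    "FCP": (1800, 3000),
    "TTFB": (800, 1800),
}

def classify(metric, value_ms):
    """Map a measured value to its color band."""
    good, needs_improvement = BOUNDARIES_MS[metric]
    if value_ms <= good:
        return "green"
    if value_ms <= needs_improvement:
        return "orange"
    return "red"
```

For example, an LCP of 2,000 ms classifies as green, an FID of 150 ms as orange, and a TTFB of 2,000 ms as red.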
# Some metrics can be missing for certain checkpoints
Not all user-centric metrics are always available. For example, FID may be missing because the steps in your journey do not include a user interaction, and LCP may still be unknown by the time your journey navigates away.
To increase the likelihood of capturing these metrics, we recommend performing an interaction on the page before the end of your checkpoint that does not navigate away from the current page. For example, you can click a non-navigating link or dropdown, or write into an input field.