# Goal and journey summary
The goal summary delivers important insights into your goals and their executions in a single place. To access it, navigate to the Goal view and open the Summary tab on the right-side panel.
The summary is available as soon as you create a goal, providing information about the tests and related execution plans. However, most of the information will only become available once you execute one or more journeys.
The Journeys section, at the top of the summary, answers a set of basic questions about the goal:
- How many journeys are in this goal? How many are published and how many are still in a draft state?
- How many distinct checkpoints are the journeys divided into?
- How many extensions and how many data tables are used in the tests?
- When was the last change to any of the tests? Who made that change?
- Which users have contributed to this goal?
- How many tests passed, failed or were not executed on their last execution?
# Detailed view
To get a better overview of your journeys, open the journeys detailed view by clicking the expand icon.
This provides you with a tag summary, listing all tags used in the goal and the number of journeys each is associated with, as well as a table of goal contributors.
It also includes a failure report, which lists all failing steps across the goal, together with the reason for failure. Click each of the steps to see its details and root cause analysis. Read more about root cause analysis →
# Network requests
The Network requests section indicates the number of occurrences of network errors while traversing the checkpoints during your tests.
It shows a summary of the total number of requests made, as well as the server-level (5xx) and client-level (4xx) error responses received during the test execution.
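As a rough sketch of the grouping described above, the snippet below buckets HTTP status codes into the summary's client-level (4xx) and server-level (5xx) error categories. The status list and function names are illustrative only, not Virtuoso's internal data model:

```python
# Hypothetical sketch: bucket HTTP responses the way the summary groups
# them -- 4xx as client-level errors, 5xx as server-level errors.

def classify(status: int) -> str:
    """Map an HTTP status code to the summary's error buckets."""
    if 500 <= status <= 599:
        return "server_error"   # 5xx: server-level error
    if 400 <= status <= 499:
        return "client_error"   # 4xx: client-level error
    return "ok"                 # anything else counts as a normal response

# Illustrative statuses observed during a test execution
statuses = [200, 404, 500, 301, 503, 403]

counts = {"ok": 0, "client_error": 0, "server_error": 0}
for s in statuses:
    counts[classify(s)] += 1

print(counts)  # {'ok': 2, 'client_error': 2, 'server_error': 2}
```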
# Detailed view
To dig deeper into the errors, open the detailed view of the network requests by clicking the expand icon.
This provides you with the list of errors, grouped by endpoint and error message, which you can filter and sort. Each error includes the request method, the response's HTTP status code, and the number of distinct pages on which the error was received.
The errors can be further expanded to reveal the list of affected pages, and their full list of requests.
Error counts don't match?
The totals by error type, shown in both the basic and detailed versions of this section, count every occurrence of an error of that type, while the list items show the number of distinct pages affected by each particular error.
The sum of affected pages may therefore not match the totals, as each error may occur more than once on a page.
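The two counting modes above can be sketched as follows. The error records and names here are made up for illustration; the point is only that per-type totals count every occurrence, while list items count distinct pages:

```python
# Illustrative sketch of the two counting modes: total occurrences
# versus distinct pages affected per error. Data is invented.
from collections import defaultdict

occurrences = [
    ("GET /api/items 500", "page-1"),
    ("GET /api/items 500", "page-1"),  # same error twice on one page
    ("GET /api/items 500", "page-2"),
    ("POST /login 404",    "page-1"),
]

# The totals by error type count every occurrence...
total = len(occurrences)  # 4

# ...while each list item reports the distinct pages it affected.
pages_per_error = defaultdict(set)
for error, page in occurrences:
    pages_per_error[error].add(page)

distinct = {error: len(pages) for error, pages in pages_per_error.items()}

print(total)     # 4
print(distinct)  # {'GET /api/items 500': 2, 'POST /login 404': 1}
# Summing distinct pages gives 3, which does not match the 4 total
# occurrences, because one error hit the same page twice.
```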
# Browser logs
Similar to the network requests, the Browser logs section indicates the occurrence of messages on the browser console while executing the tests.
The counts are broken down by log level, such as error, warning, and LOG-level messages.
# Detailed view
As with the other sections of the summary, you can drill down by clicking the expand icon.
Each entry on the list includes the log message and a count of how many distinct pages the console registered that message on. You can sort the list by various criteria, as well as filter it using the list options.
Entries can be further expanded to reveal the list of pages on which the log message was registered; each page can be clicked to reveal its full list of browser logs.
As with the network request counts, the distinct-page count excludes duplicate logs received on the same page.
# Performance metrics
This section allows you to quickly monitor the performance of your web applications based on a set of user-centric metrics. Read more about the metrics and performance monitoring on Virtuoso. →
Monitoring and improving these metrics, as defined by Google in their Web Vitals initiative, should enhance the user experience and contribute to improved user retention and conversion.
The values for these metrics are monitored and updated when you run an execution with checkpoint snapshot capture enabled. The values shown represent the average across the captured checkpoints, and each is reported using a color-coded scheme:
- Green: your page checkpoints are performing well on the given metric;
- Orange: the metric needs improvement;
- Red: performance is poor for the given metric.
You can learn more about the thresholds of each metric by hovering your mouse over it.
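As an example of how such a color rating works, the sketch below applies Google's published Web Vitals thresholds for Largest Contentful Paint (good up to 2.5 s, poor above 4 s). Virtuoso's exact per-metric thresholds may differ; hover each metric in the UI to see the values it actually applies:

```python
# Sketch of a color-coded rating for one metric, using Google's published
# Web Vitals thresholds for LCP. Illustrative only -- the product's own
# thresholds are shown on hover in the UI.

def lcp_rating(seconds: float) -> str:
    """Rate a Largest Contentful Paint value in seconds."""
    if seconds <= 2.5:
        return "green"   # performing well
    if seconds <= 4.0:
        return "orange"  # needs improvement
    return "red"         # poor

print(lcp_rating(1.8))  # green
print(lcp_rating(3.2))  # orange
print(lcp_rating(5.0))  # red
```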
# Detailed view
To see more detail, open the detailed view by clicking the expand icon.
On this view, you have access to a table of every page (from your captured checkpoints) and its corresponding vitals. The table can be sorted by any of its columns, and clicking the page name or checkpoint number will open its detailed view.
Metrics are missing?
Not all user-centric metrics are always available. For example, FID may be unavailable because the steps in your journey do not perform a user interaction, and the largest contentful paint may still be unknown by the time your journey navigates away.
To increase the likelihood of capturing these metrics, we recommend performing some interaction on the page before the end of your checkpoint that does not navigate away from the current page. For example, you can click a non-header link or dropdown, or write into an input field. Read more about this in the performance monitoring section.
# Goal plans
The Goal plans section succinctly displays:
- How many active execution plans target the whole goal or any of its journeys;
- Which plan is scheduled to be executed next for this goal.
The links allow you to access the plans directly; see Managing plans for more information about plans.
# Single journey summary
When a single journey is opened on the left-side panel of the Goal view, the summary sections will focus on that single journey.
The Journeys section will display the latest status of the selected journey, when it was last changed and by whom, the list of users who contributed to the journey, the count of test steps in the journey, and the number of extensions used.
Same as the goal-level Journeys section, it can be expanded by clicking the expand icon.
Other sections such as network requests, performance, plans, etc. will also automatically adapt to the specific journey, showing data only for the given journey.
In addition, a new section will be available: the Execution health trend & activity. This displays a single chart where the executions of the journey are plotted over time, for up to the last 90 days. Each circle represents an execution, where the color represents the outcome of the execution, and the y-axis represents how long it took to finish.
Overlaid on the plot, changes to the journey, as well as user comments, are displayed as markers to help you see when they may have affected the execution results or performance.
The chart is interactive: you can drag your mouse over the chart to zoom in on specific time ranges, and hover over any point or marker to display its details.