Summary
The Summary tab provides a high-level health view of your automated testing over the selected time range, environment, and branches.
It surfaces volume, stability, and trend signals in one place so you can spot spikes, regressions, or noise before digging into the details.
Test Run Volume
This chart shows daily runs, split by Passed tests (green) and Failed tests (red). Hover over a date to see exact counts. Use it to spot spikes, compare days, and correlate changes with deployments or data updates.
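The daily pass/fail split behind this chart can be sketched as a simple aggregation. This is a minimal illustration, not the product's implementation; the `runs` records and their shape are hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical run records: (run date, number of failing tests in that run).
runs = [
    (date(2024, 5, 1), 0),
    (date(2024, 5, 1), 2),
    (date(2024, 5, 2), 0),
    (date(2024, 5, 2), 0),
    (date(2024, 5, 2), 1),
]

# Bucket runs per day into passed (zero failing tests) and failed counts,
# mirroring the green/red split in the chart.
daily = defaultdict(lambda: {"passed": 0, "failed": 0})
for run_date, failing_tests in runs:
    key = "passed" if failing_tests == 0 else "failed"
    daily[run_date][key] += 1

for day in sorted(daily):
    print(day, daily[day])
```

Hovering a date in the UI corresponds to reading one `daily[run_date]` bucket.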
1. Total Runs
Counts all test runs in the selected time range and environment. Indicates test throughput for the period.
2. Average Runs per Day
Mean number of test runs per calendar day. Helps check CI cadence and scheduling consistency.
3. Total Passed Test Runs
Test runs with zero failing tests. Track this to gauge build stability and confirm improvements after fixes.
4. Total Failed Test Runs
Test runs with one or more failing tests. Use this to estimate the triage load and verify that the failure volume is trending downward.
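The four counters follow directly from the run records. The sketch below uses the same hypothetical record shape as above and approximates "per calendar day" by the distinct days that have runs (the real metric may divide by every day in the selected range, including days with zero runs):

```python
from datetime import date

# Hypothetical run records: (run date, number of failing tests in that run).
runs = [
    (date(2024, 5, 1), 0),
    (date(2024, 5, 1), 3),
    (date(2024, 5, 2), 0),
    (date(2024, 5, 3), 1),
]

total_runs = len(runs)                                  # 1. Total Runs
days = {run_date for run_date, _ in runs}
avg_runs_per_day = total_runs / len(days)               # 2. Average Runs per Day
total_passed = sum(1 for _, f in runs if f == 0)        # 3. Total Passed Test Runs
total_failed = total_runs - total_passed                # 4. Total Failed Test Runs

print(total_runs, avg_runs_per_day, total_passed, total_failed)
```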
Flakiness Rate
Percentage of executions with inconsistent results for the same code (pass in one run, fail in another). This is a noise indicator.
- High flakiness means wasted triage effort and unreliable pass/fail signals.
- Track the curve after fixes to confirm that deflaking work is effective.
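One way to compute this rate is to group executions by test and code version, flag groups that contain both a pass and a fail, and count the executions in those groups. This is a sketch under that assumption; the test names, commit SHAs, and record shape are hypothetical:

```python
from collections import defaultdict

# Hypothetical executions: (test name, commit SHA, outcome).
executions = [
    ("test_login", "abc123", "pass"),
    ("test_login", "abc123", "fail"),   # same code, mixed results → flaky
    ("test_search", "abc123", "pass"),
    ("test_search", "abc123", "pass"),
    ("test_upload", "def456", "fail"),
    ("test_upload", "def456", "fail"),  # consistent failure, not flaky
]

# Collect the distinct outcomes seen for each (test, commit) pair.
outcomes = defaultdict(set)
for test, sha, result in executions:
    outcomes[(test, sha)].add(result)

# A pair is flaky when the same code both passed and failed.
flaky_keys = {key for key, seen in outcomes.items() if {"pass", "fail"} <= seen}
flaky_execs = sum(1 for test, sha, _ in executions if (test, sha) in flaky_keys)
flakiness_rate = 100 * flaky_execs / len(executions)
```

Here the two `test_login` executions are flaky, while `test_upload` fails consistently and contributes a real failure signal rather than noise.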
New Failure Rate
Share of test executions that fail for the first time, compared with prior runs of the same test. Use it to detect regressions early:
- Spikes indicate recent changes that may have introduced defects or broken tests.
- A flat or declining line indicates improved stability for newly added or recently touched areas.
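A minimal sketch of this metric, assuming "new" means the test has never failed before within the tracked history; the `history` records are hypothetical:

```python
# Hypothetical execution history in chronological order: (test name, outcome).
history = [
    ("test_login", "pass"),
    ("test_login", "fail"),   # first failure of test_login → new failure
    ("test_search", "fail"),  # first failure of test_search → new failure
    ("test_login", "fail"),   # repeat failure, not counted as new
    ("test_search", "pass"),
]

seen_failing = set()
new_failures = 0
for test, outcome in history:
    if outcome == "fail" and test not in seen_failing:
        new_failures += 1
        seen_failing.add(test)

new_failure_rate = 100 * new_failures / len(history)
```

A spike in `new_failure_rate` points at recent changes, because repeat failures of already-known-broken tests are excluded.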
Test Retry Trends
Time series of Total Retries, Total Runs, and Retried Test Cases per day. A rising retry curve suggests flaky behavior or overly tight timeouts. Use it to:
- Quantify how often retries are masking instability.
- Target days or branches that require deflake work.
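The per-day retry series can be derived from attempt counts: each test case that needed more than one attempt contributes its extra attempts to Total Retries and counts once toward Retried Test Cases. A sketch with hypothetical records:

```python
from collections import defaultdict
from datetime import date

# Hypothetical records: (run date, test name, attempts for that test).
records = [
    (date(2024, 5, 1), "test_a", 1),  # passed first try
    (date(2024, 5, 1), "test_b", 3),  # 2 retries
    (date(2024, 5, 2), "test_a", 2),  # 1 retry
    (date(2024, 5, 2), "test_b", 1),
]

trend = defaultdict(lambda: {"total_retries": 0, "retried_cases": 0})
for day, test, attempts in records:
    retries = attempts - 1
    trend[day]["total_retries"] += retries
    if retries > 0:
        trend[day]["retried_cases"] += 1
```

A day whose `total_retries` rises while run volume stays flat is a candidate for deflake work, since retries are absorbing instability rather than the tests becoming more reliable.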