Specs
The Specs Explorer provides a project-level view of all your test specifications, centralizing their performance and reliability data.
It eliminates the need to hunt for individual spec files across separate test runs by presenting a comprehensive overview in a single table. Use this view to search, sort, and filter specifications to identify performance bottlenecks and reliability issues.
For QA Engineers and Developers
- For QA Engineers: Identify which specifications have the highest failure or flaky rates to prioritize efforts for stabilization.
- For Developers: Identify the slowest-running spec files that may be contributing to increased pipeline duration and target them for optimization.
Why This View Matters
- Project-Wide Visibility: Gain a consolidated view of every spec file's health and performance across the entire project.
- Identify Bottlenecks: Quickly find the slowest and longest-running specifications to improve test cycle times.
- Pinpoint Instability: Isolate specs with high failure and flaky rates that introduce noise and risk.
- Focused Analysis: Use filters to scope data by time period and environment, correlating issues with specific contexts.
Layout
The Specs Explorer table provides key metrics for each specification file. All columns are sortable in ascending or descending order to help you analyze trends and find outliers.
1. Spec File
Displays the name and path of the test specification file. This serves as the primary identifier, linking all metrics directly to a specific test suite or feature area.
It allows teams to instantly locate the relevant source code when a spec shows a high failure rate or slow performance, streamlining the debugging process.
2. Executions
The total number of times the spec file has been executed within the selected time period and environment. This metric tracks test volume and helps contextualize failure rates.
A spec with a high execution count and frequent failures indicates a critical, high-impact issue. In contrast, a spec that fails on a low number of executions might point to a newly introduced or rarely triggered problem.
3. Failure Rate
The percentage of executions that resulted in one or more failed tests. As a core indicator of test reliability, this metric highlights specs that consistently fail.
A high failure rate is a strong indicator of a persistent product bug or a flawed test, helping teams prioritize the most urgent fixes that impact application stability.
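As a concrete illustration, a rate like this could be derived from raw run records as follows. This is a minimal sketch: the `Execution` shape and its `failed_tests` field are assumptions for the example, not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Execution:
    # Illustrative record of one run of a spec file (not the real schema).
    failed_tests: int  # number of tests that failed in this execution

def failure_rate(executions: list[Execution]) -> float:
    """Percentage of executions with one or more failed tests."""
    if not executions:
        return 0.0
    failing = sum(1 for e in executions if e.failed_tests > 0)
    return 100.0 * failing / len(executions)

runs = [Execution(0), Execution(2), Execution(0), Execution(1)]
print(failure_rate(runs))  # 50.0
```

Note that the rate counts failing executions, not failing tests: an execution with five failed tests and one with a single failed test each contribute the same amount.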
4. Flaky Rate
The percentage of executions that contained at least one flaky test. This metric isolates test instability from consistent failures, which is critical for maintaining a trustworthy test suite.
Tracking flakiness helps teams dedicate resources to stabilizing tests, which reduces CI noise and prevents real bugs from being overlooked.
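The sketch below shows one common way to classify flakiness: a test that both fails and passes within the same execution (i.e. it succeeded on a retry). This definition, and the nested list shapes, are assumptions for illustration rather than the product's exact rule.

```python
def is_flaky(attempts: list[str]) -> bool:
    # A test counts as flaky here when it both fails and passes within
    # the same execution -- an assumed definition for this example.
    return "failed" in attempts and "passed" in attempts

def flaky_rate(executions: list[list[list[str]]]) -> float:
    # executions -> tests -> per-attempt outcomes for each test
    if not executions:
        return 0.0
    flaky = sum(1 for tests in executions
                if any(is_flaky(a) for a in tests))
    return 100.0 * flaky / len(executions)

runs = [
    [["passed"], ["failed", "passed"]],  # contains a flaky test
    [["passed"], ["passed"]],            # clean run
]
print(flaky_rate(runs))  # 50.0
```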
5. Avg Duration
The average time taken to execute the entire spec file. This column is essential for identifying performance bottlenecks within your test suite.
Specs with a long average duration are prime candidates for optimization, helping teams shorten CI/CD cycles and accelerate developer feedback loops.
6. Last Duration
The execution time of the most recent run for this spec file. By comparing the Last Duration to the Avg Duration, you can detect recent performance regressions.
A sudden increase in the last run time can indicate that a recent code change has negatively impacted performance, allowing for a quick investigation.
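The comparison described above can be automated with a simple threshold check. The 1.5x multiplier here is an illustrative choice, not a value the product defines; tune it to your suite's normal variance.

```python
def looks_like_regression(last_s: float, avg_s: float,
                          threshold: float = 1.5) -> bool:
    # Flag the spec when its most recent run took notably longer than
    # its average duration. The 1.5x threshold is an assumed example.
    return avg_s > 0 and last_s > threshold * avg_s

print(looks_like_regression(last_s=95.0, avg_s=40.0))  # True
print(looks_like_regression(last_s=42.0, avg_s=40.0))  # False
```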
7. Last Execution
The timestamp of the last execution. This provides recency and context, showing exactly when a spec was last run and on which branch.
This information is vital for determining if a failure is a current, relevant issue or an old problem on an inactive branch.
Hovering over the branch name reveals the specific Branch and Environment, which helps diagnose environment-specific issues.
Controls and Filters
- Search specs: Find a specific file by entering its name. The table updates to show only matching results.
- Time Period: Scope the data to a specific window. Options include Last 7, 14, 30, 60, and 90 Days.
- Environments: Filter the view to show specifications run in one or more selected environments, such as production, staging, or qa.
- Sync: Manually refresh the data to include the most recent test executions.
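The time-period and environment filters above effectively apply a predicate to each run record before the table's metrics are computed. A minimal sketch of that scoping logic (parameter names, defaults, and the explicit `now` argument are illustrative):

```python
from datetime import datetime, timedelta

def in_scope(run_time: datetime, env: str, now: datetime,
             days: int = 30,
             envs: tuple[str, ...] = ("production", "staging")) -> bool:
    # Keep a run only if it falls inside the selected time window and
    # was executed in one of the selected environments.
    return now - run_time <= timedelta(days=days) and env in envs

now = datetime(2024, 6, 1)
print(in_scope(datetime(2024, 5, 20), "staging", now))    # True
print(in_scope(datetime(2024, 1, 1), "production", now))  # False (too old)
print(in_scope(datetime(2024, 5, 20), "qa", now))         # False (env filtered out)
```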
Quick Start Steps
1. Set Scope
Use the Time Period and All Environments filters to narrow the focus to a specific timeframe and testing environment.
2. Identify Targets
Sort the table to find outliers. Click the header for Failure Rate or Flaky Rate to bring the most unstable specs to the top. Sort by Avg Duration to find the slowest ones.
3. Find a Specific Spec
Use the Search specs bar to locate a particular spec file directly without scrolling or sorting.