The AI Insights tab analyzes a test run and summarizes failures. It groups similar errors, assigns a category to each failure, and highlights patterns across recent executions.

[Image: AI Insights KPI tiles showing error variants, failure categorization, and failure patterns]

KPI Tiles

Error Variants

Shows distinct error signatures, for example timeout, element not found, network error, or timing-related, and how many tests match each variant. Use this to identify the most common failure shape in the run.
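Grouping similar errors into variants typically works by normalizing away volatile details (numbers, quoted selectors, hex ids) so related messages share one signature. A minimal sketch of that idea; the normalization rules here are assumptions, not the product's actual grouping logic:

```python
import re
from collections import Counter

def error_signature(message: str) -> str:
    """Collapse volatile details so similar errors share one signature.
    Illustrative normalization only."""
    sig = re.sub(r"'[^']*'|\"[^\"]*\"", "<value>", message)  # quoted values
    sig = re.sub(r"0x[0-9a-fA-F]+|\d+", "<n>", sig)          # ids and numbers
    return sig.strip()

def group_variants(messages):
    """Return a Counter mapping each variant signature to its test count."""
    return Counter(error_signature(m) for m in messages)

counts = group_variants([
    "Timeout after 30000 ms waiting for '#submit'",
    "Timeout after 45000 ms waiting for '#login'",
    "Element '#cart' not found",
])
# The two timeouts collapse into one variant; the missing element stays separate.
```

With this normalization, differing timeout values and selectors no longer split what is effectively the same failure into separate variants.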

AI Failure Categorization

Shows how failures are classified into:
  • Actual bug
  • UI change
  • Unstable
  • Miscellaneous
Each label includes a confidence score. Use this to separate likely product issues from unstable tests.
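A category plus a confidence score lends itself to simple triage: keep only high-confidence "Actual bug" rows when deciding what to file. A sketch of that filter, assuming hypothetical field names and a hypothetical 0.8 threshold (neither is documented product behavior):

```python
from dataclasses import dataclass

@dataclass
class CategorizedFailure:
    test: str
    category: str      # "Actual bug" | "UI change" | "Unstable" | "Miscellaneous"
    confidence: float  # 0.0 to 1.0

def likely_product_issues(rows, min_confidence=0.8):
    """Keep failures labeled as actual bugs with high confidence.
    Threshold and field names are illustrative assumptions."""
    return [r for r in rows
            if r.category == "Actual bug" and r.confidence >= min_confidence]

rows = [
    CategorizedFailure("checkout_total", "Actual bug", 0.92),
    CategorizedFailure("flaky_banner", "Unstable", 0.71),
    CategorizedFailure("login_redirect", "Actual bug", 0.55),
]
bugs = likely_product_issues(rows)
# Only checkout_total clears both the category and confidence bars.
```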

Failure Patterns

Highlights how failing tests behave across recent executions:
  • New Failures: Tests that started failing within the selected window.
  • Regressions: Tests that passed recently but now fail again.
  • Consistent Failures: Tests failing across most or all recent runs.
Use this to decide what to investigate first.
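The three patterns above can be read as rules over a test's recent pass/fail history. A rough heuristic for illustration, with an assumed 80% failure-rate cutoff for "consistent"; the product's exact rules may differ:

```python
def classify_pattern(history):
    """history: run outcomes oldest-to-newest, True means the test passed.
    Returns one of the three pattern labels, or None if currently passing.
    Thresholds are illustrative assumptions."""
    if history[-1]:
        return None  # passing in the latest run: no active failure pattern
    fail_rate = history.count(False) / len(history)
    if fail_rate >= 0.8:
        return "Consistent Failure"   # failing across most or all recent runs
    if all(history[:-1]):
        return "New Failure"          # first failure within the window
    return "Regression"               # failed before, recovered, now failing again
```

For example, `[True, True, False]` reads as a New Failure, `[False, True, False]` as a Regression, and `[False, False, False]` as a Consistent Failure.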

Error Analysis

This table lists failing tests with:
  • Test case
  • Failure category
  • Error text
  • Error variant
  • Duration
Use the duration filter to scope by runtime, for example, fast or medium, when investigating clusters.
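The duration filter amounts to bucketing each test's runtime and keeping matching rows. A sketch, assuming hypothetical cutoffs of 5 s for fast and 30 s for medium (the product's actual thresholds are not documented here):

```python
def duration_bucket(seconds, fast_max=5.0, medium_max=30.0):
    """Map a runtime to a bucket label. Cutoffs are illustrative."""
    if seconds <= fast_max:
        return "fast"
    if seconds <= medium_max:
        return "medium"
    return "slow"

def filter_by_duration(rows, bucket):
    """rows: (test_name, duration_seconds) pairs; keep rows in the bucket."""
    return [r for r in rows if duration_bucket(r[1]) == bucket]

rows = [("login", 2.1), ("checkout", 14.0), ("report_export", 95.0)]
medium = filter_by_duration(rows, "medium")
# Only checkout falls between the fast and medium cutoffs.
```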

Filtering

Selecting any tile (variant, category, or pattern) adds an active filter chip and scopes the table to matching tests. Combine one Error Variant with one AI Category to get a focused slice, for example, Actual Bug + Timeout.
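Active filter chips combine with AND: a row must match every chip to stay in the table. A minimal sketch of that scoping, using hypothetical row and chip field names rather than the product's query API:

```python
def apply_chips(rows, chips):
    """rows: dicts with 'variant' and 'category' keys.
    chips: active filters, e.g. {"variant": "Timeout", "category": "Actual bug"}.
    Chips combine with AND, mirroring how selecting tiles scopes the table."""
    return [r for r in rows
            if all(r.get(field) == value for field, value in chips.items())]

rows = [
    {"test": "checkout_total", "variant": "Timeout", "category": "Actual bug"},
    {"test": "flaky_banner", "variant": "Timeout", "category": "Unstable"},
    {"test": "cart_badge", "variant": "Element not found", "category": "Actual bug"},
]
sliced = apply_chips(rows, {"variant": "Timeout", "category": "Actual bug"})
# The Actual Bug + Timeout slice keeps only checkout_total.
```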