The Test Runs page lists every Playwright test execution in your project. Identify failing or flaky runs, confirm where they happened (branch and environment), and open detailed evidence for debugging.

Search and Filters

| Control | Purpose | Options |
|---|---|---|
| Search | Find runs by text or ID | Commit message, run number (for example, #1493) |
| Time Period | Limit runs to a date range | Last 24 hours, 3 days, 7 days, 14 days, 30 days, Custom |
| Test Status | Filter by outcome | Passed, Failed, Skipped, Flaky |
| Duration | Sort by runtime | Low to High, High to Low |
| Author | Show runs by author | Select one or more authors |
| Environment | Focus on a mapped environment | production, development, hotfix |
| Branch | Scope by branch | Select one or more branches |
| Tags | Filter by run-level or test-case-level tags | Switch between the Run Tags and Case Tags tabs, search, then select one or more tags |

Tags Filter

The Tags dropdown contains two tabs:
| Tab | What it filters |
|---|---|
| Run Tags | Tags attached to the entire test run via the --tag CLI flag |
| Case Tags | Tags set on individual test cases via Playwright's tag metadata |
Type in the search box to find a specific tag. Select one or more tags to filter the list. Multiple tags use OR logic: a run matches if it contains any of the selected tags.
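The OR semantics described above can be sketched as follows. This is an illustrative model, not TestDino's actual code; the `TestRun` shape and function names are assumptions.

```typescript
// Hypothetical model of a run and its tags (not TestDino's internal API).
interface TestRun {
  id: string;
  tags: string[];
}

// OR logic: a run matches if it carries ANY of the selected tags.
// An empty selection means no filter is applied, so every run matches.
function matchesTagFilter(run: TestRun, selected: string[]): boolean {
  if (selected.length === 0) return true;
  return selected.some((tag) => run.tags.includes(tag));
}

const runs: TestRun[] = [
  { id: "#1493", tags: ["regression", "nightly"] },
  { id: "#1494", tags: ["smoke"] },
  { id: "#1495", tags: [] },
];

const filtered = runs.filter((r) => matchesTagFilter(r, ["smoke", "nightly"]));
console.log(filtered.map((r) => r.id)); // → ["#1493", "#1494"]
```

Selecting `smoke` and `nightly` keeps any run that has either tag, so only the run with no tags is filtered out.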

Active Test Runs

Runs currently executing appear in a collapsible Active Test Runs section at the top of the list. Results update in real time as tests complete. Each active run displays a progress bar; live pass/fail/skip counts; and the commit, branch, and CI source.

[Image: Active test run with sharded execution showing shard tabs, worker status, and live progress bar]

For sharded runs, the run is labeled SHARDED with tabs for each shard. Select a shard tab to view its workers and currently executing tests. Non-sharded runs show a single progress bar with per-worker detail.

Test Run Key Columns

[Image: Test runs list showing run ID, commit info, branch, environment, test result counts, and AI Insights columns]

| Column | Description |
|---|---|
| Test Run | Run ID, start time, and executor (CI or Local). Click the CI label to open the job. |
| Commit | Commit message, short SHA, and author. Links to the commit in your Git host. |
| Branch & Environment | Branch name, mapped environment label, and run-level tag chips. When more tags exist than the row can display, a +N badge shows the remaining count. |
| Test Results | Counts for Passed, Failed, Flaky, Skipped, Interrupted, and the total. |
| AI Insights | Category counts for Actual Bug, UI Change, Unstable, and Miscellaneous. |
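The +N overflow badge mentioned above can be modeled as a small function: show at most a fixed number of chips and collapse the rest into a count. This is an illustrative sketch of the behavior, not TestDino's implementation; `chipLabels` and `maxVisible` are hypothetical names.

```typescript
// Collapse overflowing tag chips into a "+N" badge, where N is the
// number of tags that did not fit in the row.
function chipLabels(tags: string[], maxVisible: number): string[] {
  if (tags.length <= maxVisible) return tags;
  return [...tags.slice(0, maxVisible), `+${tags.length - maxVisible}`];
}

console.log(chipLabels(["regression", "smoke", "nightly", "v1.2.3"], 2));
// → ["regression", "smoke", "+2"]
```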

Test Run Grouping

Runs that share the same commit hash and commit message are grouped as attempts by TestDino. This usually happens when you rerun a CI workflow or trigger multiple executions for the same commit. Expand the group to see each attempt (for example, Attempt #1, Attempt #2). This grouping helps you:
  • Track reruns for a single commit without scanning separate rows
  • Compare results across attempts to confirm if a rerun fixed flaky failures
  • See how many times a workflow was triggered for the same code change
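The grouping rule above (same commit hash and same commit message) can be sketched as a simple keyed fold. This is a conceptual model only; the `Run` shape and `groupAttempts` are hypothetical, not TestDino's internals.

```typescript
// Hypothetical run record for illustrating the grouping rule.
interface Run {
  id: number;
  commitSha: string;
  commitMessage: string;
}

// Runs sharing both the commit SHA and the commit message fold into one
// group; each element of a group corresponds to one attempt.
function groupAttempts(runs: Run[]): Map<string, Run[]> {
  const groups = new Map<string, Run[]>();
  for (const run of runs) {
    const key = `${run.commitSha}::${run.commitMessage}`;
    const bucket = groups.get(key) ?? [];
    bucket.push(run);
    groups.set(key, bucket);
  }
  return groups;
}

const runs: Run[] = [
  { id: 1, commitSha: "a1b2c3d", commitMessage: "fix login flow" },
  { id: 2, commitSha: "a1b2c3d", commitMessage: "fix login flow" }, // CI rerun
  { id: 3, commitSha: "e4f5a6b", commitMessage: "add checkout test" },
];

const grouped = groupAttempts(runs);
console.log(grouped.size); // → 2
```

The two reruns of `a1b2c3d` collapse into one group with two attempts, while the unrelated commit stays in its own group.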

Run-Level Tags

Attach labels to an entire test run using the --tag CLI flag. Tags appear as chips on each run row in the list and are available as filter values.
```shell
npx tdpw test --tag="regression,smoke"
```
Use run-level tags to label runs by build number, sprint, release, or test type. These are separate from test-case-level tags set via annotations.
| Tag type | Set via | Scope | Example |
|---|---|---|---|
| Run-level | --tag CLI flag | Entire test run | regression, sprint-42, nightly |
| Test-case-level | Test annotations | Individual test cases | smoke, critical-path, login |
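Test-case-level tags are set with Playwright's built-in tag metadata (available since Playwright 1.42). A minimal sketch, assuming a hypothetical login page URL:

```typescript
import { test, expect } from '@playwright/test';

// Case-level tags attached via Playwright's tag metadata. These surface
// as Case Tags in the Tags filter. The URL below is a placeholder.
test('user can log in', { tag: ['@smoke', '@critical-path'] }, async ({ page }) => {
  await page.goto('https://example.com/login');
  await expect(page).toHaveTitle(/Login/);
});
```

Tagged cases can also be selected at the command line with Playwright's `--grep` flag, for example `npx playwright test --grep @smoke`.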

Run Details Header

Opening a test run displays a header bar above the detail tabs. The header contains:
| Element | Description |
|---|---|
| Commit message | The commit message and run number |
| Environment | Mapped environment badge (for example, STAGE) |
| Branch | Branch name |
| Commit SHA | Short SHA linking to the commit in your Git host |
| Author | Committer name |
| Timestamp | When the run started |
| Duration | Total run time |
| Tags | Run-level tags displayed as chips (for example, @regression, @smoke, @v1.2.3) |
Tags in the header are the same run-level tags set via the --tag CLI flag. They are visible across all detail tabs (Summary, Specs, Errors, History, Configuration, Coverage, AI Insights).

Quick Start Steps

  1. Set scope - Filter by Time Period, Environment, Branch, Author, Status, or Tags, and sort by Duration to focus the list.
  2. Scan and open - Review result counts and AI labels, then open a run that needs action.
  3. Review details - The run details page provides seven tabs:
    • Summary: Totals for Failed, Flaky, and Skipped with sub-causes and test case analysis
    • Specs: File-centric and tag-centric views. Switch between Spec File and Tag sub-views to group by file or by tag.
    • Errors: Groups failed and flaky tests by error message. Jump to stack traces.
    • History: Outcome and runtime charts across recent runs. Spot spikes and regressions.
    • Configuration: Source, CI, system, and test settings. Detect config drift.
    • Coverage: Statement, branch, function, and line coverage with per-file breakdown.
    • AI Insights: Category breakdowns, error variants, and patterns for new or recurring issues.
Related topics: runs, CI optimization, and streaming.