Test Cases

Overview

Summarizes the current test's state for this run: status, primary cause, runtime, attempts, and links to evidence.

KPI Tiles

1. Status

Outcome for this run: Passed, Failed, Skipped, or Flaky. When not passed, the primary technical cause is identified, allowing triage to start with context.

2. Why failing

AI category for the failure: Actual Bug, UI Change, Unstable, or Miscellaneous, with a confidence score. Helps decide whether to fix code, update selectors, or stabilize the test.

Feedback on classification
Available for failed or flaky tests. Use the Test Failure Feedback form to set the correct category and add optional context.

Your input updates this run and improves future classification, making AI Insights more reliable for the team.


3. Total runtime

Total execution time for this test in the current run. Useful for spotting slowdowns after code or configuration changes.

4. Attempts

Number of attempts executed according to your retry settings. A pass after a retry often signals instability that needs cleanup.
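As a reference, retries are typically configured in playwright.config.ts; a minimal sketch (the count is illustrative, not a TestDino requirement):

```ts
// playwright.config.ts — minimal sketch; adjust the count to your project
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Re-run each failed test up to two more times; every attempt
  // shows up as its own tab (Run, Retry 1, Retry 2).
  retries: 2,
});
```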

Evidence panels


Evidence is organized into tabs, one per attempt (Run, Retry 1, Retry 2).

1. Error details

Exact error text and key line. Copy to reproduce locally or link in a ticket.

2. Test steps

Step list with per-step timing. Confirms where the error occurred in the flow.
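A minimal sketch of how named steps can be produced with Playwright's test.step; the step names, route, and selector are illustrative:

```ts
import { test } from '@playwright/test';

test('checkout flow', async ({ page }) => {
  // Each named step is reported separately with its own timing.
  await test.step('open cart', async () => {
    await page.goto('/cart'); // illustrative route
  });

  await test.step('apply coupon', async () => {
    await page.getByRole('button', { name: 'Apply' }).click(); // illustrative selector
  });
});
```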

3. Screenshots

Captured frames from the attempt. Validate UI state at the point of failure.
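One way to have Playwright capture screenshots automatically is via the screenshot option; a sketch with an illustrative value:

```ts
// playwright.config.ts (excerpt) — sketch only
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Attach a screenshot automatically whenever a test fails.
    screenshot: 'only-on-failure',
  },
});
```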

4. Console

Browser console output. Use it to correlate network or script errors with UI symptoms.
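Browser console messages can also be surfaced from the test itself; a minimal sketch (test name and URL are illustrative):

```ts
import { test } from '@playwright/test';

test('surface browser console output', async ({ page }) => {
  // Forward browser console messages so they can be correlated with UI symptoms.
  page.on('console', (msg) => {
    console.log(`[browser ${msg.type()}] ${msg.text()}`);
  });

  await page.goto('https://example.com'); // illustrative URL
});
```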

5. Video

Full recording of the attempt. View the timeline to verify the sequence leading to the error, timing between steps, and visual state across retries.
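Video recording is typically enabled in the Playwright config; a sketch with an illustrative value:

```ts
// playwright.config.ts (excerpt) — sketch only
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Keep recordings only for failing tests to limit artifact size.
    video: 'retain-on-failure',
  },
});
```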

6. Attachments

Interactive Playwright trace for the attempt. Inspect the timeline, actions, network calls, console, and DOM snapshots; jump to the failing step for root-cause analysis.


Visible only when Playwright tracing is enabled (for example, trace: 'on' or trace: 'on-first-retry').
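A minimal sketch of enabling tracing in playwright.config.ts (choose the value that matches your retry strategy):

```ts
// playwright.config.ts (excerpt) — sketch only
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Record an interactive trace on the first retry of a failed test.
    trace: 'on-first-retry',
  },
});
```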

7. Feature Comparison

Enables fast review of snapshot failures without leaving TestDino. The panel appears for tests that use Playwright visual assertions (for example, toHaveScreenshot).

Use Visual Comparison to review the Actual, Expected, Diff, Side-by-Side, and Slider views of a screenshot.

| Mode | What it shows | How it helps |
| --- | --- | --- |
| Diff | Colored overlays for changed regions | Pinpoints small layout or visual shifts quickly |
| Actual | Runtime screenshot from the failing attempt | Inspect what was rendered during the test |
| Expected | Stored baseline (reference) image | Confirm whether the baseline must change |
| Side by Side | Expected and actual in two panes | Compare at a glance and scan across elements |
| Slider | Interactive sweep between the two images | Examine precise areas for subtle differences |
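A minimal sketch of a Playwright visual assertion that generates these snapshot artifacts (snapshot name, route, and threshold are illustrative):

```ts
import { test, expect } from '@playwright/test';

test('landing page matches baseline', async ({ page }) => {
  await page.goto('/'); // illustrative route
  // On mismatch, Playwright attaches actual, expected, and diff images,
  // which is what the comparison views above are built from.
  await expect(page).toHaveScreenshot('landing.png', { maxDiffPixels: 100 });
});
```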

Note:

  • Visible only when visual comparisons were generated by the test suite.

  • If no snapshot artifacts exist, the Image Mismatch panel is hidden.