This guide covers how to compare attempts, read evidence panels, and analyze error messages. To learn more about preventing flaky tests, see Prevent Flaky Tests.

Check the Flaky Category

TestDino assigns a sub-category to each flaky test:
  • Timing Related: race conditions, animation waits, polling intervals
  • Environment Dependent: CI runner differences, resource constraints, parallel execution
  • Network Dependent: API timeouts, rate limits, service availability
  • Assertion Intermittent: dynamic data, timestamps, random values
Open the test run summary to see which category applies.
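Flaky classification depends on retries being enabled, so a test can both fail and pass within the same run. A minimal Playwright config sketch (the retry counts are illustrative; tune them for your suite):

```typescript
// playwright.config.ts — illustrative retry settings.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry failed tests in CI so a fail-then-pass sequence is recorded
  // and can be classified as flaky rather than a hard failure.
  retries: process.env.CI ? 2 : 0,
});
```

Without retries, an intermittent failure simply reports as failed and never surfaces in flaky analysis.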

Compare Passing and Failing Attempts

Open the test case details. The evidence panel shows tabs for each attempt: Run, Retry 1, Retry 2.
Evidence panel showing tabs for each test attempt with screenshots and console logs
Compare:
  • Screenshots: Look for UI differences between attempts
  • Console logs: Check for errors or warnings that appear only in failures
  • Network requests: Identify slow or failed API calls
  • Timing: Note duration differences between attempts
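What appears per attempt depends on what the test run captures. A hedged Playwright config sketch for recording screenshots, video, and traces around failures and retries:

```typescript
// playwright.config.ts — capture evidence only when it is useful for debugging.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // one screenshot per failed attempt
    video: 'retain-on-failure',    // keep video only for failures
    trace: 'on-first-retry',       // record a trace when a test retries
  },
});
```

Recorded traces can also be opened locally with `npx playwright show-trace <path-to-trace.zip>`.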

Use the Trace Viewer

If traces are enabled, open the trace for both passing and failing attempts. The trace shows:
  • Action timeline with exact timestamps
  • Network requests and responses
  • Console output
  • DOM snapshots at each step
Look for:
  • Actions that take longer in failing runs
  • Network requests that timeout or return errors
  • Elements that appear at different times

Review Test History

Open the History tab for the test case. Look for patterns:
  • Does it fail at specific times of day?
  • Does it fail more on certain branches?
  • Did flakiness start after a specific commit?
Test case history showing execution timeline with status and duration
The execution history table shows status, duration, and retries for each run.
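The time-of-day and branch questions above can also be answered by scripting over exported run data. A self-contained sketch (the `Run` shape and sample records are hypothetical, not a TestDino export format):

```typescript
// Group failures by hour of day to spot time-correlated flakiness.
// The `runs` array is hypothetical sample data, not a TestDino export.
interface Run { status: 'passed' | 'failed'; startedAt: string; branch: string }

const runs: Run[] = [
  { status: 'failed', startedAt: '2024-05-01T03:15:00Z', branch: 'main' },
  { status: 'passed', startedAt: '2024-05-01T10:30:00Z', branch: 'main' },
  { status: 'failed', startedAt: '2024-05-02T03:40:00Z', branch: 'feature-x' },
  { status: 'passed', startedAt: '2024-05-02T14:05:00Z', branch: 'main' },
];

const failuresByHour = new Map<number, number>();
for (const run of runs) {
  if (run.status !== 'failed') continue;
  const hour = new Date(run.startedAt).getUTCHours();
  failuresByHour.set(hour, (failuresByHour.get(hour) ?? 0) + 1);
}

console.log(failuresByHour); // in this sample, failures cluster around 03:00 UTC
```

The same grouping applied to `branch` instead of hour answers the branch question.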

Check Environment Differences

Open Test Run → Configuration. Compare flaky rates across environments. If a test is flaky only in CI but not locally:
  • Check runner resource limits (CPU, memory)
  • Verify browser versions match
  • Look for parallel test interference
If a test is flaky only in specific environments:
  • Check environment-specific configuration
  • Verify test data availability
  • Look for service dependencies
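If CI-only flakiness traces back to resource contention, reducing parallelism and widening timeouts in CI is a common mitigation. A sketch using Playwright's standard config options (the specific numbers are illustrative):

```typescript
// playwright.config.ts — illustrative CI-specific settings.
import { defineConfig } from '@playwright/test';

const isCI = !!process.env.CI;

export default defineConfig({
  workers: isCI ? 2 : undefined,    // limit parallelism on constrained runners
  timeout: isCI ? 60_000 : 30_000,  // per-test timeout; CI runners are often slower
  expect: { timeout: isCI ? 10_000 : 5_000 }, // assertion polling window
});
```

If flakiness disappears under these settings, resource limits or parallel interference is the likely cause.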

Analyze Error Messages

Open the Errors tab in the test run. Group failures by error message. Common flaky error patterns:
  • Element appears at inconsistent times: add explicit waits or check for loading states.
  • Element exists but is hidden or obscured: check for overlays, animations, or scroll position.
  • Dynamic data changes between runs: mock the data or use flexible assertions.
  • Service is unavailable intermittently: add retry logic or mock the endpoint.
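For the dynamic-data pattern, one flexible-assertion approach is to compare only stable fields. A minimal sketch (the `stableView` helper and sample payloads are hypothetical):

```typescript
// Strip volatile fields before comparing payloads, so assertions
// don't fail on timestamps or generated IDs that differ between runs.
type Payload = Record<string, unknown>;

function stableView(payload: Payload, volatileKeys: string[]): Payload {
  const copy: Payload = { ...payload };
  for (const key of volatileKeys) delete copy[key];
  return copy;
}

// Two runs return the same logical data with different volatile fields.
const run1 = { user: 'ada', createdAt: '2024-01-01T10:00:00Z', id: 'a1b2' };
const run2 = { user: 'ada', createdAt: '2024-01-02T09:30:00Z', id: 'c3d4' };

const a = JSON.stringify(stableView(run1, ['createdAt', 'id']));
const b = JSON.stringify(stableView(run2, ['createdAt', 'id']));
console.log(a === b); // the runs agree once volatile fields are removed
```

For the timing patterns, Playwright's auto-waiting assertions (for example `await expect(locator).toBeVisible()`) replace fixed sleeps, and `page.route()` can mock intermittently unavailable endpoints.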

Use AI Insights

Open the AI Insights tab for the test case. TestDino provides:
  • Root cause based on error patterns
  • Historical context from similar failures
  • Suggested fixes
AI Insights showing root cause analysis and suggested fixes

Document Your Findings

Once you identify the root cause:
  1. Create an issue in Jira, Linear, or Asana from TestDino.
  2. Include the test name, flaky category, and evidence links.
  3. Reference specific runs that demonstrate the issue.
Issue tracking integration showing created issue
This helps your team and improves future AI classifications.