TestDino groups errors by message and uses AI to categorize failures by root cause. Instead of investigating each failure individually, identify the common cause and fix it.

Quick Reference

| Grouping Level | What It Shows |
| --- | --- |
| Error message | Tests that failed with the same error text |
| Error category | Tests grouped by AI classification |

How Error Grouping Works

When multiple tests fail with similar errors, TestDino groups them by error message. Errors are matched by:
  • Error message text
  • Stack trace patterns
  • Failure location in code
Similar errors appear as a single group with a count of affected tests.
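TestDino's matching logic is internal, but the idea can be sketched as normalizing away run-specific details (timeout values, quoted dynamic strings) so that similar errors share one grouping key. This is an illustrative approximation only, not TestDino's actual implementation:

```python
import re
from collections import defaultdict

def normalize(message: str) -> str:
    """Collapse run-specific details so similar errors share one key.
    (Illustrative only -- the real matching also weighs stack trace
    patterns and failure locations.)"""
    msg = re.sub(r"\d+", "N", message)      # timeout values, counts, ids
    msg = re.sub(r'"[^"]*"', '"..."', msg)  # quoted dynamic values
    return msg.strip()

def group_errors(failures: list[dict]) -> dict[str, list[str]]:
    """Map a normalized error key to the list of affected tests."""
    groups: dict[str, list[str]] = defaultdict(list)
    for failure in failures:
        groups[normalize(failure["error"])].append(failure["test"])
    return groups

failures = [
    {"test": "checkout spec", "error": "Timeout 30000ms exceeded"},
    {"test": "cart spec",     "error": "Timeout 15000ms exceeded"},
    {"test": "login spec",    "error": 'locator("#btn") not found'},
]
groups = group_errors(failures)
# The two timeout failures collapse into one group with two affected tests.
```

Both timeout errors normalize to the same key, so they surface as a single group rather than two separate investigations.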

View Error Groups

  1. Open a test run
  2. Go to the Errors tab
  3. Expand an error group to see all affected tests

View Error Categories

  1. Navigate to the Analytics section
  2. Click the Errors tab to view error patterns
  3. Review error patterns across multiple runs
Analytics shows error trends over time. A new error group appearing after a deployment typically indicates a regression.
TestDino classifies errors into categories:
| Category | Description |
| --- | --- |
| Assertion Failures | Expected values did not match actual values |
| Timeout Issues | Actions or waits exceeded time limits |
| Element Not Found | Locators did not resolve to elements |
| Network Issues | HTTP requests failed or returned errors |
| JavaScript Errors | Runtime errors in browser or test code |
| Browser Issues | Browser launch, context, or rendering problems |
| Other Failures | Errors outside the above categories |
Filter by category to focus on specific error types.
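TestDino assigns these categories with its AI classifier, but the category boundaries can be approximated with simple keyword rules. A minimal rule-of-thumb sketch (the patterns here are illustrative, not TestDino's actual rules):

```python
import re

# Ordered (pattern, category) rules approximating the table above.
CATEGORY_RULES = [
    (r"expect\(.*\)\.", "Assertion Failures"),
    (r"Timeout \d+ms exceeded", "Timeout Issues"),
    (r"not found|strict mode violation", "Element Not Found"),
    (r"net::ERR_|ECONNREFUSED", "Network Issues"),
    (r"ReferenceError|TypeError|SyntaxError", "JavaScript Errors"),
    (r"Failed to launch|browser has been closed", "Browser Issues"),
]

def categorize(error: str) -> str:
    """Return the first matching category, else the catch-all."""
    for pattern, category in CATEGORY_RULES:
        if re.search(pattern, error):
            return category
    return "Other Failures"
```

For example, `categorize("Timeout 30000ms exceeded")` yields `"Timeout Issues"`, and anything unmatched falls through to `"Other Failures"`.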

AI Failure Classification

Error groups inherit AI classifications:
  • Actual Bug: A repeatable product defect. The same error occurs across runs. Fix the application code.
  • UI Change: The interface changed, and the test no longer matches. Update selectors or assertions.
  • Unstable Test: Intermittent failure from timing, state, or environment issues. Stabilize the test.
  • Miscellaneous: Environment or configuration problems. Fix infrastructure or CI setup.
Each classification includes a confidence score.

AI Insights View

Open AI Insights from the sidebar for a cross-run view. It shows:
  • Key Metrics: The count and affected tests for each classification (Actual Bug, UI Change, Unstable Test, Miscellaneous).
  • Failure Patterns: The same error appearing in multiple runs surfaces as either:
    • Persistent Failures: Tests failing across multiple runs.
    • Emerging Failures: Tests that started failing recently.

Test Case AI Insights

Open a specific test case and go to the AI Insights tab. This shows:
  • Category and confidence score
  • Root cause analysis
  • Historical context
  • Suggested fixes
Use the feedback form to correct misclassifications. Your input improves future analysis.

Error Analytics

Open Analytics → Errors for trends over time:
  • Error Message Over Time: Line graph showing error frequency by category. Identify spikes and trends.
  • Error Categories Table: Breakdown of errors by type with occurrence counts, affected tests, and first/last detected dates.
Click any error to see all affected test cases.

Common Error Patterns

Error: locator.click: Error: strict mode violation
Multiple tests targeting the same element fail when the selector breaks. Fix the selector once.

Error: Timeout 30000ms exceeded
Often indicates a shared dependency: slow API, missing service, or environment issue. Check what the affected tests have in common.

Error: expect(received).toBe(expected)
The same assertion failing across tests may indicate a data issue or application bug affecting multiple pages.

Error: net::ERR_CONNECTION_REFUSED
Service unavailable. All tests depending on that service fail together.
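The patterns above share one property: a single root cause fails many tests at once. A quick triage heuristic, sketched below under the assumption that error groups are available as a mapping from error message to affected tests, is to sort groups by blast radius so the fix with the biggest payoff comes first:

```python
def triage(groups: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Order error groups by affected-test count, largest first."""
    return sorted(
        ((error, len(tests)) for error, tests in groups.items()),
        key=lambda item: item[1],
        reverse=True,
    )

# Hypothetical error groups for illustration.
groups = {
    "Timeout 30000ms exceeded": ["checkout", "cart", "search"],
    "net::ERR_CONNECTION_REFUSED": ["login", "profile"],
    "expect(received).toBe(expected)": ["pricing"],
}
# triage(groups)[0] is the timeout group: three tests, one shared cause.
```

Fixing the top group (likely one slow or missing dependency) resolves three failures with a single change.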

Create Tickets from Error Groups

When an error group needs attention:
1

Open the error group

Navigate to the error group you want to address
2

Raise an issue

Click Raise Bug or Raise Issue
3

Select your tool

Select Jira, Linear, Asana, or Monday
4

Review and submit

TestDino pre-fills the ticket with error details and the affected test count
The ticket links back to the error group for context.
See also: AI Insights for detailed classification information