TestDino FAQs
Get quick answers to the most common questions about TestDino. Browse through topics including getting started, API keys, test runs, AI insights, integrations, and billing. Click on any question to expand the answer.
Getting Started
What is TestDino and how does it work?
TestDino is an AI-native, Playwright-focused test reporting and management platform with MCP support. It ingests Playwright reports from CI or local execution, classifies failures using AI, and provides actionable insights.
How it works:
- Configure Playwright to emit JSON and HTML reports
- Upload reports using the CLI (tdpw or testdino) or through CI
- TestDino processes results, applies AI classification, and links runs to branches/PRs
- View results in dashboards, track trends in Analytics, and create tickets from failures
What problems does TestDino solve for QA teams?
TestDino eliminates the 6 to 8 hours teams spend weekly on manual test failure analysis:
- Manual triage - AI classifies failures as bugs, flaky tests, or UI changes
- Scattered evidence - Aggregates screenshots, videos, traces, and logs in one place
- No historical context - Tracks trends and flakiness across runs
- Slow handoffs - Pre-fills Jira/Linear/Asana tickets with full context
- Unclear readiness - GitHub CI Checks give clear pass/fail signals
How do I get started with TestDino?
To get started:
- Create an organization and a project
- Generate an API key from your project settings
- Configure your Playwright reporter to output JSON format
- Upload your first test run using the TestDino CLI
For detailed instructions, see the Getting Started guide.
Do I need to change my existing Playwright tests?
No. TestDino works with your existing Playwright tests without code modifications.
You only need to:
- Add JSON and HTML reporters to your playwright.config.js
- Upload reports using the TestDino CLI after tests run
TestDino reads Playwright's standard report output. It doesn't require custom annotations, special imports, or framework changes.
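For reference, a minimal reporter setup might look like the sketch below; the output paths are illustrative, not required values:

```ts
// playwright.config.ts — minimal sketch; the outputFile path is illustrative
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['json', { outputFile: 'playwright-report/report.json' }], // machine-readable results for upload
    ['html', { open: 'never' }],                               // visual report with artifacts
  ],
});
```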
How does TestDino differ from Playwright's built-in HTML and JSON reporters?
Playwright reporters show a single run snapshot. TestDino adds:
- Cross-run analytics: Trends and failure patterns over time
- AI classification: Automatic categorization (Bug, UI Change, Flaky, Misc)
- Git and PR awareness: Links test runs to commits, branches, and PRs
- Integrations: Jira, Linear, Asana, Slack, GitHub
- Historical tracking: Stability scores and regression detection
Playwright shows what happened. TestDino explains why it happened and what to do next.
How do I upload Playwright reports using the JavaScript or Python CLI?
JavaScript (tdpw):

```bash
npx tdpw upload ./playwright-report --token="your-api-key" --upload-html
```

Python (testdino):

```bash
# Run tests with JSON output
pytest --playwright-json=test-results/report.json

# Upload
testdino upload ./test-results --token="your-api-key" --upload-full-json
```

Key flags: --upload-html, --upload-images, --upload-videos, --upload-traces, --upload-full-json
API Keys and Authentication
How do I generate and manage API keys?
- Go to Project Settings > API Keys
- Click Generate Key
- Name the key and set an expiration (if available)
- Copy the secret immediately and store it in your secret manager
- Use it in CI as an environment variable, then reference it in the upload command
My API key expired. How do I rotate it?
- Generate a new key in Project Settings > API Keys
- Update CI secrets with the new key
- Run one upload to confirm it works
- Revoke or delete the old key
How do I troubleshoot API request failures or run ID not found errors?
API failures:
- Verify TESTDINO_API_KEY is set correctly
- Check internet connectivity
- Look for HTTP status codes in error messages
Run ID not found:
- Use list_testruns to confirm the run exists
- Verify you're querying the correct project
- Check if the run ID format is correct (or use the counter instead)
Test Runs and Uploads
How do I upload test results from CI?
Run tests, then upload the report folder:
```bash
npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-html
```

Why are my uploaded runs not appearing?
Check these:
- API key: Verify the token is correct and not expired
- Report path: Ensure the folder contains report.json
- Project match: API key must belong to the target project
- Upload success: Check CLI output for errors
- Sync: Click the Sync button in the Test Runs view
Use --verbose for detailed upload logs.
What is the difference between a test run, a test case, and a spec file?
- Test Run: One full execution of your suite, equivalent to one playwright test command
- Test Case: One individual test, equivalent to one test() block
- Spec file: One test file that contains one or more test cases
What information does the Test Run Summary provide?
It groups tests by cause with KPI tiles:
- Failed - Assertion Failure, Element Not Found, Timeout, Network, Other
- Flaky - Timing, Environment, Network, Assertion, Other
- Skipped - Manual, Configuration, Conditional
Detailed Analysis shows each test with status, spec file, duration, retries, AI category, and 10-run history preview.
Filter using tokens: s: (status), c: (cluster), @ (tag), b: (browser).
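For example, s:failed narrows the list to failed tests; the other tokens work the same way for clusters, tags, and browsers.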
Why are PRs not linking to test runs?
Requirements:
- GitHub integration must be installed and connected
- Test runs must include commit SHA metadata
- The branch must be associated with an open PR
Verify:
- Check Settings > Integrations > GitHub shows connected
- Confirm CI workflow includes git context in the upload
- Ensure PR exists for the branch
AI Insights and Classifications
How does TestDino categorize and analyze test failures using AI?
TestDino's AI groups similar errors, assigns categories, and detects patterns.
AI Categories:
- Actual Bug - Product defect → Fix the code
- UI Change - Selector/DOM changed → Update locators
- Unstable Test - Intermittent failure → Stabilize the test
- Miscellaneous - Environment/config issue → Fix infrastructure
Each failure gets a confidence score. Find AI Insights at run level, test case level, or globally.
What are Error Variants and how are they counted?
Error Variants are distinct error signatures within a category. TestDino normalizes error messages and groups duplicates.
Example:
- Locator ".submit-btn" not found (5 times) → 1 variant
- Locator "#login-form" not found (1 time) → 1 variant
- Total variants: 2 (not 6)
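In pseudocode terms, variant counting behaves like grouping normalized messages. Here is a minimal sketch under the assumption of simple string normalization; TestDino's actual normalization rules are internal:

```ts
// Sketch: count error variants by grouping normalized messages.
// trim() stands in for TestDino's real (internal) normalization logic.
function countVariants(errorMessages: string[]): Map<string, number> {
  const variants = new Map<string, number>();
  for (const msg of errorMessages) {
    const normalized = msg.trim();
    variants.set(normalized, (variants.get(normalized) ?? 0) + 1);
  }
  return variants; // size = distinct variants; values = occurrence counts
}
```

Applied to the example above, five identical ".submit-btn" errors and one "#login-form" error produce a map of size 2: two variants, six total occurrences.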
Which Failure Categories does TestDino use?
- Technical Categories: Assertion Failures, Timeout Issues, Element Not Found, Network Issues, JavaScript Errors, Browser Issues, Other Failures
- AI Categories: Actual Bug, UI Change, Unstable Test, Miscellaneous
- Flaky Sub Categories: Timing Related, Environment Dependent, Network Dependent, Assertion Intermittent, Other Flaky
Can I search for specific test cases or errors?
Yes. Use search and filters across the Test Runs and Errors views to:
- Search by commit message or run number
- Filter by status (passed, failed, flaky, skipped)
- Filter by committer, branch, or environment
- Group failures by error message in the Errors view
How do global AI insights differ from run-level AI insights?
| Aspect | Run-Level AI Insights | Global AI Insights |
|---|---|---|
| Scope | Single test run | Across runs for selected time period |
| Location | Test Runs > [Run] > AI Insights tab | AI Insights (sidebar menu) |
| Purpose | Debug this specific run | Identify cross-run patterns |
| Patterns | Error variants in this run | Persistent/Emerging failures over time |
Global AI Insights help answer: "What's repeatedly breaking across my test suite?"
How do I identify which branches or tests are most affected by recurring error types?
Use Analysis Insights:
- Error Message Over Time chart - Shows error frequency by day, highlights spikes
- Error Categories table - Each row shows:
- Error message and priority (Critical/Medium/Low)
- Top Tests Affected - Top 3 tests hitting this error
- Branches Affected - All branches where the error appeared (hover to see the list)
Flakiness and Test Health
How can I view and analyze flaky tests across multiple runs?
Multiple views available:
- QA Dashboard- "Most Flaky Tests (opens in a new tab)" section
- Analytics - "Flakiness & Test Issues (opens in a new tab)" chart with list
- Test Cases History (opens in a new tab) - Stability score and "Last Flaky" tile
- Specs Explorer - "Flaky Rate (opens in a new tab)" column for all spec files
- Developer Dashboard - "Flaky Tests Alert (opens in a new tab)" per author
How does TestDino track historical stability for individual test cases?
The Test Case History tab shows:
- Stability % - (Passed Ă· Total Runs) Ă— 100
- Last Status Tiles - Links to Last Passed, Last Failed, Last Flaky runs
- Execution History Table - Status, duration, retries per run (expandable for error details)
History is scoped to the current branch.
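For example, a test that passed 47 of its last 50 runs on the branch shows a stability score of (47 Ă· 50) Ă— 100 = 94%.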
Integrations
Which integrations does TestDino support?
TestDino supports:
- CI/CD - GitHub
- Issue tracking - Jira, Linear, Asana
- Communication - Slack App, Slack Webhook
How do I integrate TestDino with GitHub for PR checks and comments?
- Install the TestDino GitHub App
- Select repositories to grant access
- In Settings > Integrations > GitHub, configure:
- Comments - Enable PR and commit comments per environment
- CI Checks - Enable checks with pass rate thresholds
What Quality Gate rules are available for GitHub CI Checks?
Quality Gate Settings:
- Pass Rate - Minimum % of tests that must pass (default: 90%)
- Mandatory Tags - Tests with these tags (e.g., @critical) must all pass
- Flaky Handling - Strict (flaky = failure) or Neutral (flaky excluded from calculation)
- Environment Overrides - Different rules per environment
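To make the interplay of these rules concrete, here is a minimal sketch of how they might compose. The types and names are hypothetical, this is not TestDino's implementation, and how skipped tests enter the calculation is an assumption:

```ts
// Hypothetical sketch of the documented Quality Gate rules; not TestDino's code.
type Status = 'passed' | 'failed' | 'flaky' | 'skipped';
interface Result { tags: string[]; status: Status; }
interface Gate { minPassRate: number; mandatoryTags: string[]; flakyHandling: 'strict' | 'neutral'; }

function ciCheckPasses(results: Result[], gate: Gate): boolean {
  // Mandatory tags: every test carrying a mandatory tag must pass outright.
  const mandatoryOk = results
    .filter(r => r.tags.some(t => gate.mandatoryTags.includes(t)))
    .every(r => r.status === 'passed');

  // Neutral drops flaky tests from the pass-rate math; Strict counts them as failures.
  // (Excluding skipped tests here is an assumption, not documented behavior.)
  const considered = results.filter(r =>
    r.status !== 'skipped' && (gate.flakyHandling === 'strict' || r.status !== 'flaky'));
  const passed = considered.filter(r => r.status === 'passed').length;
  const passRate = considered.length ? (passed / considered.length) * 100 : 100;

  return mandatoryOk && passRate >= gate.minPassRate;
}
```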
Why might a TestDino CI Check fail even if my pass rate looks high?
Most common reason: A Mandatory Tag test failed.
If you configured @critical as mandatory and one critical test fails, the check fails regardless of the overall pass rate.
Other causes:
- Flaky Handling set to "Strict" and flaky tests present
- Environment Override has stricter rules than defaults
How do I create Jira or Linear issues from failed tests?
- Connect the integration in Project Settings > Integrations
- Configure the default project (Jira) or team (Linear)
- Open a failed test case in TestDino
- Click Raise Bug or Raise Issue
- The issue is created with test details, error message, failure history, and links
Environment Mapping and Branch Management
What is Environment Mapping, and why is it important?
Environment Mapping links Git branches to environments (Production, Staging, Dev) using exact names or regex patterns.
Configure in Settings > Branch Mapping.
Why it matters:
- Rolls up short-lived branches (feature/*, PR branches) to the correct environment
- Enables environment-specific CI Check rules
- Routes Slack notifications to the right channels
- Filters dashboards and analytics by environment
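As an illustration of how exact names and regex patterns could resolve a branch to an environment (the patterns and environment names below are examples, not defaults):

```ts
// Illustrative branch → environment resolution; patterns are examples only.
const mappings: Array<{ pattern: RegExp; environment: string }> = [
  { pattern: /^main$/, environment: 'Production' },
  { pattern: /^release\/.+/, environment: 'Staging' },
  { pattern: /.*/, environment: 'Development' }, // catch-all, listed last
];

function resolveEnvironment(branch: string): string | undefined {
  return mappings.find(m => m.pattern.test(branch))?.environment;
}

resolveEnvironment('release/2.4');   // 'Staging'
resolveEnvironment('feature/login'); // 'Development' via the catch-all
```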
Can I override environment mapping via the CLI?
Yes. Enable CLI Environment Override in Project Settings, then upload with:
```bash
npx tdpw upload ./playwright-report --token="your-token" --environment="staging"
```

What happens if a branch does not match any mapping?
The run appears without an environment label and may be missing from environment-filtered views.
Solutions:
- Add a catch-all pattern (e.g., .* → Development)
- Add patterns that match your branch naming convention
- Runs remain visible in the unfiltered Test Runs list
Organizations, Projects & Permissions
What is the difference between organizations and projects?
- Organization: Top-level container for your team, users, billing, and settings
- Project: One test suite or application with its own runs, keys, and integrations. Actions in one project don't affect others
Hierarchy: Organization → Projects → Test Runs
How do I invite team members and assign roles?
- Go to your organization's Users & Roles tab
- Click Invite Member and enter their email address
- Assign a role (Owner, Admin, Member, or Viewer)
- Track invitations and adjust permissions as your team grows
For project-level access, open Permissions within the project, click Add Member, select an organization member, and assign a project role (Admin, Editor, or Viewer).
What roles exist at the organization and project level?
Organization Roles:
- Owner - Full control, can invite/update/remove anyone
- Admin - Manages people and settings, can't remove Owner
- Member - Contributes to projects
- Viewer - Read-only access
Project Roles:
- Admin - Manage settings, add/remove members
- Editor - Edit content, assign Viewer roles
- Viewer - Read-only access
What can a Project Admin do that a Viewer cannot?
Project Admins can manage project settings, add/remove members, change roles, configure integrations, and generate/revoke API keys. Viewers have read-only access to test runs and analytics.
Both roles can view data and create Jira/Linear tickets from failed tests.
Billing and Pricing
What are the plan limits, and how is usage calculated?
Plans are typically based on test executions and user or project limits.
Usage is measured monthly and resets on your billing cycle date. A retry counts as another execution. Track usage in Settings > Usage & Quota.
What counts as a test execution for billing?
A test execution is one test case run:
- Each test case counts as one execution; skipped tests are excluded
- Retries count separately - A test with 2 retries = 3 executions
- Artifacts do not affect execution count
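For example, a run of 100 test cases where 10 are skipped and one failing test is retried twice counts (100 − 10) + 2 = 92 executions.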
What happens if I exceed my plan limits?
- Usage is tracked monthly and resets on your billing cycle date
- Overage, if applicable, is billed on the next invoice
- Upgrade if you consistently hit limits
What happens if I cancel my subscription?
- Access continues until the current billing period ends
- No future charges after cancellation
- The organization moves to the Community plan
- Retention and limits fall back to the Community plan
How do I upgrade or downgrade my plan?
- Go to Manage Billing in your organization
- Click View All Plans
- Select the plan
- Confirm the change
Upgrades typically apply immediately. Downgrades typically apply at the end of the current billing period.
Still Have Questions?
- Discord: Join our community
- Email: support@testdino.com
- GitHub: Open an issue on the TestDino repository