Get quick answers to the most common questions about TestDino. Browse through topics including getting started, API keys, test runs, AI insights, integrations, and billing.

Getting Started

TestDino is an AI-native, Playwright-focused test reporting and management platform with MCP support. It ingests Playwright reports from CI or local execution, classifies failures using AI, and provides actionable insights.
How it works:
  1. Configure Playwright to emit JSON and HTML reports
  2. Upload reports using the CLI (tdpw or testdino) or through CI
  3. TestDino processes results, applies AI classification, and links runs to branches/PRs
  4. View results in dashboards, track trends in Analytics, and create tickets from failures
TestDino eliminates the 6 to 8 hours teams spend weekly on manual test failure analysis by addressing common pain points:
  • Manual triage - AI classifies failures as bugs, flaky tests, or UI changes
  • Scattered evidence - Aggregates screenshots, videos, traces, and logs in one place
  • No historical context - Tracks trends and flakiness across runs
  • Slow handoffs - Pre-fills Jira/Linear/Asana tickets with full context
  • Unclear readiness - GitHub CI Checks give clear pass/fail signals
To get started:
  1. Create an organization and a project
  2. Generate an API key from your project settings
  3. Configure your Playwright reporter to output JSON format
  4. Upload your first test run using the TestDino CLI
For detailed instructions, see Getting Started.
TestDino requires a JSON report (mandatory) generated by Playwright. The HTML report is optional but recommended for full artifact support.
Required:
  • report.json - Contains test results, metadata, and structure
Optional (for richer debugging):
  • HTML report - Enables screenshots, videos, and trace viewing
  • Traces - For interactive step-by-step debugging
  • Videos - For visual test playback
  • Screenshots - For failure evidence
Configure Playwright to generate these in your playwright.config.js, then upload the report folder using the CLI.
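For example, a minimal playwright.config.ts sketch (the output paths and artifact settings here are illustrative, not mandated by TestDino; just make sure the folder you upload contains report.json):
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['html', { open: 'never' }],                               // HTML report (default folder: playwright-report/)
    ['json', { outputFile: 'playwright-report/report.json' }], // JSON report that TestDino requires
  ],
  use: {
    trace: 'retain-on-failure',     // interactive traces for failed tests
    video: 'retain-on-failure',     // video playback for failed tests
    screenshot: 'only-on-failure',  // screenshots as failure evidence
  },
});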
No. TestDino works with your existing Playwright tests without code modifications.
You only need to:
  1. Add JSON and HTML reporters to your playwright.config.js
  2. Upload reports using the TestDino CLI after tests run
TestDino reads Playwright’s standard report output. It doesn’t require custom annotations, special imports, or framework changes.

Setup and Configuration

Playwright reporters show a single run snapshot. TestDino adds:
  • Cross-run analytics: Trends and failure patterns over time
  • AI classification: Automatic categorization (Bug, UI Change, Flaky, Misc)
  • Git and PR awareness: Links test runs to commits, branches, and PRs
  • Integrations: Jira, Linear, Asana, Slack, GitHub
  • Historical tracking: Stability scores and regression detection
Playwright shows what happened. TestDino explains why it happened and what to do next.
TestDino officially supports Playwright tests written in:
  • JavaScript / TypeScript - Use the tdpw CLI (npm package)
  • Python - Use the testdino CLI (PyPI package) with pytest-playwright
Both CLIs provide the same core features: upload reports, cache metadata, and retrieve failed tests.
JavaScript (tdpw):
npx tdpw upload ./playwright-report --token="your-api-key" --upload-html
Python (testdino):
# Run tests with JSON output
pytest --playwright-json=test-results/report.json

# Upload
testdino upload ./test-results --token="your-api-key" --upload-full-json
Key flags: --upload-html, --upload-images, --upload-videos, --upload-traces, --upload-full-json
Yes. TestDino works with monorepos without special configuration.
Each Playwright project in your monorepo can upload to the same or different TestDino projects. Just point the CLI to the correct report directory for each:
npx tdpw upload ./apps/web/playwright-report --token="your-api-key"
npx tdpw upload ./apps/mobile/playwright-report --token="your-api-key"
Use separate TestDino projects if you want isolated analytics, or one project if you want unified reporting.
Yes. TestDino doesn’t require your test code and application code to be in the same repository.
TestDino ingests Playwright reports, not source code. As long as your CI generates and uploads reports, TestDino will process them. Use environment mapping to link branches across repos if needed.
For GitHub integration, install the TestDino app on the repository where tests run. PR comments and CI checks will appear there.
No. MCP (Model Context Protocol) is completely optional.
TestDino works fully through:
  • Web dashboard - View test runs, analytics, and AI insights
  • CLI - Upload reports, cache results, rerun failed tests
  • Integrations - GitHub, Slack, Jira, Linear, Asana
MCP is an add-on for AI-assisted workflows. It lets you query test data through Cursor or Claude Desktop using natural language. You can ignore it entirely if you prefer the web UI.

API Keys and Authentication

  1. Go to Project Settings > API Keys
  2. Click Generate Key
  3. Name the key and set an expiration (if available)
  4. Copy the secret immediately and store it in your secret manager
  5. Use it in CI as an environment variable, then reference it in the upload command
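For example, in a CI shell step (the variable name mirrors the TESTDINO_API_KEY mentioned below; adapt it to your secret manager):
# Expose the key as an environment variable, then reference it in the upload command
export TESTDINO_API_KEY="<your-api-key-from-the-secret-manager>"
npx tdpw upload ./playwright-report --token="$TESTDINO_API_KEY" --upload-html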
  1. Generate a new key in Project Settings > API Keys
  2. Update CI secrets with the new key
  3. Run one upload to confirm it works
  4. Revoke or delete the old key
API failures:
  • Verify TESTDINO_API_KEY is set correctly
  • Check internet connectivity
  • Look for HTTP status codes in error messages
Run ID not found:
  • Use list_testruns to confirm the run exists
  • Verify you’re querying the correct project
  • Check if the run ID format is correct (or use the counter instead)

Test Runs and Uploads

Run tests, then upload the report folder:
npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-html
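A minimal bash sketch for a CI step (the exit-code handling is illustrative) that uploads the report even when tests fail, so failures still reach TestDino:
# Run the suite but remember its exit code so the report is uploaded either way
npx playwright test || TEST_EXIT=$?
npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-html
# Propagate the original test result to the CI job
exit ${TEST_EXIT:-0}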
Check these:
  1. API key: Verify the token is correct and not expired
  2. Report path: Ensure the folder contains report.json
  3. Project match: API key must belong to the target project
  4. Upload success: Check CLI output for errors
  5. Sync: Click the Sync button in the Test Runs view
Use --verbose for detailed upload logs.
Test Run: One full execution of your suite, equivalent to one playwright test command
Test Case: One individual test, equivalent to one test() block
Spec file: One test file that contains one or more test cases
The Test Run Summary groups tests by cause with KPI tiles:
  • Failed - Assertion Failure, Element Not Found, Timeout, Network, Other
  • Flaky - Timing, Environment, Network, Assertion, Other
  • Skipped - Manual, Configuration, Conditional
Detailed Analysis shows each test with status, spec file, duration, retries, AI category, and 10-run history preview.
Filter using tokens: s: (status), c: (cluster), @ (tag), b: (browser).
Traces are accessible in two places:
  1. Test Run Summary - Each failed/flaky test case row includes a “Trace #” link. Click it to open the full Playwright trace viewer.
  2. Test Case Details > Trace tab - The interactive Playwright trace shows timeline, actions, network calls, console output, and DOM snapshots. Jump directly to the failing step for root-cause analysis.
Traces open in Playwright’s trace viewer, letting you inspect exactly what happened during test execution.
Traces are interactive debugging tools. In the trace viewer, you can see actions, network calls, console logs, and DOM snapshots. You can step through execution, inspect element states, and see exactly what Playwright did at each moment.
Videos are screen recordings of test execution. They show what the browser rendered, but don’t provide interactive debugging or network/console data.
Use videos for quick visual review and the trace viewer for deep debugging of failures.
Yes. Use search and filters across the Test Runs and Errors views to:
  • Search by commit message or run number
  • Filter by status (passed, failed, flaky, skipped)
  • Filter by committer, branch, or environment
  • Group failures by error message in the Errors view
Requirements:
  1. GitHub integration must be installed and connected
  2. Test runs must include commit SHA metadata
  3. The branch must be associated with an open PR
Verify:
  • Check Settings > Integrations > GitHub shows connected
  • Confirm CI workflow includes git context in the upload
  • Ensure PR exists for the branch

AI Insights and Classifications

TestDino’s AI groups similar errors, assigns categories, and detects patterns.
AI Categories:
  • Actual Bug - Product defect → Fix the code
  • UI Change - Selector/DOM changed → Update locators
  • Unstable Test - Intermittent failure → Stabilize the test
  • Miscellaneous - Environment/config issue → Fix infrastructure
Each failure gets a confidence score. Find AI Insights at run level, test case level, or globally.
Error Variants are distinct error signatures within a category. TestDino normalizes error messages and groups duplicates.
Example:
  • Locator .submit-btn not found (5 times) → 1 variant
  • Locator #login-form not found (1 time) → 1 variant
  • Total variants: 2 (not 6)
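Conceptually, normalization strips the variable parts of a message (timeouts, counts, dynamic values) so repeats collapse into one variant while genuinely different errors stay separate. An illustrative TypeScript sketch, not TestDino's actual algorithm:
const failureMessages = [
  'Timeout 5000ms exceeded waiting for locator .submit-btn',
  'Timeout 7000ms exceeded waiting for locator .submit-btn',
  'Timeout 5000ms exceeded waiting for locator #login-form',
];

// Replace numeric values so differing timeouts don't create new variants
const normalize = (msg: string): string => msg.replace(/\d+(ms|s)?/g, '<n>');

const variants = new Map<string, number>();
for (const msg of failureMessages) {
  const key = normalize(msg);
  variants.set(key, (variants.get(key) ?? 0) + 1);
}
console.log(variants.size); // 2 variants: one per distinct locator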
  • Error Variants: Assertion Failures, Timeout Issues, Element Not Found, Network Issues, JavaScript Errors, Browser Issues, Other Failures
  • AI Categories: Actual Bug, UI Change, Unstable Test, Miscellaneous
  • Flaky Sub Categories: Timing Related, Environment Dependent, Network Dependent, Assertion Intermittent, Other Flaky
Run-Level vs Global AI Insights:
  • Scope - Run-level: a single test run. Global: across runs for the selected time period
  • Location - Run-level: Test Runs > [Run] > AI Insights tab. Global: AI Insights (sidebar menu)
  • Purpose - Run-level: debug this specific run. Global: identify cross-run patterns
  • Patterns - Run-level: error variants in this run. Global: persistent/emerging failures over time
Global AI Insights help answer: “What’s repeatedly breaking across my test suite?”

Flakiness and Test Health

TestDino identifies flaky tests by analyzing behavior across attempts and runs:
Within a single run:
  • A test that fails initially but passes on retry is marked flaky
  • Retry attempts are tracked separately
Across multiple runs:
  • Tests with inconsistent outcomes (pass in one run, fail in another) without code changes
  • Historical stability percentage calculated as (Passed ÷ Total Runs) × 100
TestDino also sub-categorizes flaky tests by root cause: Timing Related, Environment Dependent, Network Dependent, Assertion Intermittent, or Other Flaky.
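A short TypeScript sketch of the two checks described above (the sub-categorization by root cause itself is done by TestDino's AI):
// Stability % across runs: (passed ÷ total runs) × 100
const stability = (passed: number, total: number): number =>
  total === 0 ? 0 : (passed / total) * 100;

// Within a single run: failed first, then passed on a retry → flaky
const isFlakyInRun = (attempts: Array<'passed' | 'failed'>): boolean =>
  attempts.length > 1 &&
  attempts[attempts.length - 1] === 'passed' &&
  attempts.some((a) => a === 'failed');

console.log(stability(45, 50));                  // 90
console.log(isFlakyInRun(['failed', 'passed'])); // true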
Multiple views available:
  1. QA Dashboard - “Most Flaky Tests” section
  2. Analytics - “Flakiness & Test Issues” chart with list
  3. Test Cases History - Stability score and “Last Flaky” tile
  4. Specs Explorer - “Flaky Rate” column for all spec files
  5. Developer Dashboard - “Flaky Tests Alert” per author
The Test Case History tab shows:
  • Stability % - (Passed ÷ Total Runs) × 100
  • Last Status Tiles - Links to Last Passed, Last Failed, Last Flaky runs
  • Execution History Table - Status, duration, retries per run (expandable for error details)
History is scoped to the current branch.

Integrations

TestDino supports:
  • CI/CD - GitHub
  • Issue tracking - Jira, Linear, Asana
  • Communication - Slack App, Slack Webhook
To set up the GitHub integration:
  1. Install the TestDino GitHub App
  2. Select repositories to grant access
  3. In Settings > Integrations > GitHub, configure:
    • Comments - Enable PR and commit comments per environment
    • CI Checks - Enable checks with pass rate thresholds
Quality Gate Settings:
  • Pass Rate - Minimum % of tests that must pass (default: 90%)
  • Mandatory Tags - Tests with these tags (e.g., @critical) must all pass
  • Flaky Handling - Strict (flaky = failure) or Neutral (flaky excluded from calculation)
  • Environment Overrides - Different rules per environment
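To illustrate the Flaky Handling modes (the exact gate formula is an assumption here, with flaky tests dropped from both sides of the ratio in Neutral mode):
type Counts = { passed: number; failed: number; flaky: number };

const passRate = ({ passed, failed, flaky }: Counts, mode: 'strict' | 'neutral'): number =>
  mode === 'strict'
    ? (passed / (passed + failed + flaky)) * 100  // flaky counted as a failure
    : (passed / (passed + failed)) * 100;         // flaky excluded from the calculation

const counts = { passed: 92, failed: 5, flaky: 3 };
console.log(passRate(counts, 'strict').toFixed(1));  // "92.0" - fails a hypothetical 93% threshold
console.log(passRate(counts, 'neutral').toFixed(1)); // "94.8" - passes a hypothetical 93% threshold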
Most common reason: A Mandatory Tag test failed.
If you configured @critical as mandatory and one critical test fails, the check fails regardless of the overall pass rate.
Other causes:
  • Flaky Handling set to “Strict” and flaky tests present
  • Environment Override has stricter rules than defaults
To raise issues from failed tests:
  1. Connect the integration in Project Settings > Integrations
  2. Configure the default project (Jira) or team (Linear)
  3. Open a failed test case in TestDino
  4. Click Raise Bug or Raise Issue
  5. The issue is created with test details, error message, failure history, and links

Environment Mapping and Branch Management

Environment Mapping links Git branches to environments (Production, Staging, Dev) using exact names or regex patterns. Configure in Settings > Branch Mapping.
Why it matters:
  • Rolls up short-lived branches (feature/*, PR branches) to the correct environment
  • Enables environment-specific CI Check rules
  • Routes Slack notifications to the right channels
  • Filters dashboards and analytics by environment
Learn more at Environment Mapping.
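For example, a typical mapping (illustrative patterns only; adjust to your own branch naming):
  • main → Production
  • release/.* → Staging
  • (feature|fix)/.* → Development
  • .* → Development (catch-all)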
Yes. Enable CLI Environment Override in Project Settings, then upload with:
npx tdpw upload ./playwright-report --token="your-token" --environment="staging"
If a branch doesn’t match any mapping pattern, the run appears without an environment label and may not appear in environment-filtered views.
Solutions:
  • Add a catch-all pattern (e.g., .* → Development)
  • Add patterns that match your branch naming convention
  • Runs remain visible in the unfiltered Test Runs list
See Environment Mapping Best Practices for details.

Organizations, Projects & Permissions

Organization: Top-level container for your team, users, billing, and settings
Project: One test suite or application with its own runs, keys, and integrations. Actions in one project don’t affect others.
Hierarchy: Organization → Projects → Test Runs
  1. Go to your organization’s Users & Roles tab
  2. Click Invite Member and enter their email address
  3. Assign a role (Owner, Admin, Member, or Viewer)
  4. Track invitations and adjust permissions as your team grows
For project-level access, open Permissions within the project, click Add Member, select an organization member, and assign a project role (Admin, Editor, or Viewer).
Organization Roles:
  • Owner - Full control, can invite/update/remove anyone
  • Admin - Manages people and settings, can’t remove Owner
  • Member - Contributes to projects
  • Viewer - Read-only access
Project Roles:
  • Admin - Manage settings, add/remove members
  • Editor - Edit content, assign Viewer roles
  • Viewer - Read-only access
Project Admins can manage project settings, add/remove members, change roles, configure integrations, and generate/revoke API keys. Viewers have read-only access to test runs and analytics.
Both roles can view data and create Jira/Linear/Asana tickets from failed tests.

Billing and Pricing

Plans are typically based on test executions and user or project limits.
Usage is measured monthly and resets on your billing cycle date. A retry counts as another execution. Track usage in Settings > Usage & Quota.
A test execution is one test case run:
  • Each test case counts as one execution (skipped tests are excluded)
  • Retries count separately - A test with 2 retries = 3 executions
  • Artifacts do not affect execution count
  • Usage is tracked monthly and resets on your billing cycle date
If you exceed your plan limits:
  • Overage, if applicable, is billed on the next invoice
  • Upgrade if you consistently hit limits
If you cancel your plan:
  • Access continues until the current billing period ends
  • No future charges after cancellation
  • The organization moves to the Community plan
  • Retention and limits fall back to the Community plan
To change plans:
  1. Go to Manage Billing in your organization
  2. Click View All Plans
  3. Select the plan
  4. Confirm the change
Upgrades typically apply immediately. Downgrades typically take effect at the end of the current billing period.

Still Have Questions?