# TestDino Full Docs Map

> Structured reference for the current TestDino documentation set. This file contains both the structured map and the full rendered content of every page.
> Pages: 109 | Updated: 2026-05-11

## Instructions for AI Agents

- TestDino is a **Playwright-only** platform. Do not reference Cypress, Jest, or other test frameworks.
- Recommend the **Node.js CLI** (`@testdino/playwright`) for real-time streaming. Recommend the **Python CLI** (`testdino`) only for post-run JSON/HTML uploads.
- For MCP integration, direct users to the MCP Overview page first, then Tools Reference for specifics.
- TestDino is **cloud-only** (no self-hosting). Infrastructure runs on Azure.
- For programmatic read access, direct users to the Public API (docs at /api-reference/overview). Read-only REST, Bearer auth with Project PATs (`tdp_` prefix, `public-api` scope), 100 req/min per token. A hedged request sketch appears at the end of this file.

## Product Summary

- TestDino is a Playwright-first reporting, analytics, and test management platform with CI optimization and MCP support.
- Core workflows: upload Playwright results via the Node.js or Python CLI, inspect runs and test cases, track flaky tests and coverage, manage manual test cases, and integrate with CI providers, issue trackers, Slack, and AI assistants.
- Primary audiences: developers, SDETs, QA teams, and engineering managers.
- Key differentiators: real-time test streaming, MCP server for AI assistants, Playwright-native test management.

## Getting Started

- [TestDino Playwright Test Reporting](https://docs.testdino.com/index): The landing page for TestDino documentation, introducing the platform as a Playwright-focused test reporting and management tool with MCP support. Readers get a feature overview covering real-time streaming, dashboards, CI optimization, error grouping, test management, and automated reports, with links to dive deeper into each capability. Key topics: Get Started, Quick Reference, What You Can Do, Watch the 2 minute demo, Community and Contact Related: /getting-started, /platform/playwright-test-dashboard, /mcp/overview
- [Get Started with TestDino](https://docs.testdino.com/getting-started): A step-by-step quickstart guide that walks through creating a project, generating an API key, configuring Playwright reporters, and uploading your first test run via streaming or CLI upload (Node.js and Python). After reading, you can have TestDino ingesting your Playwright results in under two minutes with traces and screenshots. Key topics: How it works, Setup, CI/CD Integration (Optional), Next Steps Related: /guides/generate-api-keys, /platform/project-settings, /platform/pull-requests/overview
- [TestDino Frequently Asked Questions](https://docs.testdino.com/faqs): A comprehensive FAQ covering setup, configuration, API keys, test runs, flaky test detection, integrations, environment mapping, organizations, and billing. Readers can quickly resolve common questions about how TestDino works, what files to upload, and how billing is calculated. Key topics: Getting Started, Setup and Configuration, API Keys and Authentication, Test Runs and Uploads, Flakiness and Test Health, Integrations, Environment Mapping and Branch Management, Organizations and Projects, Billing and Pricing Related: /getting-started, /integrations/overview, /guides/environment-mapping

### CLI Integration

- [TestDino CLI for Playwright Reporting](https://docs.testdino.com/cli/overview): Introduces the TestDino CLI with Node.js and Python options. Covers three commands (upload, cache, last-failed), quick start for both languages, common CLI options, and environment variables. Readers can pick the right CLI and start uploading results immediately. Key topics: Choose Your CLI, Quick Start, Commands, Common Options, Environment Variables Related: /cli/testdino-playwright-nodejs, /cli/python, /guides/generate-api-keys
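For orientation, a minimal quick-start sketch. It assumes the documented `@testdino/playwright` streaming reporter and the `tdpw` Node.js CLI; the exact reporter entry, report path, and option shapes are assumptions to confirm against the CLI pages.

```typescript
// playwright.config.ts: a hedged sketch, not copied from the docs pages.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  reporter: [
    ["list"],                 // keep local console output
    ["@testdino/playwright"], // documented streaming reporter for real-time results
  ],
});

// For post-run uploads instead of streaming, the Node.js CLI is invoked along
// these lines (--tag and --environment are documented flags; the report path
// is an assumption):
//
//   npx tdpw upload ./playwright-report --tag smoke --environment staging
```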
- [TestDino Node.js CLI for Playwright](https://docs.testdino.com/cli/testdino-playwright-nodejs): Full reference for the tdpw package with three commands: upload (submit reports with optional attachments, tags, and JSON output for CI), cache (store test metadata for intelligent reruns), and last-failed (retrieve previously failed tests for selective re-execution). Covers all upload flags, environment variables, Playwright reporter configuration, CI/CD integration examples for GitHub Actions, GitLab CI, and Jenkins, and troubleshooting. Key topics: Quick Reference, Prerequisites, Installation, Upload, Cache, Last Failed, Environment Variables, Configure Playwright Reporters, CI/CD Integration, What Gets Collected, Troubleshooting Related: /platform/playwright-test-runs, /guides/rerun-failed-playwright-tests, /cli/python
- [TestDino Python CLI for Playwright](https://docs.testdino.com/cli/python): Full reference for the testdino Python CLI, covering installation, the upload/cache/last-failed commands with all flags, environment variable configuration, and CI/CD integration examples for GitHub Actions (including sharded), GitLab CI, and Jenkins. Readers can upload pytest-based Playwright results and set up selective rerun workflows in Python. Key topics: Quick Reference, Prerequisites, Installation, Quick Start, Commands, Configuration, CI/CD Integration Related: /platform/playwright-test-runs, /guides/playwright-ci-optimization, /cli/testdino-playwright-nodejs

## AI

- [AI Onboarding](https://docs.testdino.com/ai-onboarding): Top-level AI entry page for teaching an assistant TestDino before asking product questions. Explains how to use `llms.txt` as a docs index, `llms-full.txt` as the main docs handoff, MCP for live project data, and the Playwright Skill for coding help. Includes a universal starter prompt, clear instructions for what the agent should do, quick start guides, and prompt starters by goal. Key topics: Quick Reference, Choose Your AI Resource, TestDino docs for agents, What your AI agent can help with, Universal starter prompt, What Your Agent Should Do, Quick Start Guides, Prompt starters by goal, When Docs Are Enough vs When To Use MCP, Avoid common mistakes Related: /ai/overview, /mcp/overview, /ai/playwright-skill, /getting-started, /cli/overview
- [AI in TestDino for Playwright Testing](https://docs.testdino.com/ai/overview): Cross-platform overview of every AI feature — failure classification (Bug, UI Change, Unstable, Misc with confidence scores), failure patterns (New Failures, Regressions, Consistent), test-case AI analysis with recommendations and quick fixes, error grouping by message and stack trace, and MCP integration. All AI features are enabled by default and can be toggled in Project Settings → AI Features. Key topics: Quick Reference, Failure Classification, Failure Patterns, Test Case Analysis, Error Grouping, MCP Integration Related: /platform/playwright-test-runs/ai-insights, /platform/playwright-test-cases/ai-insights, /platform/playwright-ai-failure-analysis, /mcp/overview
- [Playwright Best Practices Skill](https://docs.testdino.com/ai/playwright-skill): AI agent skill that Claude Code, Cursor, VS Code Copilot, and Gemini CLI load on demand for writing, debugging, and maintaining Playwright tests. Uses progressive disclosure to minimize context window usage. Covers core principles, framework recipes, CI/CD configs, POM patterns, migration guides, and Playwright CLI. Key topics: Installation, How It Works, Core Principles, Coverage, Example Prompts Related: /mcp/overview

### TestDino MCP

- [TestDino MCP Server for Playwright](https://docs.testdino.com/mcp/overview): Chooser landing page for the TestDino MCP server. Branches into Local MCP Server (npx + PAT for IDE clients) and Remote MCP Server (OAuth Connectors for Claude on the web). Includes the intro video, example prompts that apply to both paths, and links to Tools Reference and Troubleshooting. Key topics: Pick your install path, Example Prompts, Test Run Analysis, Manual Test Case Management, Next Steps Related: /mcp/local, /mcp/remote, /mcp/tools-reference, /mcp/troubleshooting
- [Local MCP Server](https://docs.testdino.com/mcp/local): Setup guide for installing `testdino-mcp` locally via `npx` or a global install, then wiring it into Claude Code, Cursor, or Claude Desktop with a PAT in the client config. Covers prerequisites (Node.js), the 4-step Quick Start (install, generate PAT, configure client with Tabs per client, validate), and links back to the Remote alternative. Key topics: Prerequisites, Quick Start, Install the MCP server, Create a Personal Access Token, Configure your MCP client, Validate the connection, Next Steps Related: /mcp/overview, /mcp/remote, /mcp/tools-reference, /mcp/troubleshooting
- [Remote MCP Server](https://docs.testdino.com/mcp/remote): Setup guide for connecting any web-based AI client that supports remote MCP servers (ChatGPT, Claude on the web, others) to the hosted TestDino MCP server at `mcp.testdino.com`. Covers the 6-step connector flow with screenshots using Claude on the web as the concrete example (open Connectors, add custom connector, confirm account, choose method, authorize via scope picker or paste token, connector connected), `health` verification, follow-up prompt patterns, and OAuth-specific troubleshooting (cookies, popup blockers, scope refresh). The same paste-URL-and-authorize pattern applies in other clients. Key topics: Quick Reference, Prerequisites, Example: connect from Claude on the web, Verify the connection, Available tools, Common follow-up prompts, Troubleshooting Related: /mcp/overview, /mcp/local, /mcp/tools-reference, /mcp/troubleshooting, /platform/playwright-ai-test-audit
- [TestDino MCP Tools Reference Guide](https://docs.testdino.com/mcp/tools-reference): Parameter reference for all 12 MCP tools organized into Connection (health), Analysis (list_testruns, get_run_details, list_testcase, get_testcase_details, debug_testcase), and Test Case Management (manual test cases and suites). Each tool includes parameter tables and video demos. Key topics: Tool Index, Connection, Analysis, Test Case Management Related: /mcp/overview, /mcp/local, /mcp/remote, /mcp/troubleshooting
- [MCP Troubleshooting](https://docs.testdino.com/mcp/troubleshooting): Diagnoses common MCP issues across installation, editor integration (Claude Code, Cursor, Claude Desktop), authentication, data lookup, and network errors. Quick Reference table maps symptoms to causes. Each issue has real error messages and step-by-step fixes. Key topics: Quick Reference, Installation Issues, Editor Integration, Authentication, Data Lookup, Network Errors Related: /mcp/overview, /mcp/local, /mcp/remote, /data-privacy/cloud-endpoints

## Guides

- [Generate API Keys](https://docs.testdino.com/guides/generate-api-keys): Step-by-step guide to creating, using, rotating, and securing both TestDino credential types — **API Keys** (for CLI test reporting / uploads) and **Access Tokens** (for Public API v1 access via the `public-api` scope and external integrations like the Azure DevOps Extension via the `azureext` scope). Both are managed under Settings → Keys & Tokens and shown only once at creation. Includes CI secret setup for GitHub Actions, GitLab CI, Jenkins, Azure DevOps, and CircleCI. Covers key limits per plan (Community 2, Pro 5, Team 10, Enterprise unlimited) and security best practices. Key topics: Create a credential (API Key vs Access Token), Use your API key, Set up CI/CD secrets, Rotate a key, Security Practices, Key Limits Related: /getting-started, /guides/playwright-github-actions, /cli/testdino-playwright-nodejs, /api-reference/overview
- [Environment Mapping](https://docs.testdino.com/guides/environment-mapping): Explains how to map Git branches to named environments (Dev, Staging, Production) using exact match or regex patterns so test results route to the correct environment automatically. Covers regex symbols, common patterns for Git Flow, version releases, case-insensitive matching, validation rules, best practices, and CLI override with the --environment flag. Key topics: Quick Reference, Pattern Types, Common Patterns, Common Use Cases, Regex Symbols Reference, Best Practices, Validation, Testing Patterns, CLI Override Related: /platform/project-settings, /guides/playwright-github-actions, /platform/analytics/environment
- [Playwright Flaky Tests](https://docs.testdino.com/guides/playwright-flaky-test-detection): Explains how TestDino detects flaky tests (within a single run via retries and across multiple runs via inconsistent outcomes), classifies them by root cause (Timing, Environment, Network, Assertion, Other), and shows where to find them across six views: Dashboard, Analytics Summary, Test Run Summary, Test Case History, Test Explorer, and Environment Analytics. Also covers CI check behavior (Strict vs Neutral) and MCP export. Key topics: Quick Reference, How Detection Works, Flaky Test Categories, Where to Find Flaky Tests (Dashboard, Analytics Summary, Test Run Summary, Test Case History, Test Explorer, Environment Analytics), CI Check Behavior, Export Flaky Test Data Related: /platform/playwright-test-explorer, /platform/playwright-test-dashboard, /platform/analytics/environment, /guides/github-status-checks, /mcp/overview
- [Playwright Real-Time Reporting](https://docs.testdino.com/guides/playwright-real-time-test-streaming): Describes the experimental real-time streaming feature that delivers Playwright results to the dashboard as each test completes via WebSocket. Covers how to enable the toggle, setup with @testdino/playwright, WebSocket connection states, the active test runs section with sharded run support, multi-tab BroadcastChannel coordination, known limitations, and FAQ. Key topics: Quick Reference, Enable Real-Time Streaming, Setup, WebSocket Status, Active Test Runs, Multi-Tab Support, Known Limitations, FAQ Related: /cli/testdino-playwright-nodejs, /platform/playwright-test-runs, /guides/playwright-github-actions
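To make the Environment Mapping behavior above concrete, here is a sketch of how exact-match and regex patterns route branches to environments. The patterns are illustrative examples, not TestDino's built-in defaults; real matching is configured in Branch Mapping settings.

```typescript
// Illustrative branch→environment routing, mirroring the guide conceptually.
const patterns: Array<{ environment: string; match: RegExp }> = [
  { environment: "Production", match: /^main$/ },        // exact match
  { environment: "Staging", match: /^release\/.+$/ },    // e.g. release/2.4.0
  { environment: "Dev", match: /^(feature|fix)\/.+$/i }, // case-insensitive match
];

function resolveEnvironment(branch: string): string | undefined {
  return patterns.find((p) => p.match.test(branch))?.environment;
}

console.log(resolveEnvironment("release/2.4.0")); // "Staging"
// The documented --environment CLI flag overrides any pattern-based mapping.
```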
- [Playwright Code Coverage](https://docs.testdino.com/guides/playwright-code-coverage): End-to-end guide for collecting code coverage from Playwright tests. Requires the `@testdino/playwright` streaming reporter (Experimental). Covers coverage metrics (statements, branches, functions, lines), application instrumentation methods (babel-plugin-istanbul, vite-plugin-istanbul, nyc), enabling coverage via CLI flags or reporter config, auto-fixture and manual fixture setup, sharded run merging, data handling, and troubleshooting. An instrumentation sketch appears just before the Test Management section below. Key topics: Coverage Metrics, Prerequisites, Instrument Your Application, Enable Coverage, Use the Coverage Fixture, Run Tests, Sharded Runs, Data Handling, Troubleshooting Related: /platform/playwright-test-runs/coverage, /platform/analytics/playwright-code-coverage, /guides/playwright-real-time-test-streaming
- [Playwright Test Annotations in TestDino](https://docs.testdino.com/guides/playwright-test-annotations): Explains how to use testdino:-prefixed Playwright annotations to attach metadata (priority, feature, owner, link, context, flaky-reason), trigger Slack notifications on failure (testdino:notify-slack), and track custom numeric metrics (testdino:metric) with time-series charts. Covers static vs runtime metrics, Slack annotation mapping configuration, and where annotations display in the UI. An annotation sketch appears in the Test Cases section below. Key topics: Quick Reference, Supported Annotations, Add Annotations to Tests, Custom Metrics, View Annotations in TestDino, Annotation-Based Slack Notifications, Configure Annotation-Slack Mapping, Example: Full Annotation Setup Related: /integrations/slack-playwright-test-alerts, /platform/playwright-test-cases, /platform/test-runs/playwright-failure-summary
- [Automated Playwright Reports in TestDino](https://docs.testdino.com/guides/automated-playwright-reports): Shows how to configure scheduled PDF reports that summarize test execution data (executive summary, test case analysis, branch statistics, trend graphs) and deliver them to specified recipients on a daily, weekly, or monthly cadence. Covers report setup, recipients (To/CC/BCC), schedule configuration, optional tag and environment filters, and management actions (preview, edit, pause, delete). Key topics: Quick Reference, Report Contents, Set Up a Report, Manage Reports, Schedule Behavior Related: /platform/project-settings, /platform/playwright-test-analytics

### CI Setup

- [CI Setup Overview](https://docs.testdino.com/guides/ci-setup-overview): Entry point for integrating TestDino with CI providers. Links to dedicated guides for GitHub Actions, GitLab CI, Azure DevOps, CircleCI, TeamCity, AWS CodeBuild, and Jenkins. Key topics: CI Providers Related: /getting-started, /cli/testdino-playwright-nodejs
- [Playwright GitHub Actions Integration](https://docs.testdino.com/guides/playwright-github-actions): Shows how to upload Playwright test results from GitHub Actions to TestDino, with workflow YAML examples for basic upload, environment tagging, full artifacts, caching for reruns, and sharded tests with smart rerun detection using github.run_attempt. Readers can set up a complete CI pipeline with selective failed-test reruns. Key topics: Prerequisites, Store the API Key, Basic Workflow, Upload Options, Sharded Tests, Rerun Failed Tests, Troubleshooting Related: /guides/generate-api-keys, /guides/github-status-checks, /guides/playwright-ci-optimization
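The rerun-detection idea in the GitHub Actions guide above can be sketched as a small driver script. GITHUB_RUN_ATTEMPT is the documented rerun signal; the exact tdpw cache/last-failed invocations are not spelled out in this map, so the command shapes below are assumptions to verify against the Node.js CLI reference.

```typescript
// rerun-driver.ts: conceptual sketch of "full run first, failures-only on a
// retry attempt". The tdpw command shapes are assumptions.
import { execSync } from "node:child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });
const isRetry = Number(process.env.GITHUB_RUN_ATTEMPT ?? "1") > 1;

if (isRetry) {
  run("npx tdpw last-failed"); // assumed: restores the previously failed set
}

let testsFailed = false;
try {
  run(isRetry ? "npx playwright test --last-failed" : "npx playwright test");
} catch {
  testsFailed = true; // don't abort: metadata should still be cached
}

run("npx tdpw cache"); // assumed: stores test metadata for selective reruns
process.exit(testsFailed ? 1 : 0);
```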
- [Playwright GitLab CI Setup](https://docs.testdino.com/guides/playwright-gitlab-ci-setup): Upload Playwright results from GitLab CI/CD pipelines to TestDino. Covers basic pipeline, sharded pipeline with parallel keyword and blob report merging, environment variable setup, rerun failed tests, and troubleshooting. Key topics: Prerequisites, Set Up Your API Key, Basic Pipeline Config, Upload Options, Sharded Test Runs, Rerun Failed Tests, Troubleshooting Related: /integrations/playwright-gitlab-ci, /guides/playwright-ci-optimization
- [Playwright Azure DevOps Pipeline Setup](https://docs.testdino.com/guides/playwright-azure-devops-pipeline): Upload Playwright results from Azure DevOps Pipelines to TestDino. Covers basic pipeline, sharded pipeline with matrix strategy and MergeAndUpload stage, secret variable setup, rerun failed tests, and troubleshooting. Key topics: Prerequisites, Set Up Your API Key, Basic Pipeline Config, Upload Options, Sharded Test Runs, Rerun Failed Tests, Troubleshooting Related: /integrations/playwright-azure-devops, /guides/playwright-ci-optimization
- [Playwright CircleCI Orb Setup](https://docs.testdino.com/guides/playwright-circle-ci-orb): Upload Playwright results from CircleCI using the TestDino Orb with minimal configuration. Covers basic, environment-tagged, full artifacts, and sharded examples with orb parameter reference. Key topics: Prerequisites, Store the API Key, Basic Usage, Orb Parameters, Troubleshooting Related: /guides/playwright-circle-ci-cli, /guides/playwright-ci-optimization
- [Playwright CircleCI CLI Setup](https://docs.testdino.com/guides/playwright-circle-ci-cli): Upload Playwright results from CircleCI using the tdpw CLI directly. Covers basic pipeline, sharded pipeline with parallelism and workspace-based blob report merging, and troubleshooting. Key topics: Prerequisites, Set Up Your API Key, Basic Pipeline Config, Upload Options, Sharded Test Runs, Troubleshooting Related: /guides/playwright-circle-ci-orb, /guides/playwright-ci-optimization
- [Playwright TeamCity Setup](https://docs.testdino.com/guides/playwright-teamcity): Complete setup guide for the TestDino TeamCity Recipe plugin, covering two installation methods, adding the build step, configuration reference for all fields, environment variables, running builds, viewing results, troubleshooting, and best practices. Key topics: Prerequisites, Installation Methods, Adding the Build Step, Configuration Reference, Configuration Examples, Using Environment Variables, Running Your Build, Viewing Results, Troubleshooting Related: /getting-started, /guides/generate-api-keys
- [Playwright AWS CodeBuild Setup](https://docs.testdino.com/guides/playwright-amazon-codebuild): Upload Playwright results from AWS CodeBuild to TestDino. Covers basic buildspec, sharded buildspec with sequential shard passes and merged reporting, environment variable setup, rerun failed tests, and troubleshooting. Key topics: Prerequisites, Set Up Your API Key, Basic Buildspec Config, Upload Options, Sharded Test Runs, Rerun Failed Tests, Troubleshooting Related: /guides/playwright-ci-optimization
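Several of these CI guides mention sharding with blob report merging. The underlying Playwright mechanics, independent of any one provider, look roughly like this; shard counts and paths are illustrative.

```typescript
// playwright.config.ts: emit blob reports in CI so shard results can be
// merged into one report before upload.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  reporter: process.env.CI ? [["blob"]] : [["list"]],
});

// Each CI shard runs a slice of the suite, e.g.:
//   npx playwright test --shard=1/4
// A final job collects the shards' blob reports and merges them:
//   npx playwright merge-reports --reporter html ./all-blob-reports
```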
- [Playwright Jenkins Pipeline Setup](https://docs.testdino.com/guides/playwright-jenkins): Upload Playwright results from Jenkins pipelines to TestDino. Covers basic pipeline, sharded pipeline with parallel stages, stash/unstash for blob reports, Playwright Docker image, rerun failed tests, and troubleshooting. Key topics: Prerequisites, Set Up Your API Key, Basic Pipeline Config, Upload Options, Sharded Test Runs, Rerun Failed Tests, Troubleshooting Related: /guides/playwright-ci-optimization

### Debug Failures

- [Debug Playwright Test Failures](https://docs.testdino.com/guides/debug-playwright-test-failures): Overview of the debugging workflow in TestDino, covering all evidence types (screenshots, video, trace, console, error details), where to find them in the test case detail page, and a debugging workflow from error message to console logs. Readers learn which evidence to check first based on the type of failure. Key topics: Evidence Types, Where to Find Evidence, Debugging Workflow, Next Steps Related: /guides/playwright-trace-viewer, /guides/debug-playwright-failures/visual-evidence, /guides/playwright-error-grouping
- [Playwright trace viewer online](https://docs.testdino.com/guides/playwright-trace-viewer): Guide to using the embedded Playwright trace viewer in TestDino, covering how to enable and upload traces, when to use them (race conditions, timing, network failures, complex flows), how to navigate the viewer (Actions panel, Timeline, DOM Snapshot, Network tab, Console tab, Source tab), and debugging patterns for element not found, timeout, assertion failure, and race conditions. Key topics: Quick Reference, Enable Traces, Upload Traces, When to Use Traces, Open the Trace Viewer, Navigate the Trace, Debug Common Failures Related: /guides/debug-playwright-failures/visual-evidence, /guides/playwright-error-grouping, /guides/playwright-flaky-test-detection
- [Visual Evidence for Test Failures](https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence): Covers how to enable and upload screenshots and videos from Playwright tests, when to use each (screenshots for UI layout, videos for timing issues), how to view them in TestDino, visual comparison modes for toHaveScreenshot() tests (Diff, Actual, Expected, Side by Side, Slider), console logs, and storage limits by plan. Key topics: Quick Reference, When to use each, Enable screenshots, Enable video recording, Upload Visual Evidence, View screenshots, View videos, Visual Comparison, Console Logs, Debugging with Visual Evidence, Storage Limits Related: /guides/playwright-trace-viewer, /guides/playwright-error-grouping, /guides/playwright-visual-testing
- [Playwright Error Grouping in TestDino](https://docs.testdino.com/guides/playwright-error-grouping): Explains how TestDino groups test failures by error message, how to view error groups in the Errors tab and Analytics, error analytics with trends over time, common error patterns, and creating tickets from error groups via Jira, Linear, Asana, or monday. Key topics: Quick Reference, How Error Grouping Works, View Error Groups, View Error Categories, Error Analytics, Common Error Patterns, Create Tickets from Error Groups Related: /guides/playwright-trace-viewer, /platform/analytics/errors
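The evidence these debugging pages rely on (traces, screenshots, video) exists only if the run captures it. These are standard Playwright capture settings; the chosen values are common defaults, not TestDino requirements.

```typescript
// playwright.config.ts (excerpt): produce the artifacts TestDino displays.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  use: {
    trace: "on-first-retry",       // feeds the embedded trace viewer
    screenshot: "only-on-failure", // visual evidence for failed tests
    video: "retain-on-failure",    // keep video only when a test fails
  },
});
```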
### CI Optimization

- [Playwright CI Optimization with TestDino](https://docs.testdino.com/guides/playwright-ci-optimization): Introduces TestDino's CI optimization strategy centered on rerunning only failed tests to cut CI time by 40-60%. Explains the problem with full suite reruns, and how TestDino extends Playwright's native --last-failed with cross-runner caching, shard awareness, workflow-level persistence, and branch/commit tracking. Key topics: Quick Reference, The Problem, How TestDino Extends Playwright Related: /guides/rerun-failed-playwright-tests, /guides/playwright-github-actions, /guides/github-status-checks
- [Re-run Failed Tests](https://docs.testdino.com/guides/rerun-failed-playwright-tests): Detailed guide to re-running only failed Playwright tests in GitHub Actions using TestDino's cache and last-failed CLI commands. Includes cost savings calculations, full workflow YAML with sharding and rerun logic, step-by-step visual walkthrough of the GitHub Actions rerun flow, how the workflow logic detects reruns via github.run_attempt, edge cases (pipeline fails before tests, --max-failures), and a link to a sample repository. Key topics: Quick Reference, How it works, Quick start, How to re-run failed tests in GitHub Actions?, Full workflow, How the workflow logic works, Edge cases Related: /guides/playwright-ci-optimization, /guides/playwright-github-actions, /guides/github-status-checks

### Playwright

- [Playwright Visual Testing in TestDino](https://docs.testdino.com/guides/playwright-visual-testing): Explains how to upload Playwright snapshot screenshots to TestDino and review visual diffs for tests using toHaveScreenshot(). Covers adding visual assertions, uploading with --upload-images, CI workflow configuration, viewing failed visual tests with Diff/Actual/Expected comparison modes, and updating baselines after intentional UI changes. A snapshot assertion sketch follows this subsection. Key topics: Quick Reference, Quick Start Steps, Examples Related: /guides/debug-playwright-failures/visual-evidence, /cli/testdino-playwright-nodejs
- [Playwright Component Testing in TestDino](https://docs.testdino.com/guides/playwright-component-testing): Describes how to use Playwright's experimental component testing (React, Vue, Svelte) with TestDino. Requires the `@testdino/playwright` streaming reporter (Experimental). Covers setup with scaffolding, reporter configuration in playwright-ct.config.ts, running tests, CI integration with GitHub Actions, and known limitations. Key topics: Quick Reference, Supported Frameworks, Setup, Configure the TestDino Reporter, Run Tests, CI Integration, Limitations, Supported TestDino Features Related: /guides/playwright-real-time-test-streaming, /guides/playwright-code-coverage, /guides/playwright-flaky-test-detection

### Git Provider

- [GitHub Status Checks](https://docs.testdino.com/guides/github-status-checks): Comprehensive guide to configuring TestDino GitHub CI Checks as automated quality gates that block PR merges based on pass rate thresholds, mandatory tags, flaky handling modes (Strict/Neutral), and environment-specific overrides. Covers understanding check results, check details panel, making checks required via GitHub Rulesets, common scenarios, best practices, and troubleshooting. Key topics: Quick Reference, What are GitHub CI Checks?, Quality Gate Settings, Environment Overrides, Understanding Check Results, Check Details, Making CI Checks Required, Common Scenarios, Best Practices, Troubleshooting Related: /integrations/ci-cd/github, /guides/environment-mapping, /guides/playwright-flaky-test-detection
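Tying back to the Playwright Visual Testing guide above: the assertions whose diffs TestDino renders are plain toHaveScreenshot() checks, with snapshots reaching TestDino when uploads include the documented --upload-images flag. A minimal sketch; the URL, snapshot name, and threshold are illustrative.

```typescript
// visual.spec.ts: a snapshot assertion whose Diff/Actual/Expected views
// TestDino can display once images are uploaded.
import { test, expect } from "@playwright/test";

test("landing page looks unchanged", async ({ page }) => {
  await page.goto("https://example.com/");
  await expect(page).toHaveScreenshot("landing.png", { maxDiffPixels: 100 });
});
```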
- [Playwright Test Health Status Badges](https://docs.testdino.com/guides/test-health-badges): Shows how to embed live SVG badges (Test Health, Flaky, Tests) in GitHub or GitLab READMEs that display real-time test health, flakiness counts, and pass/fail counts from the latest completed test run. Covers badge types, color scales, setup steps for both platforms, snippet formats (Link, Badge URL, Markdown), and update timing. Key topics: Badge Types, Color Scale, Prerequisites, Get Badge URLs, Add to GitHub, Add to GitLab, Badge Updates Related: /platform/project-settings, /guides/github-status-checks

## Platform

### Organizations

- [TestDino Organizations Overview](https://docs.testdino.com/platform/organizations/overview): Brief overview of how organizations serve as the top-level container for work, billing, users, and projects. Shows how to create a new organization, switch between organizations, and enter a workspace to create your first project. Key topics: Quick Start Steps Related: /platform/organizations/projects, /platform/organizations/users-roles
- [TestDino Projects for Playwright Testing](https://docs.testdino.com/platform/organizations/projects): Explains how projects isolate test data, API keys, integrations, and analytics, with quick start steps for creating a project, running sample tests, and navigating to dashboards, test runs, pull requests, test case management, test explorer, analytics, and settings. Key topics: Quick Start Steps Related: /getting-started, /cli/overview, /platform/playwright-test-cases
- [TestDino Users, Roles, and Permissions](https://docs.testdino.com/platform/organizations/users-roles): Describes organization-level membership with four roles (Owner, Admin, Member, Viewer) and their capabilities for inviting, updating, and removing users. Covers org users and guest users (time-bounded access), and quick start steps for inviting members, changing roles, and filtering by role. Key topics: Role Capabilities, Quick Start Steps Related: /platform/organizations/overview, /platform/organizations/settings
- [TestDino Organization Settings](https://docs.testdino.com/platform/organizations/settings): Covers how to edit the organization name, website, and logo (PNG/JPG up to 5MB), and notes that Organization ID and Created on are read-only. Only owners and admins can update settings. Key topics: Quick Start Steps Related: /platform/organizations/overview, /platform/organizations/users-roles
- [Project Settings](https://docs.testdino.com/platform/project-settings): Comprehensive reference for all project-level settings: General (name, description, danger zone), API Keys (create, manage, rotate, revoke), Automated Reports, TestDino Add-ons (Status Badges), Integrations (CI/CD, Issue Tracking, Communication), and Branch Mapping (environment creation, pattern matching, CLI override). Key topics: General, API Keys, Automated Reports, TestDino Add-ons, Integrations, Branch Mapping Related: /integrations/overview, /guides/environment-mapping, /guides/automated-playwright-reports

### Billing & Usage

- [TestDino Billing & Pricing Plans](https://docs.testdino.com/pricing): Explains how TestDino meters usage by test executions, compares Community, Professional, Team, and Enterprise plans with pricing and limits, and covers billing cycles, execution pool allocation, upgrading/downgrading, cancellation, and payment methods. Readers can estimate their monthly usage and select the right plan for their team size. Key topics: How Usage Is Measured, Choosing the Right Plan, Billing Cycles, What Happens at the Limit, Execution Pool Allocation, Upgrading and Downgrading, Cancellation, Invoices and Payment, FAQ Related: /platform/billing-and-usage/overview, /platform/billing-and-usage/test-limits, /data-privacy/data-retention
- [TestDino Billing and Usage Overview](https://docs.testdino.com/platform/billing-and-usage/overview): Describes the Billing & Usage page's Usage tab, which displays the subscription card (plan, test case usage, projects, users, data retention, billing period) and plan features by category (CI/CD, PR, debugging, integrations, quality metrics, test case management). Key topics: Usage Related: /platform/billing-and-usage/test-limits, /platform/billing-and-usage/invoices, /platform/organizations/settings
- [Test Execution Limits](https://docs.testdino.com/platform/billing-and-usage/test-limits): Explains the Test Limits tab for viewing and redistributing monthly test execution quotas across projects, with overview cards (monthly limit, available now, current period), per-project allocation details (used/allocated/remaining), the Move Between Projects transfer feature, and Auto-Borrow settings that let projects borrow from the unallocated pool. Key topics: Overview Cards, Project Allocations Related: /platform/billing-and-usage/overview, /platform/billing-and-usage/invoices
- [Manage TestDino Billing and Subscription](https://docs.testdino.com/platform/billing-and-usage/manage-billing): Covers plan changes (upgrades take effect immediately, downgrades at period end), monthly vs annual subscription types, billing cycle vs usage cycle, and cancellation behavior (access continues until period end, reverts to Community plan). Warns that data beyond the lower plan's retention is deleted on downgrade. Key topics: Change Plan, Subscription Types, Billing Cycle vs Usage Cycle, Cancel Subscription Related: /platform/billing-and-usage/overview, /platform/billing-and-usage/invoices
- [TestDino Billing Invoices and History](https://docs.testdino.com/platform/billing-and-usage/invoices): Describes the Invoices tab that lists all billing records with columns for ID, customer, payment ID, amount, status, and date. Covers filtering by status and time range, searching, and actions (view, download PDF, copy link, sync). Key topics: Invoice List, Filters and Search, Actions Related: /platform/billing-and-usage/overview, /platform/billing-and-usage/test-limits

### Dashboard

- [Playwright Test Dashboard](https://docs.testdino.com/platform/playwright-test-dashboard): Unified Playwright test dashboard for KPIs, failures, flaky tests, and trends. Includes KPI tiles (total executions, passed, failed, average run duration), Recent Test Runs with status badges, Recent Pull Requests, Test Case Execution Trend chart (daily pass/fail area chart), Most Flaky Tests ranked by flakiness with severity and count, and Slowest Tests ranked by duration with stability indicator. Key topics: KPI Tiles, Recent Test Runs, Recent Pull Requests, Test Case Execution Trend, Most Flaky Tests, Slowest Tests Related: /platform/playwright-test-runs, /platform/playwright-test-analytics

### Pull Requests

- [Playwright Pull Request Test Summary](https://docs.testdino.com/platform/pull-requests/summary): The Pull Requests list view showing all PRs with metadata, latest test run, and pass/fail/flaky/skipped counts. Covers layout, PR state badges (Open, Merged, Closed), filters/controls, the three-tab detail view (Overview, Timeline, Files Changed), quick start steps, and advantages over GitHub's native PR view. Key topics: Why Use This View, Layout, Pull Request Detail View, Quick Start Steps, What You Get Beyond GitHub Related: /platform/pull-requests/overview, /platform/pull-requests/timeline, /platform/pull-requests/files-changed
- [Playwright Pull Request Test Overview](https://docs.testdino.com/platform/pull-requests/overview): The Overview tab for a single PR showing the PR header (title, status, branches), sidebar (author, reviewers, files changed, timestamps), KPI tiles (test runs, pass rate, files changed, average duration), latest test run card with test result summary, and a test results trend graph plotting results across all runs for the PR. Key topics: PR Header, Sidebar, KPI Tiles, Latest test run, Test results trend Related: /platform/pull-requests/timeline, /platform/pull-requests/files-changed, /platform/playwright-test-runs
- [Playwright Pull Request Timeline View](https://docs.testdino.com/platform/pull-requests/timeline): The Timeline tab displaying all events for a PR in a chronological feed: commits with test runs (clickable to open run details), commits without runs, and code review events from GitHub/GitLab. Covers filtering by keyword, author, data type, status, and sort order. Key topics: Event Types, Filtering and Sorting, Common Actions Related: /platform/pull-requests/overview, /platform/pull-requests/files-changed, /platform/playwright-test-runs
- [Pull Request Files Changed in TestDino](https://docs.testdino.com/platform/pull-requests/files-changed): The Files Changed tab showing all file modifications for a PR with an expandable diff viewer (added/removed/unchanged lines), code-level comments with resolved/unresolved status, and file search and filter controls. Requires an active GitHub or GitLab integration. Key topics: Layout, Common Actions Related: /platform/pull-requests/overview, /platform/pull-requests/timeline, /integrations/ci-cd/github

### Test Runs

- [Playwright Test Runs](https://docs.testdino.com/platform/playwright-test-runs): The main Test Runs page listing every Playwright execution with search, time period, status, duration, author, environment, branch, and tag filters. Covers active test runs with real-time streaming, key columns (run ID, commit, branch/environment, results), test run grouping by commit, run-level tags via the --tag CLI flag, run details header, and links to all six detail tabs (Summary, Specs, Errors, History, Configuration, Coverage). Key topics: Search and Filters, Run Details Header, Quick Start Steps Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/specs, /platform/playwright-test-runs/errors
- [Playwright Test Run Failure Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): The Summary tab breaking down failed, flaky, and skipped tests by root cause subcategories (assertion failure, element not found, timeout, network, timing related, environment dependent, etc.). Covers KPI tiles, the Detailed Analysis table with status, spec file, duration, retries, annotation badges, history preview, trace links, search tokens (s:, c:, @, b:), sorting, and context carry-over from tile selections. Key topics: KPI Tiles, Detailed Analysis Related: /platform/playwright-test-runs/specs, /platform/playwright-test-runs/errors
- [Playwright Test Run Spec File View](https://docs.testdino.com/platform/playwright-test-runs/specs): The Specs tab with two sub-views: Spec File (grouping results by file path with status bars, sort, filter, search) and Tag (grouping results by test-case tags to assess health of tag subsets like @smoke or @regression). Each view has a left panel list and right panel detail with test rows linking to full evidence. Key topics: Spec File View, Tag View Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/errors, /guides/playwright-test-annotations
- [Playwright Test Run Error Grouping](https://docs.testdino.com/platform/playwright-test-runs/errors): The Errors tab grouping failed and flaky tests by error message within a run. Covers search, tag filter, status filters (All/Failed/Flaky), expand/collapse controls, error group rows with affected test counts, test case rows with browser and retry info, and a side panel showing status, duration, retries, error message, stack trace, and link to full test case details. Key topics: Layout, Side Panel Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/specs, /platform/playwright-test-run-history
- [Playwright Test Run History in TestDino](https://docs.testdino.com/platform/playwright-test-run-history): The History tab showing outcome and duration trends for recent runs on the same branch and environment. Includes a Run History chart (passed/failed/flaky/skipped counts over time) and a Test Execution Time chart (total runtime per run with trend highlighting). Use it to detect instability, regressions, and environment changes. Key topics: Run History, Test Execution Time Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/specs, /platform/playwright-test-runs/configuration
- [Playwright Test Run Configuration View](https://docs.testdino.com/platform/playwright-test-runs/configuration): The Configuration tab showing the full execution context for a test run across four sections: Source Control (branch, commit, author, links), CI Pipeline (provider, workflow, build number, trigger), System Info (OS, CPU, memory, Node.js/Playwright versions), and Test Configuration (browsers, workers, retries, timeouts, reporters, artifacts). Use it to compare runs and find configuration drift. Key topics: 1. Source Control, 2. CI Pipeline, 3. System Info, 4. Test Configuration Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/specs, /platform/playwright-test-run-history
- [Playwright Test Run Code Coverage View](https://docs.testdino.com/platform/playwright-test-runs/coverage): The Coverage tab showing per-run code coverage with four summary metrics (statements, branches, functions, lines), a coverage badge on the Test Runs list, per-file breakdown in List and Tree views, and a Coverage Diff comparing current vs baseline (previous run or target branch). Only appears when coverage is enabled and the app is instrumented. Key topics: Coverage Badge, Coverage Summary, Coverage by File, Coverage Diff Related: /guides/playwright-code-coverage, /platform/analytics/playwright-code-coverage, /platform/test-runs/playwright-failure-summary
- [Playwright Test Run AI Insights](https://docs.testdino.com/platform/playwright-test-runs/ai-insights): The AI Insights tab analyzing a test run with KPI tiles for error variants, AI failure categorization (Actual Bug, UI Change, Unstable Test, Miscellaneous with confidence scores), and failure patterns (New Failures, Regressions, Consistent Failures). Includes an error analysis table with filtering by variant, category, and pattern. Key topics: KPI Tiles, Error Variants, AI Failure Categorization, Failure Patterns, Error Analysis, Filtering Related: /platform/test-runs/playwright-failure-summary, /platform/playwright-test-runs/specs, /platform/playwright-test-runs/errors
- [Test Run Analytics](https://docs.testdino.com/platform/analytics/test-run): The Test Run analytics view with KPI metrics (average run time, fastest run, speed improvement), a tag health table comparing stability across run-level tags, Speed by Branch Performance chart, Test Execution Efficiency Trends area chart, and Test Run Speed Distribution stacked bars categorizing daily runs into Fast/Normal/Slow groups. Key topics: Metrics, Tags, Speed by Branch Performance, Test Execution Efficiency Trends, Test Run Speed Distribution Related: /platform/analytics/playwright-test-health-summary, /platform/analytics/test-case, /platform/analytics/environment

### Test Cases

- [Playwright Test Cases](https://docs.testdino.com/platform/playwright-test-cases): The Test Case detail view for a single test result within a run, showing KPI tiles (status, total runtime, retry attempts), annotations panel, and evidence grouped by attempt: error details, test steps, screenshots, console, video, trace viewer, and visual comparison for toHaveScreenshot() tests with Diff/Actual/Expected/Side-by-side/Slider modes. Key topics: KPI Tiles, Annotations, Evidence Related: /platform/playwright-test-case-history, /guides/debug-playwright-failures/visual-evidence
- [Playwright Test Case AI Insights](https://docs.testdino.com/platform/playwright-test-cases/ai-insights): AI diagnosis for failed or flaky test cases with confidence-scored category labels, AI recommendations linking to recent changes, historical insight showing recurring vs. new failures, and targeted quick fixes. Available for failed or flaky tests. Key topics: Category and Confidence Score, AI Recommendations, Historical Insight, Quick Fixes Related: /platform/playwright-test-cases, /platform/playwright-test-case-history, /platform/playwright-test-runs
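The Test Cases view above shows an annotations panel and per-attempt test steps. On the authoring side these come from standard Playwright APIs plus the testdino:-prefixed annotation types from the Test Annotations guide. A sketch; the annotation values and the metric value format are illustrative assumptions.

```typescript
// checkout.spec.ts: annotations and named steps as they surface in TestDino.
import { test, expect } from "@playwright/test";

test("checkout applies discount", {
  annotation: [
    { type: "testdino:priority", description: "P1" },         // documented type
    { type: "testdino:owner", description: "payments-team" }, // documented type
  ],
}, async ({ page }) => {
  await test.step("open checkout", async () => {
    await page.goto("https://example.com/checkout");
  });

  const start = Date.now();
  await test.step("apply discount code", async () => {
    await page.getByLabel("Discount code").fill("SAVE10");
    await page.getByRole("button", { name: "Apply" }).click();
  });

  // Runtime custom metric: testdino:metric is documented, but the exact value
  // format here is an assumption; check the guide for the real shape.
  test.info().annotations.push({
    type: "testdino:metric",
    description: `discount-apply-ms=${Date.now() - start}`,
  });

  await expect(page.getByText("Discount applied")).toBeVisible();
});
```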
- [Playwright Test Case History Tracking](https://docs.testdino.com/platform/playwright-test-case-history): The History tab showing every execution of a single test case on the active branch with test metrics (stability percentage, total runs, outcome counts), last status tiles (Last Passed/Failed/Flaky with run links), and an execution history table with columns for timestamp, run ID, status, duration, retries, run location, and expandable error details. Key topics: What You See, How to Read Stability, Why It Matters Related: /platform/playwright-test-cases, /platform/playwright-test-runs
- [Playwright Test Explorer in TestDino](https://docs.testdino.com/platform/playwright-test-explorer): A centralized view of all test cases in a project with two modes: Hierarchical (grouped by spec file) and Flat (all tests in a single table). Sortable columns include executions, failure rate, flaky rate, average duration, platform, tags, recent status, and last run. Includes search with regex, filters (time period, tags, platforms, environment), and a details drawer with Overview (run history, platform analytics, environment analytics) and Errors (aggregated error messages with first/last seen dates). Key topics: View Modes, Table Columns, Filtering and Search, Test Case Details, Quick Start, Pagination Related: /platform/playwright-test-runs, /platform/playwright-test-cases, /platform/playwright-test-analytics
- [Test Case Analytics](https://docs.testdino.com/platform/analytics/test-case): The Test Case analytics view with KPIs (average test cases per run, fastest/slowest test, average duration), Slowest Test Cases table (avg duration, frequency, max duration, performance trend), Test Execution Performance chart with dynamic performance bands, New Test Cases trend chart tracking suite growth, and Test Cases Pass/Fail History comparing up to 10 tests side-by-side with pass rate trend lines. Key topics: Key Metrics, Slowest Test Cases, Test Execution Performance, New Test Cases, Test Cases Pass/Fail History Related: /platform/analytics/playwright-test-health-summary, /platform/analytics/test-run, /platform/analytics/errors

### Analytics

- [Playwright Test Analytics in TestDino](https://docs.testdino.com/platform/playwright-test-analytics): Overview of the Analytics section with six views: Summary (volume, stability, trends), Test Run (speed KPIs, tag health, branch performance), Test Case (duration, pass/fail trends), Errors (error grouping over time), Coverage (trends across branches), and Environment (per-environment pass rates and volume). Covers shared global filters (time period, environment, branches) and quick start steps. Key topics: What Analytics shows, Analytics Capabilities, Filters, Quick Start Steps Related: /platform/analytics/playwright-test-health-summary, /platform/analytics/test-run, /platform/analytics/errors
- [Playwright Test Health Analytics Summary](https://docs.testdino.com/platform/analytics/playwright-test-health-summary): The Summary analytics view with four chart sections: Test Run Volume (daily passed/failed with totals, average runs per day), Flakiness & Test Issues (flakiness percentage trend with flaky test list), New Failures (new failure rate trend with affected test list), and Test Retry Trends (total retries, total runs, retried test cases per day). Key topics: Test Run Volume, Flakiness & Test Issues, New Failures, Test Retry Trends Related: /platform/analytics/test-run, /platform/analytics/test-case, /platform/analytics/errors
- [Environment Analytics](https://docs.testdino.com/platform/analytics/environment): The Environment analytics view comparing test health across environments with Execution Results by Environment tiles (success rate, passed/failed counts), Environment Analysis showing branch and OS distribution, Pass Rate Trends time-series chart per environment, and Test Run Volume per environment over time. Readers can isolate environment-specific issues and identify infrastructure problems. Key topics: Execution Results by Environment, Environment Analysis, Pass Rate Trends, Test Run Volume Related: /platform/analytics/playwright-test-health-summary, /platform/analytics/test-run, /guides/environment-mapping
- [Playwright Code Coverage Analytics](https://docs.testdino.com/platform/analytics/playwright-code-coverage): The Coverage analytics view with a time-series chart plotting statement, branch, function, and line coverage across runs, Coverage by Branch comparison of average statement coverage, and a Coverage Diff view comparing coverage between branches or time periods sorted by largest regressions. Readers can track coverage trends, validate feature branch coverage, and identify files that lost coverage after refactors. Key topics: Coverage Trends, Coverage by Branch, Coverage Diff, Filters Related: /guides/playwright-code-coverage, /platform/playwright-test-runs/coverage, /platform/analytics/playwright-test-health-summary
- [Playwright Error Analytics in TestDino](https://docs.testdino.com/platform/analytics/errors): The Errors analytics tab grouping error messages by type (Assertion Failures, Timeout Issues, Element Not Found, Network Issues, JavaScript Errors, Browser Issues, Other) with three KPI tiles (total errors, unique error types, affected tests), an Error Message Over Time line graph showing daily error frequency by category, and an Error Categories table with occurrence counts, affected tests, first/last detected dates, and a side panel showing all affected test cases per error. Key topics: Error Type Reference, Metrics, Error Message Over Time, Error Categories Related: /platform/analytics/playwright-test-health-summary, /guides/playwright-error-grouping, /platform/playwright-test-runs/errors

### AI Failure Analysis

- [Playwright AI Failure Analysis Overview](https://docs.testdino.com/platform/playwright-ai-failure-analysis): Cross-run AI analysis that categorizes failures into Actual Bug, UI Change, Unstable Test, and Miscellaneous. Key metrics tiles show totals and top-impacted tests per category. Failure patterns section groups failures into Persistent (recurring across runs) and Emerging (recently started) with test name, branch, run IDs, and failure count. Scoped by time range and environment. Key topics: Get Started, Key Metrics, Failure Patterns, Persistent Failures, Emerging Failures Related: /platform/playwright-test-runs/ai-insights, /platform/playwright-test-cases/ai-insights, /platform/project-settings
- [Playwright AI Test Audit](https://docs.testdino.com/platform/playwright-ai-test-audit): AI-driven audit of Playwright test suites triggered through the TestDino MCP server. Returns a 0–100 health score across three bands (Excellent 85–100, Fair 65–84, Poor 0–64), prioritized issues with severity (Critical, High, Medium, Low) and category (9 categories: Surface-Level Tests, Missing Validation, Flaky or Unstable, Hard to Maintain, Missing Scenarios, Organization & Ownership, Setup & Configuration, Duplication & Overlap, General Issues), file-level evidence with line numbers, and a downloadable markdown report. Supports four scopes: Test Case, Feature, Spec File, Suite. Audit history is paginated 10 per page. Key topics: Setup, Reading the Report, Audit Score, Overview Tab, Issues, Quick Actions, Full Report Tab, Categories and Severity, Issue Categories, Severity Levels, Audit History Related: /platform/playwright-ai-failure-analysis, /mcp/overview, /mcp/tools-reference, /platform/project-settings
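As a companion to the coverage pages above, one of the instrumentation routes the Playwright Code Coverage guide documents is vite-plugin-istanbul. A minimal sketch; the include glob and option values are illustrative, so adapt them to your project.

```typescript
// vite.config.ts: instrument the app so Playwright runs emit coverage data.
import { defineConfig } from "vite";
import istanbul from "vite-plugin-istanbul";

export default defineConfig({
  plugins: [
    istanbul({
      include: "src/*",           // which sources to instrument
      extension: [".ts", ".tsx"], // file types to cover
      requireEnv: false,          // instrument unconditionally; gate via env if preferred
    }),
  ],
});
```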
### Test Management

- [Playwright Test Case Management](https://docs.testdino.com/test-management/playwright-test-case-management): Overview of the standalone Test Case Management workspace for creating, organizing, and maintaining manual and automated test cases. Covers key concepts (suite hierarchy up to 6 levels, list/grid views, custom fields, attachments, version history, import/export, bulk operations), workspace layout with KPI tiles, search and filter functionality, quick start steps, permissions (Admin/Editor/Viewer), and limits. Key topics: Key Concepts, Workspace Overview, Quick Start Steps, Permissions, Limits Related: /test-management/suites, /test-management/test-case/structure, /test-management/import-export
- [TestDino Test Suite Management](https://docs.testdino.com/test-management/suites): Explains how to organize test cases into a nested suite hierarchy, create root-level suites and subsuites, and perform operations like edit, delete, reorder, expand/collapse, and add subsuite. Covers the default "Unassigned" suite that holds imported test cases without suite information. Key topics: Hierarchy Model, Create Suites and Subsuites, Edit, reorder, expand, or collapse, Default "Unassigned" Suite Related: /test-management/test-case/structure, /test-management/test-case/organizing-at-scale, /test-management/import-export
- [Test Case Structure in TestDino](https://docs.testdino.com/test-management/test-case/structure): Defines the complete anatomy of a test case: core fields (title, description, key), classification (status, priority, severity, type, behavior, layer), automation fields (manual/automated, flaky, muted), pre/postconditions, test steps in Classic and Gherkin formats, tags, custom fields (text, textarea, number, dropdown, checkbox) with limits, attachments (up to 5), version history with diff and restore, and metadata. Key topics: Core Fields, Classification, Automation Fields, Pre/Post-conditions, Test Steps, Tags, Custom Fields, Attachments, Version History, Metadata Related: /test-management/test-case/creating-editing, /test-management/test-case/organizing-at-scale, /platform/project-settings
- [Creating and Editing Test Cases](https://docs.testdino.com/test-management/test-case/creating-editing): Describes three methods for creating test cases (full form via New Test Case button, quick creation within a suite, and from suite context menu), inline editing in Sheet View and Full-Screen View with double-click editing, and adding test steps in Classic and Gherkin formats. Includes video demos for each creation and editing method. Key topics: Three Creation Methods, Inline Editing, Adding Test Steps Related: /test-management/test-case/structure, /test-management/test-case/organizing-at-scale, /test-management/import-export
- [Organizing Test Cases at Scale](https://docs.testdino.com/test-management/test-case/organizing-at-scale): Guides teams on scaling their test repository using suites for hierarchical grouping and tags for cross-suite categorization (smoke, regression, etc.), with a comparison table for when to use each. Covers bulk operations on suites (reorder, delete, expand/collapse) and test cases (move to suite, change classifications, add/remove tags, delete), with a 200-item bulk edit limit. Key topics: Suite Assignment and Hierarchy, Bulk Operations Related: /test-management/suites, /test-management/test-case/structure, /test-management/import-export
- [Import and Export Test Cases in TestDino](https://docs.testdino.com/test-management/import-export): Covers bulk test case import from CSV (with column mapping, enum value mapping, duplicate handling, and preview) and from TestRail (with automatic field mapping and suite hierarchy preservation), downloading the CSV template, and exporting test cases as CSV with various scope scenarios (no filters, filtered, selected, suite-level). Unmapped CSV columns are automatically created as custom fields. Key topics: Import CSV, Import from TestRail, Download CSV Template, Export CSV Related: /test-management/test-case/structure, /test-management/suites, /mcp/overview

## Resources

### Integrations

- [TestDino Integrations Overview](https://docs.testdino.com/integrations/overview): Central hub listing all available integrations (GitHub, GitLab, Azure DevOps, TeamCity, Jira, Linear, Asana, monday, Slack App, Slack Webhook) with descriptions of what each provides: automated PR comments, CI checks, environment-routed Slack summaries, and prefilled issue creation. Readers understand the integration landscape and can navigate to any specific integration page. Key topics: How It Helps, What Integrations Provide, Available Integrations Related: /integrations/ci-cd/github, /integrations/jira-playwright-test-failures, /integrations/slack-playwright-test-alerts

#### CI/CD

- [GitHub Integration](https://docs.testdino.com/integrations/ci-cd/github): Guide to installing the TestDino GitHub App for automated test reporting on pull requests and commits, including app installation from the GitHub Marketplace, repository selection, configuring PR/commit comments with branch mapping, and setting up CI Checks with quality gates. Readers can get automated test summaries and pass/fail status checks on every PR. Key topics: How does it work?, Quick Start Steps, Why this helps Related: /guides/github-status-checks, /integrations/playwright-azure-devops, /integrations/ci-cd/teamcity
- [Playwright GitLab CI Integration](https://docs.testdino.com/integrations/playwright-gitlab-ci): Explains how to connect GitLab repositories to TestDino for automated test reporting on merge requests and commits, including authorization, comment settings with branch mapping, merge request sync, and how GitLab MRs appear in TestDino's Pull Requests view. Only one Git provider (GitHub or GitLab) can be active per project. Key topics: How It Works, Quick Start Steps, Merge Requests in TestDino, CLI Compatibility Related: /integrations/ci-cd/github, /platform/pull-requests/summary, /guides/environment-mapping
- [GitLab Self-Managed Integration](https://docs.testdino.com/integrations/playwright-gitlab-self-hosted): Connect TestDino to a GitLab instance running on your own infrastructure (on-prem, VPC, or corporate domain) via OAuth. Covers registering a GitLab OAuth Application, entering Instance URL / Application ID / Secret in TestDino, authorizing via your GitLab, MR and commit comment configuration with per-environment overrides, encrypted secret storage (AES-256-GCM), automatic token refresh, and managing or switching the connection. Self-signed TLS certificates are not supported. Key topics: What the Self-Managed Integration Does, Prerequisites, Create a GitLab OAuth Application, Connect TestDino with Self-Managed GitLab, Configure MR and Commit Comments, Security, Manage the Connection, Troubleshooting, FAQ Related: /integrations/playwright-gitlab-ci, /integrations/ci-cd/github, /guides/environment-mapping
- [Playwright Azure DevOps Integration](https://docs.testdino.com/integrations/playwright-azure-devops): Walks through installing the TestDino Azure DevOps extension from the Visual Studio Marketplace, generating a Project Access Token, connecting the extension, viewing test runs with pass/fail/flaky/skipped counts directly inside Azure DevOps, filtering by time range, and managing tokens. The extension provides read-only access over HTTPS. Key topics: Quick Reference, Key Features, Prerequisites, Install the Extension, Connect Azure DevOps to TestDino, Viewing Test Runs, Filtering and Refreshing Data, Removing or Updating the API Token, Permissions and Security, Troubleshooting Related: /integrations/ci-cd/github, /integrations/ci-cd/teamcity, /getting-started
- [TeamCity Integration](https://docs.testdino.com/integrations/ci-cd/teamcity): Overview of the TestDino TeamCity Recipe that uploads Playwright test reports from TeamCity builds, including quick start steps for installing the recipe, adding the build step, configuring upload options (JSON, HTML, images, videos, traces, full bundle), and what gets uploaded. Links to the detailed TeamCity setup guide for configuration options and troubleshooting. Key topics: How does it work?, Quick Start Steps, What gets uploaded, Why this helps Related: /guides/playwright-teamcity, /integrations/playwright-azure-devops, /integrations/ci-cd/github

#### Issue Tracking

- [Jira Integration for Playwright Failures](https://docs.testdino.com/integrations/jira-playwright-test-failures): Shows how to connect Jira and create prefilled bug reports directly from failed or flaky Playwright tests in TestDino. The prefilled form includes Jira fields (project, issue type, priority, labels, assignee, sprint), a structured description with test details, failure information, focused steps, links, and screenshots. Available on Pro, Team, and Enterprise plans. Key topics: How Jira works with TestDino, Create a Jira bug report in TestDino, What TestDino pre-fills, After you create the Issue, Why this helps Related: /integrations/issue-tracking/linear, /integrations/issue-tracking/asana, /integrations/slack-playwright-test-alerts
- [Linear Integration](https://docs.testdino.com/integrations/issue-tracking/linear): Explains how to connect a Linear workspace and create prefilled bug reports from failed or flaky tests, with workspace/team selection, issue type, priority, labels, assignee, and a structured description containing test details, failure cluster, code context, recent history, console output, and links. Available on Pro, Team, and Enterprise plans. Key topics: How Linear works with TestDino, Create a Linear bug report in TestDino, What TestDino pre-fills, After you create the issue, Why this helps Related: /integrations/jira-playwright-test-failures, /integrations/issue-tracking/asana, /integrations/slack-playwright-test-alerts
- [Asana Integration](https://docs.testdino.com/integrations/issue-tracking/asana): Describes how to connect an Asana workspace and create prefilled tasks from failed or flaky Playwright tests, with workspace/project selection, labels, assignee, and a description containing test details, failure information, test steps, console output, links, and screenshots. Available on Pro, Team, and Enterprise plans. Key topics: How Asana works with TestDino, Create an Asana task in TestDino, What TestDino pre-fills, After you create the issue, Why this helps Related: /integrations/jira-playwright-test-failures, /integrations/issue-tracking/linear, /integrations/slack-playwright-test-alerts
- [monday.com Integration](https://docs.testdino.com/integrations/issue-tracking/mon): Covers the monday.com integration for creating items from failed/flaky tests with full test context mapping to monday fields, plus a dashboard widget that displays latest test run stats (run ID, status summary, duration, environment, branch) directly inside monday. Available on Pro and Enterprise plans. Key topics: Quick Reference, Prerequisites, What you can do, Create a monday bug report in TestDino, What TestDino pre-fills, TestDino Widget for monday, Integration Setup, Why use this integration Related: /integrations/jira-playwright-test-failures, /integrations/issue-tracking/linear, /integrations/slack-playwright-test-alerts

#### Communication

- [Slack Integration](https://docs.testdino.com/integrations/slack-playwright-test-alerts): Explains the Slack App integration that sends run summaries with environment-based channel routing and annotation-based alerts when individual tests with testdino:notify-slack fail. Covers connecting Slack via OAuth, mapping default and environment-specific channels, configuring annotation-to-Slack mappings, and how it differs from the Slack Webhook (which sends to a single channel without routing). Key topics: How does it work?, Quick Start Steps, Configuration Scenarios, Why this helps, Annotation-Based Alerts, How it's Different from Slack Webhook Related: /guides/playwright-test-annotations, /integrations/slack/webhook
- [Slack Webhook Integration for Playwright](https://docs.testdino.com/integrations/slack/webhook): Covers the simpler Slack Webhook integration that sends test run summaries (status, counts, success rate, duration, environment, branch, author, commit) to a single Slack channel via an Incoming Webhook URL. Available on Pro, Team, and Enterprise plans. Does not support environment routing or annotation-based alerts. Key topics: What a Slack message contains, Set up Slack Related: /integrations/slack-playwright-test-alerts

### Data Privacy

- [TestDino Data Privacy Overview](https://docs.testdino.com/data-privacy/overview): High-level overview of TestDino's data privacy posture, covering what data is collected, links to detailed pages on access, redaction, retention, and cloud endpoints. Readers understand the privacy architecture and know where to find detailed policies. Key topics: Quick Reference Related: /data-privacy/access-to-customer-data, /data-privacy/data-redaction, /data-privacy/data-retention
- [TestDino Access to Customer Data](https://docs.testdino.com/data-privacy/access-to-customer-data): Exhaustive catalog of every data category TestDino collects: team and administration, organization data, source attribution, test results (run-level, test case, test steps, flaky test data), artifacts, CI/CD environment data (git, PR, pipeline, system, framework config, sharding), third-party integrations (GitHub, Jira, Linear, Asana, Slack, monday, Razorpay), and billing. Also lists what TestDino does NOT collect. Key topics: Team & Administration, Organization Data, Source Attribution, Test Results, Artifacts, CI/CD Environment Data, Third-Party Integrations, Billing & Subscription Data, What TestDino Does NOT Collect Related: /data-privacy/data-redaction, /data-privacy/data-retention
- [TestDino Data Redaction for Playwright](https://docs.testdino.com/data-privacy/data-redaction): Describes the Enterprise-plan data redaction feature that automatically detects and masks secrets (API keys, tokens, passwords, connection strings, private keys) in uploaded artifacts before they appear in the dashboard. Covers the detection-scrubbing-backup-display pipeline, what gets and does not get redacted, and infrastructure-level log redaction active on all plans. Key topics: How Redaction Works, What Gets Redacted, What Does NOT Get Redacted, Infrastructure-Level Log Redaction Related: /data-privacy/access-to-customer-data, /data-privacy/data-retention
- [TestDino Data Retention Policies](https://docs.testdino.com/data-privacy/data-retention): Details retention periods by subscription tier (Free 14d, Pro 90d, Team 365d, Enterprise custom) and by data category (accounts, sessions, API keys, audit logs, etc.), automated cleanup job schedules, cascade deletion behavior, and GDPR data rights (export, deletion, portability). Readers understand exactly when their data expires and how to request exports or deletions. Key topics: Retention by Subscription Tier, Retention by Data Category, Automated Cleanup, GDPR Data Rights, Important Notes Related: /data-privacy/access-to-customer-data, /data-privacy/cloud-endpoints
- [TestDino Cloud Endpoints and Domains](https://docs.testdino.com/data-privacy/cloud-endpoints): Lists all internet-facing TestDino services (dashboard, API, reporter, WebSocket, auth, billing webhook, GitHub webhook), internal services (health check, Azure Blob Storage), and network security configuration (CORS, TLS 1.2+, rate limiting, security headers). Includes a firewall allowlist for organizations with network-level restrictions.
Key topics: Core Services, Authentication & Billing, Integration Services, Internal Services, Network & Security, Firewall Configuration Related: /data-privacy/access-to-customer-data, /data-privacy/data-retention ### Support & Changelog - [TestDino Support and Contact Channels](https://docs.testdino.com/support): Lists all support channels (email, Discord), support hours, how to report issues with the right details, CLI debug mode for verbose logging, feature request submission, and security disclosure procedures. Readers will know exactly what information to include when filing a support ticket and how to enable debug output. Key topics: Support Channels, Support Hours, Reporting Issues, CLI Debug Mode, Feature Requests, Security Issues, Resources Related: /faqs, /getting-started - [TestDino Changelog and Release Notes](https://docs.testdino.com/changelog): Directs users to the external changelog, roadmap, and feedback board for product updates, and lists CLI package changelogs for both the Node.js (tdpw) and Python (testdino) packages on npm and PyPI. Readers can stay informed about new features, vote on requests, and track CLI version history. Key topics: Product Updates, Feedback & Feature Requests, CLI Changelogs, Stay Connected Related: (external links only) ## Public API - [TestDino Public API Overview](https://docs.testdino.com/api-reference/overview): Entry point for the read-only REST API. Covers what it exposes, the project-scoped base URL (`https://api.testdino.com/api/public/v1/{projectId}/...`), a "What's covered" table mapping all 11 endpoint sections, and a CardGroup index linking to API standards (auth, rate limits, response format, errors, pagination, date filtering). Use it to pull test data into internal dashboards, build custom reports, or integrate TestDino into CI tooling. Key topics: Base URL, What's covered, API standards Related: /api-reference/quickstart, /api-reference/conventions, /guides/generate-api-keys - [API Quickstart](https://docs.testdino.com/api-reference/quickstart): Step-by-step walkthrough for generating a Project PAT (`tdp_` prefix, `public-api` scope), exporting it as an environment variable, verifying with `/token-info`, and calling `/test-runs`. Includes curl, Node.js, and Python snippets, plus troubleshooting for 401/403/429. Key topics: Prerequisites, Steps, What to try next, Troubleshooting Related: /api-reference/overview, /api-reference/conventions, /guides/generate-api-keys - [API Standards](https://docs.testdino.com/api-reference/conventions): Single page documenting every shared mechanic across endpoints — authentication, rate limits (100/min per token, 200/min per IP, 1/min for PDF), response envelope, error codes, `page`/`limit` pagination, date filtering (`dateRange` vs `startDate`+`endDate` with precedence rules), and `?include=` parameters. Key topics: Quick reference, Authentication, Rate limits, Response format, Errors, Pagination, Date filtering, Include parameters Related: /api-reference/overview, /api-reference/quickstart - Auto-generated endpoint pages (16 total, 11 sections) from `/api-reference/openapi.yml`: Token Info, Test Runs, Test Cases, Specs, Manual Tests, Test Case Explorer, Dashboard, Filters, Reports (PDF), Analytics (summary + performance), Usage. Each includes interactive playground, request/response schemas, and multi-language examples. 
`?include=` usage (errors/coverage/specs/artifacts on `/test-runs/{runId}`; history pagination on `/test-cases/{caseId}`) is documented in [API Standards](https://docs.testdino.com/api-reference/conventions#include-parameters). One endpoint has a hand-written MDX override: `GET /reports/pdf` (binary response + 1/min rate limit). --- # Full Page Content > Complete rendered content of all TestDino documentation pages below. ## TestDino Playwright Test Reporting > Source: https://docs.testdino.com/index > Description: Centralized Playwright test reporting with trace viewer, flaky test detection, error grouping, and CI integration. TestDino is a Playwright-focused test reporting and management platform with MCP support. It reduces CI time and costs while maintaining reliable test suites across teams and repositories. Developers, SDETs, QA Engineers, and Engineering Managers use TestDino to manage test cases, track test health, catch regressions early, and maintain large-scale test suites. ## Get Started - [Quick Setup](https://docs.testdino.com/getting-started): Set up TestDino in minutes - [Check Sandbox](https://sandbox.testdino.com): See sample test reports ## Quick Reference | Feature | Description | | :--- | :--- | | [Real-time streaming](/guides/playwright-real-time-test-streaming) | Results appear on the dashboard as each test completes | | [Suite history and dashboard](/platform/playwright-test-dashboard) | Unified view of test health, trends, flaky tests, and PR status | | [CI optimization](/guides/playwright-ci-optimization) | Rerun only failed tests to cut runtime and CI costs | | [Evidence panel](/guides/debug-playwright-test-failures) | Screenshots, videos, console logs, traces, and visual diffs | | [Flake tracking](/guides/playwright-flaky-test-detection) | Detect and categorize flaky tests across runs | | [Error grouping](/guides/playwright-error-grouping) | Group failures by root cause and error message | | [MCP Server](/mcp/overview) | AI agents query test runs, logs, traces, and suggest fixes | | [Test management](/test-management/playwright-test-case-management) | Organize test cases, suites, and ownership linked to real runs | | [Automated reports](/guides/automated-playwright-reports) | Scheduled PDF summaries on daily, weekly, or monthly cadence | ## What You Can Do ### Analyze Failures | Feature | Description | | :------ | :---------- | | [**Dashboard**](/platform/playwright-test-dashboard) | Unified view of test health, trends, flaky tests, and PR status. | | [**Branch Environment Mapping**](/guides/environment-mapping) | Map branches to environments so trends and failures stay comparable. | | [**Test Runs**](/platform/playwright-test-runs) | Debug with traces, screenshots, videos, and console logs. | ### Optimize CI Runtime | Feature | Description | | :------ | :---------- | | [**Rerun Failed Tests**](/guides/rerun-failed-playwright-tests) | Re-run only what failed to cut pipeline time. | | [**Smart Rerun Detection and Grouping**](/platform/playwright-test-runs#test-run-grouping) | Groups reruns by branch and commit with full attempt history tracking. | | [**GitHub CI Checks**](/guides/github-status-checks) | Block merges until tests meet quality gates. | ### Reduce Manual Triage | Feature | Description | | :------ | :---------- | | [**Integrations**](/integrations/overview) | Create tickets and send Slack summaries by environment. | | [**MCP Server**](/mcp/overview) | Let AI agents query your test data directly. 
| | [**Test Case Management**](/platform/playwright-test-cases) | Organize tests in suites with bulk operations. | ## Watch the 2 minute demo [Video: TestDino MCP Server video](https://www.youtube.com/embed/bBwu88xWpdI?si=axqOvcNbRQAJiU8W) ## Community and Contact - [Discord](https://discord.gg/hGY9kqSm58): Join the community for questions and updates. - [GitHub](https://github.com/testdino-hq): Follow us on GitHub - [Email](mailto:support@testdino.com): [support@testdino.com](mailto:support@testdino.com) - [FAQs](https://docs.testdino.com/faqs): Find answers to common questions. --- ## Get Started with TestDino > Source: https://docs.testdino.com/getting-started > Description: Set up TestDino in 2 minutes. Configure Playwright reporters, generate an API key, and upload your first test report with traces and screenshots. TestDino collects Playwright test results and organizes them into dashboards with traces, screenshots, logs, and trends. ## How It Works 1. Configure Playwright to output JSON and HTML reports 2. Run your tests 3. Upload the report to TestDino using the CLI ## Setup ### Create a Project & API Key Sign in to [TestDino](https://app.testdino.com) and create a new organization and project. ![Create a new project in TestDino](https://testdinostr.blob.core.windows.net/docs/docs/getting-started/create-new-project.webp) Then generate an [API key](/guides/generate-api-keys) and copy it — it is only shown once. ![Generate a key](https://testdinostr.blob.core.windows.net/docs/docs/getting-started/generate-api-key.webp) > **Warning:** Treat the API key as a secret. Store it in your CI's secret store if you plan to upload from CI. ### Configure Playwright Reporters Add JSON and HTML reporters to your Playwright config: ```javascript playwright.config.js reporter: [ ['html', { outputFolder: './playwright-report' }], ['json', { outputFile: './playwright-report/report.json' }], ] ``` > **Note:** The HTML reporter **must** be listed before the JSON reporter. Playwright's HTML reporter clears its output directory on each run, so placing it first ensures `report.json` is not deleted. ### Run Tests & Upload **Node.js:** ```bash # Install the TestDino CLI (first time only) npm install tdpw # Run your tests npx playwright test # Upload results to TestDino npx tdpw upload ./playwright-report --token="your-api-key" --upload-html ``` Omit `--upload-html` if you did not configure the HTML reporter. **Python:** ```bash # Install dependencies (first time only) pip install pytest-playwright-json pytest-html testdino # Run your tests pytest \ --playwright-json=test-results/report.json \ --html=test-results/index.html \ --self-contained-html # Upload results to TestDino testdino upload ./test-results --token="your-api-key" --upload-full-json ``` Omit `--upload-full-json` if you skipped HTML. Use `--upload-images` or `--upload-videos` as needed. ### Verify in Dashboard Open **Test Runs** in your project. Confirm the run appears with pass/fail counts, duration, and screenshots (if HTML was uploaded). ## CI/CD Integration (Optional) Add these steps to your workflow **after** your test execution step.
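In any CI provider, the upload itself is a single command that runs after your tests. The snippet below is a generic shell sketch of that pattern, assuming your API key is exposed as a `TESTDINO_TOKEN` environment variable from your CI's secret store; provider-specific examples follow.

```bash
# Generic CI pattern: run tests, upload results even if tests failed,
# then exit with the original test status so the pipeline still reflects it.
npx playwright test || TEST_EXIT=$?
npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN" --upload-html
exit ${TEST_EXIT:-0}
```

The `|| TEST_EXIT=$?` guard plays the same role as `if: always()` in the GitHub Actions steps below: results are uploaded even when tests fail, and the job still reports the real test outcome.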
**GitHub Actions - Node.js:** ```yaml .github/workflows/test.yml - name: Install TestDino CLI run: npm install tdpw - name: Upload to TestDino if: always() run: npx tdpw upload ./playwright-report --token="${{secrets.TESTDINO_TOKEN}}" --upload-html ``` **GitHub Actions - Python:** ```yaml .github/workflows/test.yml - name: Install dependencies run: | pip install pytest pytest-playwright pytest-playwright-json pytest-html testdino playwright install chromium --with-deps - name: Run tests run: | pytest \ --html=test-results/index.html \ --self-contained-html \ --playwright-json=test-results/report.json - name: Upload to TestDino if: always() run: testdino upload ./test-results --token="${{secrets.TESTDINO_TOKEN}}" --upload-full-json ``` ## Next Steps - [Branch Mapping](https://docs.testdino.com/platform/project-settings#branch-mapping): Configure branch mapping for your project - [Pull Requests](https://docs.testdino.com/platform/pull-requests/overview): View test results linked to pull requests - [Analytics](https://docs.testdino.com/platform/playwright-test-analytics): Explore test analytics and insights - [Users & Roles](https://docs.testdino.com/platform/organizations/users-roles): Manage organization members and permissions ## TestDino Frequently Asked Questions > Source: https://docs.testdino.com/faqs > Description: Answers to common questions about TestDino setup, API keys, Playwright test runs, integrations, and billing. Get quick answers to the most common questions about TestDino. Browse through topics including getting started, API keys, test runs, integrations, and billing. Click on any question to expand the answer. ## Getting Started **What is TestDino and how does it work?** [TestDino](https://testdino.com/) is a Playwright-focused test reporting and management platform with MCP support. It ingests Playwright reports from CI or local execution and provides actionable insights. **How it works:** 1. Configure Playwright to emit JSON and HTML reports 2. Upload reports using the CLI (`tdpw` or `testdino`) or through CI 3. TestDino processes results and links runs to branches/PRs 4. View results in dashboards, track trends in Analytics, and create tickets from failures **What problems does TestDino solve for QA teams?** TestDino eliminates the 6 to 8 hours teams spend weekly on manual test failure analysis. - **Manual triage** - Error grouping and flakiness tracking reduce manual investigation - **Scattered evidence** - Aggregates screenshots, videos, traces, and logs in one place - **No historical context** - Tracks trends and flakiness across runs - **Slow handoffs** - Pre-fills Jira/Linear/Asana tickets with full context - **Unclear readiness** - GitHub CI Checks give clear pass/fail signals **How do I get started with TestDino?** To get started: 1. Create an organization and a project 2. Generate an API key from your project settings 3. Configure your Playwright reporter to output JSON format 4. Upload your first test run using the TestDino CLI For detailed instructions, see [Getting Started](/getting-started). **What files do I need to upload to TestDino?** TestDino requires a **JSON report** (mandatory) generated by Playwright. The HTML report is optional but recommended for full artifact support. 
**Required:** * `report.json` - Contains test results, metadata, and structure **Optional (for richer debugging):** * HTML report - Enables screenshots, videos, and trace viewing * Traces - For interactive step-by-step debugging * Videos - For visual test playback * Screenshots - For failure evidence Configure Playwright to generate these in your `playwright.config.js`, then upload the report folder using the CLI. **Do I need to change my existing Playwright tests?** **No.** TestDino works with your existing Playwright tests without code modifications. You only need to: 1. [Add JSON and HTML](/getting-started#configure-playwright-reporters) reporters to your `playwright.config.js` 2. Upload reports using the TestDino CLI after tests run TestDino reads Playwright's standard report output. It doesn't require custom annotations, special imports, or framework changes. ## Setup and Configuration **How does TestDino differ from Playwright's built-in HTML and JSON reporters?** Playwright reporters show a **single run snapshot**. [TestDino adds](/index#why-testdino): - **Cross-run analytics**: Trends and failure patterns over time - **Git and PR awareness**: Links test runs to commits, branches, and PRs - **Integrations**: Jira, Linear, Asana, Slack, GitHub - **Historical tracking**: Stability scores and regression detection Playwright shows what happened. TestDino explains why it happened and what to do next. **Which programming languages does TestDino support?** TestDino officially supports Playwright tests written in: * **JavaScript / TypeScript** - Use the `tdpw` CLI (npm package) * **Python** - Use the `testdino` CLI (PyPI package) with `pytest-playwright` Both CLIs provide the same core features: upload reports, cache metadata, and retrieve failed tests. **How do I upload Playwright reports using the JavaScript or Python CLI?** **JavaScript (tdpw):** ```bash npx tdpw upload ./playwright-report --token="your-api-key" --upload-html ``` **Python (testdino):** ```bash # Run tests with JSON output pytest --playwright-json=test-results/report.json # Upload testdino upload ./test-results --token="your-api-key" --upload-full-json ``` **Key flags:** `--upload-html`, `--upload-images`, `--upload-videos`, `--upload-traces`, `--upload-full-json` **Does TestDino work with monorepos?** **Yes.** TestDino works with monorepos without special configuration. Each Playwright project in your monorepo can upload to the same or different TestDino projects. Just point the CLI to the correct report directory for each: ```bash npx tdpw upload ./apps/web/playwright-report --token="your-api-key" ``` ```bash npx tdpw upload ./apps/mobile/playwright-report --token="your-api-key" ``` Use separate TestDino projects if you want isolated analytics, or one project if you want unified reporting. **Can I use TestDino if my tests are in a separate repository?** **Yes.** TestDino doesn't require your test code and application code to be in the same repository. TestDino ingests Playwright reports, not source code. As long as your CI generates and uploads reports, TestDino will process them. Use environment mapping to link branches across repos if needed. For GitHub integration, install the TestDino app on the repository where tests run. PR comments and CI checks will appear there. **Do I need MCP to use TestDino?** **No.** MCP (Model Context Protocol) is completely optional.
TestDino works fully through: - **Web dashboard** - View test runs, analytics, and dashboards - **CLI** - Upload reports, cache results, rerun failed tests - **Integrations** - GitHub, Slack, Jira, Linear, Asana MCP is an add-on for AI-assisted workflows. It lets you query test data through Cursor or Claude Desktop using natural language. You can ignore it entirely if you prefer the web UI. ## API Keys and Authentication **How do I generate and manage API keys?** 1. Go to **Project Settings > API Keys** 2. Click **Generate Key** 3. Name the key and set an expiration (if available) 4. Copy the secret immediately and store it in your secret manager 5. Use it in CI as an environment variable, then reference it in the upload command ![API Keys Management](https://testdinostr.blob.core.windows.net/docs/docs/faqs/api-keys.webp) **My API key expired. How do I rotate it?** 1. Generate a new key in **Project Settings > API Keys** 2. Update CI secrets with the new key 3. Run one upload to confirm it works 4. Revoke or delete the old key **How do I troubleshoot API request failures or run ID not found errors?** **API failures:** - Verify `TESTDINO_API_KEY` is set correctly - Check internet connectivity - Look for HTTP status codes in error messages **Run ID not found:** - Use `list_testruns` to confirm the run exists - Verify you're querying the correct project - Check if the run ID format is correct (or use the counter instead) ## Test Runs and Uploads **How do I upload test results from CI?** Run tests, then upload the report folder: ```bash npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-html ``` **Why are my uploaded runs not appearing?** Check these: 1. **API key**: Verify the token is correct and not expired 2. **Report path**: Ensure the folder contains `report.json` 3. **Project match**: API key must belong to the target project 4. **Upload success**: Check CLI output for errors 5. **Sync**: Click the Sync button in the Test Runs view Use `--verbose` for detailed upload logs. **What is the difference between a test run, a test case, and a spec file?** **Test Run:** One full execution of your suite, equivalent to one `playwright test` command ![Test Run View](https://testdinostr.blob.core.windows.net/docs/docs/faqs/test-runs.webp) **Test Case:** One individual test, equivalent to one `test()` block ![Test Case Details](https://testdinostr.blob.core.windows.net/docs/docs/faqs/test-case.webp) **Spec file:** One test file that contains one or more test cases ![Spec File View](https://testdinostr.blob.core.windows.net/docs/docs/faqs/specs.webp) ![Spec Tab View](https://testdinostr.blob.core.windows.net/docs/docs/faqs/spec-tab.webp) **What information does the Test Run Summary provide?** It groups tests by cause with KPI tiles: - **Failed** - Assertion Failure, Element Not Found, Timeout, Network, Other - **Flaky** - Timing, Environment, Network, Assertion, Other - **Skipped** - Manual, Configuration, Conditional ![Test Run Summary](https://testdinostr.blob.core.windows.net/docs/docs/faqs/test-runs-summary-kpi.webp) [Detailed Analysis](/platform/test-runs/playwright-failure-summary#detailed-analysis) shows each test with status, spec file, duration, retries, and 10-run history preview. **Filter** using tokens: `s:` (status), `c:` (cluster), `@` (tag), `b:` (browser).
![Detailed Analysis](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/summary/detailed-analysis.webp) **How do I view Playwright traces in TestDino?** Traces are accessible in two places: 1. [**Test Run Summary**](/platform/test-runs/playwright-failure-summary#detailed-analysis) - Each failed/flaky test case row includes a "Trace #" link. Click it to open the full Playwright trace viewer. 2. [**Test Case Details > Trace tab**](/platform/playwright-test-cases#6-trace) - The interactive Playwright trace shows timeline, actions, network calls, console output, and DOM snapshots. Jump directly to the failing step for root-cause analysis. Traces open in Playwright's trace viewer, letting you inspect exactly what happened during test execution. ![Trace Viewer](https://testdinostr.blob.core.windows.net/docs/docs/faqs/trace.webp) **What's the difference between traces and videos in TestDino?** [**Traces**](/platform/playwright-test-cases#6-trace) are interactive debugging tools. In Trace Viewer, you can see actions, network calls, console logs, and DOM snapshots. You can step through execution, inspect element states, and see exactly what Playwright did at each moment. ![Trace Viewer](https://testdinostr.blob.core.windows.net/docs/docs/faqs/trace.webp) [**Videos**](/platform/playwright-test-cases#5-video) are screen recordings of test execution. They show what the browser rendered, but don't provide interactive debugging or network/console data. ![Trace and Video Viewer](https://testdinostr.blob.core.windows.net/docs/docs/faqs/trace-and-video.png) Use videos for quick visual review and the trace viewer for deep debugging of failures. **Can I search for specific test cases or errors?** Yes. Use search and filters across the [**Test Runs**](/platform/playwright-test-runs#search-and-filters) and [**Errors**](/platform/playwright-test-runs/errors#search) views to: - Search by commit message or run number - Filter by status (passed, failed, flaky, skipped) - Filter by committer, branch, or environment - Group failures by error message in the Errors view ![Error View with Filters](https://testdinostr.blob.core.windows.net/docs/docs/faqs/test-runs-error.webp) **Why are PRs not linking to test runs?** **Requirements:** 1. GitHub integration must be installed and connected 2. Test runs must include commit SHA metadata 3. The branch must be associated with an open PR **Verify:** - Check **Settings > Integrations > GitHub** shows connected - Confirm CI workflow includes git context in the upload - Ensure PR exists for the branch ## Flakiness and Test Health **How does TestDino detect flaky tests?** TestDino identifies flaky tests by analyzing behavior across attempts and runs: **Within a single run:** * A test that fails initially but passes on retry is marked flaky * Retry attempts are tracked separately **Across multiple runs:** * Tests with inconsistent outcomes (pass in one run, fail in another) without code changes * Historical stability percentage calculated as (Passed / Total Runs) x 100 TestDino also sub-categorizes flaky tests by root cause: Timing Related, Environment Dependent, Network Dependent, Assertion Intermittent, or Other Flaky. **How can I view and analyze flaky tests across multiple runs?** Multiple views available: 1. **Dashboard** - "[Most Flaky Tests](/platform/playwright-test-dashboard#most-flaky-tests)" section 2. **Analytics** - "[Flakiness & Test Issues](/platform/analytics/playwright-test-health-summary#flakiness--test-issues)" chart with list 3.
[**Test Cases History**](/platform/playwright-test-case-history) - Stability score and "Last Flaky" tile 4. **Test Explorer** - "[Flaky Rate](/platform/playwright-test-explorer#table-columns)" column for all spec files and test cases **How does TestDino track historical stability for individual test cases?** The [**Test Case History**](/platform/playwright-test-case-history) tab shows: - **Stability %** - (Passed / Total Runs) x 100 - **Last Status Tiles** - Links to Last Passed, Last Failed, Last Flaky runs - **Execution History Table** - Status, duration, retries per run (expandable for error details) History is scoped to the current branch. ![Test Case History Tab](https://testdinostr.blob.core.windows.net/docs/docs/faqs/history-tab.webp) ## Integrations **Which integrations does TestDino support?** [TestDino supports](/integrations/overview#available-integrations): - **CI/CD -** GitHub, GitLab, Azure DevOps, TeamCity - **Issue tracking -** Jira, Linear, Asana, monday - **Communication -** Slack App, Slack Webhook **How do I integrate TestDino with GitHub for PR checks and comments?** 1. Install the [TestDino GitHub App](https://github.com/apps/testdino-playwright-reporter) 2. Select repositories to grant access 3. In [**Settings > Integrations > GitHub**](/integrations/ci-cd/github), configure: - **Comments** - Enable PR and commit comments per environment - **CI Checks** - Enable checks with pass rate thresholds **What Quality Gate rules are available for GitHub CI Checks?** [Quality Gate Settings](/guides/github-status-checks#quality-gate-settings): - **Pass Rate** - Minimum % of tests that must pass (default: 90%) - **Mandatory Tags** - Tests with these tags (e.g., `@critical`) must all pass - **Flaky Handling** - Strict (flaky = failure) or Neutral (flaky excluded from calculation) - **Environment Overrides** - Different rules per environment ![GitHub CI Checks Configuration](https://testdinostr.blob.core.windows.net/docs/docs/faqs/github-settings.webp) **Why might a TestDino CI Check fail even if my pass rate looks high?** Most common reason: A [Mandatory Tag](/guides/github-status-checks#3-mandatory-tags-not-working) test failed. If you configured `@critical` as mandatory and one critical test fails, the check fails regardless of the overall pass rate. **Other causes:** - Flaky Handling set to "Strict" and flaky tests present - Environment Override has stricter rules than defaults **How do I create Jira or Linear issues from failed tests?** 1. Connect the integration in **Project Settings > Integrations** 2. Configure the default project (Jira) or team (Linear) 3. Open a failed test case in TestDino 4. Click **Raise Bug** or **Raise Issue** 5. The issue is created with test details, error message, failure history, and links [Video: Create Jira or Linear Issues](https://www.youtube.com/embed/M7Hg4TpjOM8) ## Environment Mapping and Branch Management **What is Environment Mapping, and why is it important?** Environment Mapping links Git branches to environments (Production, Staging, Dev) using exact names or regex patterns. Configure in Settings > Branch Mapping. **Why it matters:** - Rolls up short-lived branches (feature/\*, PR branches) to the correct environment - Enables environment-specific CI Check rules - Routes Slack notifications to the right channels - Filters dashboards and analytics by environment Learn more at [Environment Mapping](/guides/environment-mapping).
[Video: Branch Mapping in TestDino](https://www.youtube.com/embed/oVaYPIsYrJA) **Can I override environment mapping via the CLI?** Yes. Enable **CLI Environment Override** in Project Settings, then upload with: ```bash npx tdpw upload ./playwright-report --token="your-token" --environment="staging" ``` **What happens if a branch does not match any mapping?** The run appears without an environment label and may not appear in environment-filtered views. **Solutions:** - Add a catch-all pattern (e.g., `.*` → Development) - Add patterns that match your branch naming convention - Runs remain visible in the unfiltered Test Runs list See [Environment Mapping Best Practices](/guides/environment-mapping#best-practices) for details. ## Organizations and Projects **What is the difference between organizations and projects?** **Organization**: Top-level container for your team, users, billing, and settings ![Organization Overview](https://testdinostr.blob.core.windows.net/docs/docs/faqs/organization.webp) **Project**: One test suite or application with its own runs, keys, and integrations. Actions in one project don't affect others ![Project View](https://testdinostr.blob.core.windows.net/docs/docs/faqs/projects.webp) **Hierarchy:** Organization → Projects → Test Runs **How do I invite team members and assign roles?** 1. Go to your organization's [Users & Roles](/platform/organizations/users-roles) tab 2. Click **Invite Member** and enter their email address 3. Assign a role (Owner, Admin, Member, or Viewer) 4. Track invitations and adjust roles as your team grows **What roles exist at the organization level?** - **Owner** - Full control, can invite/update/remove anyone - **Admin** - Manages people and settings, can't remove Owner - **Member** - Contributes to projects - **Viewer** - Read-only access ## Billing and Pricing **What are the plan limits, and how is usage calculated?** [Plans](/pricing#plans-and-limits) are typically based on test executions and user or project limits. Usage is measured monthly. Retries do not count as additional executions. Track usage in **Settings > Usage & Quota**. Usage resets monthly on your cycle date. **What counts as a test execution for billing?** A test execution is one test case run: - Each test case counts as one execution (skipped tests are excluded) - **Retries do not count separately** - A flaky test with 2 retries still counts as 1 execution - Artifacts do not affect execution count **What happens if I exceed my plan limits?** - Usage is tracked monthly and resets on your billing cycle date - Overage, if applicable, is billed on the next invoice - Upgrade if you consistently hit limits **What happens if I cancel my subscription?** - Access continues until the current billing period ends - No future charges after cancellation - The organization moves to the Community plan - Retention and limits fall back to the Community plan **How do I upgrade or downgrade my plan?** 1. Go to **Manage Billing** in your organization 2. Click **View All Plans** 3. Select the plan 4. Confirm the change Upgrades typically apply immediately. Downgrades typically take effect at the end of the current billing period. ## Still Have Questions? - [Discord](https://discord.gg/hGY9kqSm58): Join the community for questions and updates. - [Email](mailto:support@testdino.com): Contact us at support@testdino.com --- ## TestDino Billing & Pricing Plans > Source: https://docs.testdino.com/pricing > Description: Compare TestDino pricing plans for Playwright test reporting.
Understand usage limits, test execution metering, and billing cycles for your team. TestDino bills based on monthly test executions. This page covers how usage is measured, how billing cycles work, and how to evaluate which plan fits your team. ## Quick Reference | Topic | Description | | :--- | :--- | | [How Usage Is Measured](#how-usage-is-measured) | What counts as a test execution | | [Choosing the Right Plan](#choosing-the-right-plan) | Estimate usage and pick a plan | | [Billing Cycles](#billing-cycles) | Monthly, annual, and trial billing | | [What Happens at the Limit](#what-happens-at-the-limit) | Behavior when executions run out | | [Execution Pool Allocation](#execution-pool-allocation) | Distribute limits across projects | | [Upgrading and Downgrading](#upgrading-and-downgrading) | Changing plans mid-cycle | | [Cancellation](#cancellation) | Ending your subscription | | [Invoices and Payment](#invoices-and-payment) | Payment methods and invoice access | | [FAQ](#faq) | Common billing questions | ## How Usage Is Measured TestDino meters usage by **test executions**. One test execution is one test case (`test()` or `it()` block) that runs and reports a result to TestDino via the reporter. **What counts:** - Each test that runs and reports a result (pass, fail, or flaky) - A flaky test counts as **one execution** regardless of how many retries it took - Reruns of the same test within a single run count as one execution **What does not count:** - Skipped tests (`it.skip`, `test.skip`) - Retry attempts on the same test - Attachments (screenshots, videos, traces) do not affect execution count ### Example A test suite with 200 test cases. 5 tests are skipped. 3 tests are flaky and each retried 3 times. ``` Executions counted: 195 (200 total - 5 skipped) Retries: not counted separately Attachments: not counted ``` The 3 flaky tests each count as 1 execution, not 3. ## Choosing the Right Plan Estimate your monthly executions: ``` Monthly executions = (number of test cases) x (CI runs per month) ``` | Scenario | Tests | CI Frequency | Monthly Executions | Recommended Plan | | :--- | :--- | :--- | :--- | :--- | | Solo developer | 100 | 2x/day | 6,000 | Professional | | Small team | 300 | 3x/day | 27,000 | Team | | Growing team | 500 | 4x/day | 60,000 | Team | | Large team | 1,000+ | 5x/day | 150,000+ | Enterprise | ### Plan limits at a glance | | Community (Free) | Professional | Team | Enterprise | | :--- | :--- | :--- | :--- | :--- | | **Monthly price** | Free | $49/mo | $99/mo | Custom | | **Annual price** | Free | $39/mo, billed yearly | $79/mo, billed yearly | Custom | | **Monthly executions** | 5,000 | 25,000 | 75,000 | Custom | | **Team members** | 1 | 3 | 30 | Custom | | **Projects** | 1 | 3 | 5 | Custom | | **Data retention** | 14 days | 90 days | 365 days | Custom | | **Artifact storage** | 1 GB | 5 GB | 10 GB | Unlimited | > **Tip:** Annual plans save approximately 20% compared to monthly billing. ## Billing Cycles **Monthly:** Subscriptions renew on the same date each month. Usage resets at the start of each billing period. **Annual:** Annual subscriptions are billed once per year at a discounted rate. Usage resets on a 30-day rolling window within the annual term. **Free Trial:** New customers on paid plans receive a **14-day free trial** with full plan features. A payment method is required to start the trial. The subscription converts to a paid plan automatically when the trial ends. One trial per organization.
## What Happens at the Limit When your organization reaches the monthly execution limit: ### Test submissions blocked New test results are blocked for the remainder of the billing period. ### Existing data stays accessible Dashboards, reports, and historical test data remain available. ### Usage notifications Email notifications are sent at 50%, 75%, 90%, and 100% usage thresholds. To continue submitting tests, upgrade to a higher plan. The upgrade takes effect immediately. ## Execution Pool Allocation Paid plans include a monthly execution pool shared across all projects in your organization. Organization admins control how this pool is distributed. | Concept | Description | | :--- | :--- | | Total pool | Monthly executions included in your plan | | Allocated | Executions assigned to specific projects | | Unallocated | Remaining executions available for assignment or auto-borrow | ### Transfer between projects Move executions from one project to another or from the unallocated pool. Transfers take effect immediately and do not reset until the next billing cycle. ### Auto-borrow When enabled, projects that exhaust their allocation automatically borrow from the unallocated pool. Test runs continue without interruption. - Auto-borrow is configurable per organization - Borrowed executions reduce availability for other projects Organization owners, admins, and billing roles can transfer limits and configure auto-borrow. Members and viewers can view pool allocation but cannot modify it. See [Test Limits](/platform/billing-and-usage/test-limits) for the full management interface. ## Upgrading and Downgrading **Upgrading:** Upgrade at any time from **Billing & Usage > Change Plan**. - The new plan takes effect immediately - Usage limits and feature access update instantly - If upgrading from a trial, the trial ends and billing begins Plan order: Community (Free) > Professional > Team > Enterprise **Downgrading:** Downgrade from **Billing & Usage > Change Plan**. - The downgrade takes effect at the end of the current billing period - Current plan features remain available until the period ends - If your current usage exceeds the lower plan's limits (members, projects), reduce usage before the downgrade activates > **Warning:** Test data exceeding the lower plan's retention period is deleted according to [data retention policies](/data-privacy/data-retention). Upgrading later does not restore deleted data. ## Cancellation Cancel your subscription from **Billing & Usage > Cancel Subscription**. - Cancellation can be immediate or at end of billing period - The organization reverts to the free Community plan after cancellation - To resume service after cancellation, start a new subscription ## Invoices and Payment ### Payment methods TestDino processes payments through Razorpay. Accepted methods: | Method | Examples | | :--- | :--- | | Credit and debit cards | Visa, Mastercard, RuPay | | Net banking | All major banks | | UPI | Google Pay, PhonePe, etc. | | Digital wallets | Supported wallets via Razorpay | | Bank transfer | Direct bank transfer | No card data is stored by TestDino. ### Invoices Access all invoices from **Billing & Usage > Invoices**. Each invoice shows the plan, billing period, amount, and payment status. ### Billing contacts Configure who receives billing notifications in **Billing & Usage > Billing Emails**. Notifications include payment confirmations, failure alerts, invoice receipts, and usage threshold alerts. > **Note:** Billing notifications are separate from operational notifications.
Operational notifications (test failures, alerts) are sent to members with Admin role by default. ## FAQ **How do I estimate which plan I need?** Multiply your test count by the number of CI runs per month. For example, 300 tests running 3 times per day is 27,000 executions/month, which fits the Team plan (75,000 limit). **Do retries and flaky tests inflate my usage?** No. A flaky test counts as one execution regardless of retry attempts. Retries within the same run do not add to the count. **Do screenshots, videos, or traces affect billing?** No. Attachment volume does not affect execution count. Artifact storage is tracked separately and included in your plan. **What happens if I exceed my storage limit?** New artifacts are rejected when storage is full. Existing artifacts remain accessible. Storage is freed automatically when data expires per your plan's retention period. **Can I change plans in the middle of a billing cycle?** Yes. Upgrades take effect immediately with updated limits. Downgrades take effect at the end of the current billing period. **Is the free plan limited in features?** The Community plan includes flaky test detection, quality metrics, failure categorization, and a dashboard summary. It does not include PR features, Slack/Jira integrations, or team management. **Can I switch between monthly and annual billing?** Contact **support@testdino.com** to switch billing cycles. Changes take effect at the end of the current billing period. **What happens if my payment fails?** TestDino retries the payment automatically. If the payment remains unpaid past the due date, the subscription is marked past due. Resolve payment issues in **Billing & Usage** to restore service. **Can I pause my subscription instead of cancelling?** Yes. Contact **support@testdino.com** to pause your subscription. Features are inactive while paused. Resume at any time. **Who can manage billing?** Organization owners and members with the `org_billing` role can access billing settings and manage subscriptions. ## Related Monitor usage, limits, invoices, and retention. - [Billing & Usage](https://docs.testdino.com/platform/billing-and-usage/overview): Monitor subscription status and current usage - [Test Limits](https://docs.testdino.com/platform/billing-and-usage/test-limits): Allocate execution limits across projects - [Invoices](https://docs.testdino.com/platform/billing-and-usage/invoices): View, download, and filter billing invoices - [Data Retention](https://docs.testdino.com/data-privacy/data-retention): How long test data and artifacts are retained --- ## TestDino Support and Contact Channels > Source: https://docs.testdino.com/support > Description: Contact the TestDino support team for help with setup, integrations, billing, or product questions via email or community channels. The TestDino support team is available to help with setup, troubleshooting, and platform questions. ## Support Channels Reach the team by email or join Discord. - [Email](mailto:support@testdino.com): Reach us at **support@testdino.com** for technical issues, account questions, and follow-ups. - [Discord](https://discord.gg/hGY9kqSm58): Join the community for discussions, questions, and updates. ## Support Hours | Detail | Info | | :--- | :--- | | Hours | 9:00 AM - 6:00 PM IST, Monday - Friday | | Response time | Within 24 hours for all channels | | Priority support | Available for Team and Enterprise plans | The team is distributed across timezones and responds to messages as quickly as possible. 
## Reporting Issues When submitting a technical issue, include as much of the following as possible to speed up resolution. ### Describe the issue Explain what happened, what you expected, and how often it occurs (every run, intermittent, or one-time). ### Include identifiers Provide the **Test Run ID** or **Dashboard URL** for the affected run. Include the project name if not obvious from the URL. ### Share environment details Include your Playwright version, Node.js or Python version, CLI version (`tdpw` or `testdino`), and CI provider. ### Attach evidence Add screenshots, error messages with full stack traces, or CI execution logs. Redact sensitive data (tokens, credentials) before sharing. ### What to Include | Category | Details | | :--- | :--- | | Issue frequency | Every run, intermittent, or one-time | | Framework | Playwright version, Node.js/Python version | | CLI version | `npx tdpw --version` or `testdino --version` | | Run ID | Test Run ID or dashboard URL | | Error output | Full error message and stack trace | | CI logs | Pipeline logs with sensitive data redacted | | Configuration | `playwright.config.js` or `playwright.config.ts` settings | ## CLI Debug Mode If the support team requests debug logs, enable verbose output: **Node.js:** ```bash npx tdpw upload ./playwright-report --token="your-token" --verbose ``` **Python:** ```bash testdino upload ./test-results --token="your-token" --verbose ``` Share the full output with the support team. Redact API tokens before sharing. ## Feature Requests Submit feature requests through: - **Email** at [support@testdino.com](mailto:support@testdino.com) - **Discord** in the feature-requests channel Include the problem you are trying to solve, not only the feature you want. Context helps the team prioritize and design the right solution. ## Security Issues Report security vulnerabilities directly to **support@testdino.com**. Do not report security issues through public channels. ## Resources FAQs, changelog, and getting started. - [FAQs](https://docs.testdino.com/faqs): Answers to common questions about setup, features, and billing - [Changelog](https://changelog.testdino.com): Latest updates, features, and fixes - [Getting Started](https://docs.testdino.com/getting-started): Step-by-step setup guide for new users --- ## TestDino Changelog and Release Notes > Source: https://docs.testdino.com/changelog > Description: Latest TestDino product updates, new features, bug fixes, and Playwright reporting improvements. Stay current with every release and CLI change. Stay up to date with product updates, new features, and bug fixes across TestDino. ## Product Updates Stay informed about releases and planned work. - [Changelog](https://changelog.testdino.com): Latest features, improvements, and bug fixes. Subscribe via RSS to get notified. - [Roadmap](https://changelog.testdino.com/roadmap): View planned features and track development progress. ## Feedback & Feature Requests TestDino uses a community-driven feedback board. Browse existing requests, vote on features, and submit new ideas. - [Browse Feedback](https://changelog.testdino.com/feedback): Vote on feature requests and track what the community wants. - [Submit a Request](https://changelog.testdino.com/feedback): Share your ideas to help shape the product. > **Tip:** Vote on existing requests before submitting duplicates. Votes help prioritize what gets built next. ## CLI Changelogs Release history for the TestDino CLI packages.
| Package | Language | Changelog | | :--- | :--- | :--- | | `tdpw` | Node.js | [npm](https://www.npmjs.com/package/tdpw?activeTab=versions) | | `testdino` | Python | [PyPI](https://pypi.org/project/testdino/#history) | ## Stay Connected Join the community and follow TestDino updates. - [Discord](https://discord.gg/hGY9kqSm58): Community discussions and announcements - [X (Twitter)](https://x.com/testdinohq): Product updates and tips - [LinkedIn](https://www.linkedin.com/company/testdino): Company news and updates --- ## Playwright Best Practices Skill > Source: https://docs.testdino.com/ai/playwright-skill > Description: AI agent skill for writing, debugging, and maintaining Playwright tests using proven patterns, best practices, and TestDino integration workflows. The Playwright Best Practices Skill is a knowledge pack that AI coding agents (Claude Code, Cursor, VS Code Copilot, Gemini CLI) load on demand. It provides guidance for writing, debugging, and maintaining Playwright tests based on proven patterns. The skill follows the open [Agent Skills](https://agentskills.io/home) specification and works with any compatible tool. ## Installation ```bash npx skills add testdino-hq/playwright-skill ``` Install individual sub-skills when you need specific coverage: ```bash npx skills add testdino-hq/playwright-skill/core npx skills add testdino-hq/playwright-skill/ci npx skills add testdino-hq/playwright-skill/pom npx skills add testdino-hq/playwright-skill/migration npx skills add testdino-hq/playwright-skill/playwright-cli ``` No additional configuration is required. The skill activates automatically when your AI agent detects a Playwright-related task. ### Supported Tools | Tool | Link | | :--- | :--- | | Claude Code | [code.claude.com/docs/en/skills](https://code.claude.com/docs/en/skills) | | Cursor | [cursor.com/docs/context/skills](https://cursor.com/docs/context/skills) | | VS Code Copilot | [code.visualstudio.com/docs/copilot/customization/agent-skills](https://code.visualstudio.com/docs/copilot/customization/agent-skills) | | Gemini CLI | [geminicli.com/docs/cli/skills](https://geminicli.com/docs/cli/skills/) | ## How It Works An Agent Skill is a directory containing a `SKILL.md` manifest and reference documents. The manifest declares metadata, core instructions, and an index of guides. Reference documents contain topic-specific guidance the agent reads on demand. ``` testdino-hq/playwright-skill/ ├── core/ │ ├── SKILL.md │ ├── locators.md │ ├── assertions-and-waiting.md │ ├── authentication.md │ ├── network-mocking.md │ └── ... ├── ci/ │ ├── SKILL.md │ └── ... ├── pom/ │ ├── SKILL.md │ └── ... ├── migration/ │ ├── SKILL.md │ └── ... └── playwright-cli/ ├── SKILL.md └── ... ``` ### Progressive Disclosure Skills use a three-stage loading model to minimize context window usage: | Stage | What Loads | Token Cost | | :--- | :--- | :--- | | Idle | Skill name and one-line description | ~50 tokens | | Activated | `SKILL.md` manifest with core principles and guide index | ~500 tokens | | Reference pull | Individual guide files, loaded selectively per task | ~200-800 tokens per guide | The agent determines which stage to enter based on task relevance. A task like "fix this flaky Playwright test" causes the agent to load the manifest, then selectively read `debugging.md` and `flaky-tests.md` rather than every guide in the skill. 
This differs from loading full documentation into a system prompt (30,000+ tokens upfront regardless of relevance) or RAG retrieval (variable quality, no opinionated guidance). ## Core Principles The `SKILL.md` manifest in the core skill defines ten principles applied to all generated code: 1. Prefer accessible selectors (`getByRole`, `getByLabel`) over CSS/XPath 2. Never use hard timeouts. Wait on conditions 3. Use web-first assertions that auto-retry 4. Isolate every test. No shared state 5. Centralize URLs via `baseURL` 6. Retries: `2` in CI, `0` locally 7. Traces: `'on-first-retry'` 8. Share state via fixtures, not module globals 9. One behavior per test 10. Mock external dependencies only. Never mock your own app ## Coverage ### Core Locators, locator strategy, assertions and waiting, fixtures and hooks, forms and validation, drag and drop, file operations, test data management, test organization, authentication flows, OAuth and SSO, API testing, CRUD testing, network mocking, and when-to-mock decision framework. ### Framework Recipes React, Next.js (App Router and Pages Router), Vue, Angular. Framework-specific selectors, routing, SSR/hydration patterns. ### Visual and Accessibility Visual regression with `toHaveScreenshot()`, axe-core integration, keyboard navigation, WCAG compliance. ### Debugging Trace viewer, UI mode, error index, flaky test diagnosis, common pitfalls, error and edge case handling. ### Advanced WebSockets, Service Workers, PWAs, Canvas/WebGL, Electron, browser extensions, iframes, Shadow DOM, i18n, multi-user collaboration, security testing, performance testing. ### CI/CD | Provider | Coverage | | :--- | :--- | | GitHub Actions | Workflow configuration, artifact upload, sharding with matrix strategy | | GitLab CI | Pipeline stages, browser caching, parallel jobs | | CircleCI | Orb usage, test splitting, resource classes | | Azure DevOps | YAML pipelines, hosted agent configuration | | Jenkins | Declarative pipelines, Playwright in Docker | | Docker | Official Playwright image, dependency caching, multi-stage builds | | Cross-cutting | Test sharding (`--shard=N/M`), reporting aggregation, code coverage collection | ### Page Object Model POM implementation patterns with a decision guide for choosing between POMs, fixtures, and helper functions. Includes trade-off analysis. ### Migration API mapping tables and incremental migration strategies: - **Cypress to Playwright** — `cy.get()` to `page.locator()`, `cy.intercept()` to `page.route()` - **Selenium/WebDriver to Playwright** — driver setup, element interaction, and wait strategy translation ### Playwright CLI Browser commands, request mocking, script execution, session and storage management, `codegen` for test generation, tracing, screenshots, video recording, and device emulation. ## Example Prompts The skill activates based on task inference. 
These prompts demonstrate what triggers the agent to load relevant guides:

| Prompt | Guides Loaded |
| :--- | :--- |
| "Write Playwright tests for a multi-step form wizard with validation" | Locator strategy, form validation, test organization |
| "Optimize our CI pipeline time with test sharding" | CI/CD, sharding, matrix-based parallel execution |
| "This checkout test passes locally but times out in CI" | Debugging, flaky test diagnosis, CI environment differences |
| "Test the real-time collaboration feature in our app" | Multi-user testing, WebSocket, SSE interception |
| "Migrate our Cypress login tests to Playwright" | Migration mapping tables, authentication patterns |
| "Add visual regression tests for dashboard components" | Visual testing, `toHaveScreenshot()`, CI artifact handling |

## Related

- [GitHub Repository](https://github.com/testdino-hq/playwright-skill): Full source and all guides. MIT licensed.
- [Agent Skills Specification](https://agentskills.io/home): The open standard defining how skills, manifests, and progressive disclosure work.
- [TestDino MCP](https://docs.testdino.com/mcp/overview): Connect AI assistants to your test data.

## TestDino CLI for Playwright Reporting

> Source: https://docs.testdino.com/cli/overview
> Description: Upload Playwright test results to TestDino from any CI provider or local machine. Choose between the Node.js or Python CLI based on your test stack.

The TestDino CLI uploads Playwright test results, caches test metadata for intelligent reruns, and retrieves previously failed tests. It collects metadata, artifacts, and execution data from local runs and CI pipelines.

> **Note:** Git initialization is required. The CLI reads commit hash, branch name, and author information from your repository (see the quick check below the CLI list).

## Choose Your CLI

- [Node.js](https://docs.testdino.com/cli/testdino-playwright-nodejs): Upload reports, cache results, and rerun failed tests. Use `npx tdpw upload` after tests run.
- [Python](https://docs.testdino.com/cli/python): Upload pytest-based Playwright reports. Use with `pytest-playwright` and `pytest-playwright-json`.
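If you are unsure whether your checkout carries the Git metadata the CLI reads, a quick sanity check with plain `git` commands (nothing TestDino-specific) confirms it before you upload:

```bash
# Verify the repo is initialized and has the metadata the CLI reads
git rev-parse --is-inside-work-tree        # should print "true"
git rev-parse --abbrev-ref HEAD            # branch name
git log -1 --pretty='%H %an <%ae>'         # commit hash and author
```

In shallow CI checkouts, fetch enough history for this metadata to resolve; the GitHub Actions examples later on this page use `fetch-depth: 0` for that reason.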
## Quick Start

**Node.js:**

```bash
npm install tdpw
```

### Configure Playwright reporters

```javascript playwright.config.js
reporter: [
  ['html', { outputDir: './playwright-report' }],
  ['json', { outputFile: './playwright-report/report.json' }],
]
```

### Run tests

```bash
npx playwright test
```

### Upload results

```bash
npx tdpw upload ./playwright-report --token="your-api-token" --upload-html
```

**Python:**

```bash
pip install pytest-playwright-json pytest-html testdino
```

### Run tests with JSON output

```bash
pytest \
  --playwright-json=test-results/report.json \
  --html=test-results/index.html \
  --self-contained-html
```

### Upload results

```bash
testdino upload ./test-results --token="your-api-token" --upload-full-json
```

## Commands

| Command | Description |
| :--- | :--- |
| `tdpw upload <directory>` | Upload Playwright reports with optional attachments |
| `tdpw cache` | Store test metadata for intelligent reruns |
| `tdpw last-failed` | Retrieve previously failed tests for selective reruns |

## Common Options

| Option | Description |
| :--- | :--- |
| `-t, --token <token>` | Authentication token (required) |
| `--environment <env>` | Environment tag (e.g., `staging`, `production`) |
| `--tag <tags>` | Comma-separated run tags (max 5) |
| `-v, --verbose` | Enable verbose logging |

## Environment Variables

| Variable | Description |
| :--- | :--- |
| `TESTDINO_TOKEN` | Authentication token |
| `TESTDINO_TARGET_ENV` | Default environment tag |
| `TESTDINO_RUN_TAGS` | Default comma-separated run tags |

## Related

- [View Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View uploaded test results in the dashboard
- [Rerun Failed Tests](https://docs.testdino.com/guides/rerun-failed-playwright-tests): CI workflow for rerunning only failures
- [Generate API Keys](https://docs.testdino.com/guides/generate-api-keys): Create and manage API tokens
- [Python CLI](https://docs.testdino.com/cli/python): Use TestDino with pytest

## Node.js CLI for Playwright

> Source: https://docs.testdino.com/cli/testdino-playwright-nodejs
> Description: Install tdpw to upload Playwright test results, cache metadata for intelligent reruns, and retrieve failed tests from previous runs.

`tdpw` uploads Playwright test reports to TestDino, caches test metadata for intelligent reruns, and retrieves previously failed tests for selective re-execution.

## Quick Reference

| Command | What it does |
| :--- | :--- |
| [`upload`](#upload) | Upload Playwright JSON and HTML reports to TestDino |
| [`cache`](#cache) | Store test execution metadata for intelligent reruns |
| [`last-failed`](#last-failed) | Retrieve previously failed tests for selective reruns |

## Prerequisites

- Node.js `>= 18.0.0`
- `@playwright/test` `>= 1.52.0`
- TestDino API token ([generate one](/guides/generate-api-keys))
- Git initialized repository (for commit and branch metadata)

## Installation

```bash
npm install tdpw
```

Or use without installing:

```bash
npx tdpw <command> --token="your-token"
```

---

## Upload

Uploads Playwright test reports to TestDino with optional attachments.
```bash
npx tdpw upload ./playwright-report --token="your-token"
```

With full attachments:

```bash
npx tdpw upload ./playwright-report --token="your-token" --upload-full-json
```

With tags and environment:

```bash
npx tdpw upload ./playwright-report --token="your-token" --environment="staging" --tag="regression,smoke"
```

### Upload Options

| Flag | Description | Default |
| :--- | :--- | :--- |
| `<directory>` | Directory containing Playwright reports | Required |
| `-t, --token <token>` | TestDino API token | Required |
| `--environment <env>` | Target environment tag | `unknown` |
| `--tag <tags>` | Comma-separated run tags (max 5) | None |
| `--upload-images` | Upload image attachments | `false` |
| `--upload-videos` | Upload video attachments | `false` |
| `--upload-html` | Upload HTML reports with screenshots and traces | `false` |
| `--upload-traces` | Upload trace files | `false` |
| `--upload-files` | Upload file attachments (.md, .pdf, .txt, .log) | `false` |
| `--upload-full-json` | Upload all attachments | `false` |
| `--json` | Output results as JSON for CI/CD pipelines | `false` |
| `-v, --verbose` | Enable verbose logging | `false` |

### Run Tags

Tags categorize entire test runs. Format: letters, numbers, hyphens, underscores, and dots only. Maximum 5 tags per run.

```bash
npx tdpw upload ./playwright-report --token="your-token" --tag="smoke,regression,v1.2.3"
```

### JSON Output for CI/CD

Use `--json` to capture structured output in CI pipelines:

```bash
RESULT=$(npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN" --json)
TEST_RUN_ID=$(echo "$RESULT" | jq -r '.data.testRunId')
URL=$(echo "$RESULT" | jq -r '.data.url')
```

Success response:

```json
{
  "success": true,
  "data": {
    "testRunId": "test_run_abc123",
    "url": "https://app.testdino.com/..."
  }
}
```

Error response:

```json
{
  "success": false,
  "error": {
    "code": "AUTH_ERROR",
    "message": "Invalid API key or unauthorized access"
  }
}
```

---

## Cache

Stores test execution metadata after Playwright runs complete. This metadata enables `last-failed` to identify which tests to rerun.

```bash
npx tdpw cache --token="your-token"
```

With a custom working directory:

```bash
npx tdpw cache --working-dir ./test-results --token="your-token"
```

### Cache Options

| Flag | Description | Default |
| :--- | :--- | :--- |
| `--working-dir <dir>` | Directory to scan for test results | Current directory |
| `--cache-id <id>` | Custom cache ID override | Auto-detected |
| `-t, --token <token>` | TestDino API token | Required |
| `-v, --verbose` | Enable verbose logging | `false` |

---

## Last Failed

Retrieves previously failed tests for selective reruns. Outputs test file paths that can be passed directly to `npx playwright test`.
```bash
npx tdpw last-failed --token="your-token"
```

Rerun only failed tests:

```bash
npx playwright test $(npx tdpw last-failed --token="your-token")
```

### Last Failed Options

| Flag | Description | Default |
| :--- | :--- | :--- |
| `--cache-id <id>` | Custom cache ID override | Auto-detected |
| `--branch <branch>` | Custom branch name override | Auto-detected |
| `--commit <commit>` | Custom commit hash override | Auto-detected |
| `-t, --token <token>` | TestDino API token | Required |
| `-v, --verbose` | Enable verbose logging | `false` |

### Intelligent Rerun Workflow

```bash
# Run all tests
npx playwright test

# Cache the results
npx tdpw cache --token="$TESTDINO_TOKEN"

# Upload the report
npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN" --upload-full-json

# On next run, rerun only failures
npx playwright test $(npx tdpw last-failed --token="$TESTDINO_TOKEN")
```

---

## Environment Variables

| Variable | Description |
| :--- | :--- |
| `TESTDINO_TOKEN` | Authentication token |
| `TESTDINO_TARGET_ENV` | Default environment tag |
| `TESTDINO_RUN_TAGS` | Default comma-separated run tags |

---

## Configure Playwright Reporters

Add JSON and HTML reporters to your Playwright config before uploading:

```javascript playwright.config.js
reporter: [
  ['html', { outputDir: './playwright-report' }],
  ['json', { outputFile: './playwright-report/report.json' }],
]
```

> **Note:** The HTML reporter **must** be listed before the JSON reporter. Playwright's HTML reporter clears its output directory on each run, so placing it first ensures `report.json` is not deleted.

---

## CI/CD Integration

**GitHub Actions:**

```yaml .github/workflows/test.yml
name: Playwright Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        run: npm ci
      - name: Install Playwright
        run: npx playwright install --with-deps
      - name: Run tests
        run: npx playwright test
      - name: Upload to TestDino
        if: always()
        env:
          TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }}
        run: npx tdpw upload ./playwright-report --upload-html
```

**GitHub Actions (JSON output):**

```yaml .github/workflows/test.yml
- name: Upload and capture results
  if: always()
  run: |
    RESULT=$(npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --json)
    URL=$(echo "$RESULT" | jq -r '.data.url')
    echo "TESTDINO_URL=$URL" >> $GITHUB_ENV
```

**GitLab CI:**

```yaml .gitlab-ci.yml
e2e-tests:
  image: mcr.microsoft.com/playwright:v1.52.0-jammy
  script:
    - npm ci
    - npx playwright test
    - npx tdpw upload ./playwright-report --upload-html
  variables:
    TESTDINO_TOKEN: $TESTDINO_TOKEN
```

**Jenkins:**

```groovy
sh 'npx playwright test'
sh 'npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN" --upload-html'
```

---

## What Gets Collected

| Category | Data |
| :--- | :--- |
| Git | Branch, commit hash, author, message, repository URL |
| CI | Provider, build ID, PR details |
| System | OS, CPU, memory, Node.js version |
| Playwright | Version, workers, projects, shard info |
| Artifacts | Screenshots, videos, traces, console output (when `--upload-html` or `--upload-full-json` used) |
| Annotations | `testdino:` annotations from test metadata ([guide](/guides/playwright-test-annotations)) |

---

## Troubleshooting

**Token is required but not provided**

Verify the token is set:

```bash
echo $TESTDINO_TOKEN
```

Pass it directly to confirm:

```bash
npx tdpw upload ./playwright-report --token="your-api-token"
```

Generate a new token
from [API Keys](/guides/generate-api-keys) if the issue persists.

**Execution limit reached**

Your account reached its monthly quota. Tests continue to run normally. Only uploads to TestDino pause until the quota resets. Upgrade your plan or wait for the monthly reset.

**report.json not found**

Ensure your Playwright config includes the JSON reporter:

```javascript
reporter: [
  ['html', { outputDir: './playwright-report' }],
  ['json', { outputFile: './playwright-report/report.json' }],
]
```

Confirm the output directory matches the path passed to `tdpw upload`.

**Enable debug logging**

```bash
npx tdpw upload ./playwright-report --verbose
```

## Related

- [View Test Runs](https://docs.testdino.com/platform/playwright-test-runs): Explore test results in the platform
- [Rerun Failed Tests](https://docs.testdino.com/guides/rerun-failed-playwright-tests): CI workflow for rerunning only failures
- [Python CLI](https://docs.testdino.com/cli/python): Use TestDino with pytest
- [Code Coverage](https://docs.testdino.com/guides/playwright-code-coverage): Collect coverage from Playwright tests

## Python CLI for Playwright

> Source: https://docs.testdino.com/cli/python
> Description: Install and configure the testdino Python package to upload pytest or Python-based Playwright test results directly to TestDino.

`testdino` is a Python CLI that uploads pytest-based Playwright reports, caches test metadata, and retrieves failed tests for reruns.

## Quick Reference

| Topic | Link |
| :--- | :--- |
| [Installation](#installation) | `pip install testdino` |
| [Quick Start](#quick-start) | Run tests, upload, cache |
| [Commands](#commands) | `upload`, `cache`, `last-failed` |
| [Configuration](#configuration) | Environment variables |
| [CI/CD](#cicd-integration) | GitHub Actions, GitLab CI, Jenkins |

## Prerequisites

- Python `>= 3.9`
- `pytest` with `pytest-playwright`
- `pytest-playwright-json` (generates the required JSON report)
- TestDino API token ([generate one](/guides/generate-api-keys))
- Git initialized repository (for commit and branch metadata)

## Installation

```bash
pip install pytest-playwright-json pytest-html testdino
```

---

## Quick Start

### Run tests with JSON output

```bash
pytest \
  --playwright-json=test-results/report.json \
  --html=test-results/index.html \
  --self-contained-html
```

### Upload the report

```bash
testdino upload ./test-results --token="your-token"
```

### Cache metadata for reruns

```bash
testdino cache --working-dir test-results --token="your-token"
```

> **Note:** The upload command requires a JSON report. Always run pytest with the `--playwright-json` flag.

---

## Commands

### `upload`

Upload Playwright test reports and artifacts to TestDino.

```bash
testdino upload <directory> --token="your-token"
```

**Upload flags:**

| Flag | Description |
| :--- | :--- |
| `--upload-images` | Upload screenshots |
| `--upload-videos` | Upload video recordings |
| `--upload-html` | Upload HTML reports |
| `--upload-traces` | Upload trace files |
| `--upload-files` | Upload file attachments (`.md`, `.pdf`, `.txt`, `.log`) |
| `--upload-full-json` | Upload all attachments |

```bash
# Upload with all artifacts
testdino upload ./test-results --token="your-token" --upload-full-json

# Upload with specific artifacts
testdino upload ./test-results --token="your-token" --upload-images --upload-videos

# Upload with environment tag
testdino upload ./test-results --token="your-token" --environment="staging"
```

### `cache`

Store test execution metadata after a run. Powers the `last-failed` command.
```bash
testdino cache --working-dir test-results --token="your-token"
```

| Option | Description | Default |
| :--- | :--- | :--- |
| `--working-dir <dir>` | Directory containing test results | Current directory |
| `--cache-id <id>` | Custom cache ID override | Auto-detected |

### `last-failed`

Retrieve cached test failures for reruns. Outputs test identifiers that pass directly to pytest.

```bash
# Print failed tests
testdino last-failed --token="your-token"

# Rerun only failed tests
pytest $(testdino last-failed --token="your-token")

# Failed tests for a specific shard
testdino last-failed --shard="2/5" --token="your-token"
```

| Option | Description | Default |
| :--- | :--- | :--- |
| `--branch <branch>` | Branch name override | Auto-detected |
| `--commit <commit>` | Commit hash override | Auto-detected |
| `--shard <shard>` | Shard specification (e.g., `2/5`) | None |
| `--environment <env>` | Environment name for filtering | None |
| `--cache-id <id>` | Custom cache ID override | Auto-detected |

### Global Options

These options apply to all commands.

| Option | Description |
| :--- | :--- |
| `-t, --token <token>` | TestDino API token (required) |
| `-v, --verbose` | Enable verbose logging |

---

## Configuration

Set environment variables to avoid passing flags with each command.

| Variable | Description |
| :--- | :--- |
| `TESTDINO_TOKEN` | Authentication token |
| `TESTDINO_TARGET_ENV` | Default environment tag |

```bash
export TESTDINO_TOKEN="your-api-token"
export TESTDINO_TARGET_ENV="staging"
```

---

## CI/CD Integration

**GitHub Actions:**

```yaml .github/workflows/test.yml
name: Playwright Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install pytest pytest-playwright pytest-playwright-json pytest-html testdino
          playwright install chromium --with-deps
      - name: Run tests
        run: |
          pytest \
            --playwright-json=test-results/report.json \
            --html=test-results/index.html \
            --self-contained-html
      - name: Cache metadata
        if: always()
        run: testdino cache --working-dir test-results --token="${{ secrets.TESTDINO_TOKEN }}"
      - name: Upload reports
        if: always()
        env:
          TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }}
        run: testdino upload ./test-results --token="${{ secrets.TESTDINO_TOKEN }}" --upload-full-json
```

**GitHub Actions (Sharded):**

```yaml .github/workflows/test-sharded.yml
name: Playwright Tests (Sharded)
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shardIndex: [1, 2, 3, 4]
        shardTotal: [4]
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: |
          pip install pytest pytest-playwright pytest-playwright-json pytest-html testdino
          playwright install chromium --with-deps
      - name: Run tests
        shell: bash
        env:
          TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }}
        run: |
          mkdir -p test-results
          if [[ "${{ github.run_attempt }}" -gt 1 ]]; then
            testdino last-failed \
              --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} \
              --token="$TESTDINO_TOKEN" > last-failed-flags.txt
            FAILED_TESTS="$(cat last-failed-flags.txt | tail -1)"
            if [[ -z "$FAILED_TESTS" ]]; then
              echo "No failed tests found. Exiting."
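              # No failures are cached for this shard on the retry attempt,
              # so exit 0 to keep the rerun job green instead of failing it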
              exit 0
            fi
            eval "pytest $FAILED_TESTS \
              --playwright-json=test-results/report.json \
              --html=test-results/index.html \
              --self-contained-html -v" || true
            exit 0
          fi
          pytest \
            --playwright-json=test-results/report.json \
            --html=test-results/index.html \
            --self-contained-html -v || true
      - name: Cache metadata
        if: always()
        run: testdino cache --working-dir test-results --token="${{ secrets.TESTDINO_TOKEN }}"
      - name: Upload reports
        if: always()
        run: testdino upload ./test-results --token="${{ secrets.TESTDINO_TOKEN }}" --upload-full-json
```

**GitLab CI:**

```yaml .gitlab-ci.yml
image: python:3.11

stages:
  - test

playwright-tests:
  stage: test
  script:
    - pip install pytest pytest-playwright pytest-playwright-json pytest-html testdino
    - playwright install chromium --with-deps
    - pytest --playwright-json=test-results/report.json --html=test-results/index.html --self-contained-html
    - testdino upload ./test-results --token="$TESTDINO_TOKEN" --upload-full-json
  when: always
```

**Jenkins:**

```groovy Jenkinsfile
pipeline {
  agent any
  environment {
    TESTDINO_TOKEN = credentials('testdino-token')
  }
  stages {
    stage('Test') {
      steps {
        sh 'pip install pytest pytest-playwright pytest-playwright-json pytest-html testdino'
        sh 'playwright install chromium --with-deps'
        sh 'pytest --playwright-json=test-results/report.json --html=test-results/index.html --self-contained-html'
        sh 'testdino upload ./test-results --token="$TESTDINO_TOKEN" --upload-full-json'
      }
    }
  }
}
```

## Related

Explore test results, optimize your CI pipeline, and manage API keys.

- [View Test Runs](https://docs.testdino.com/platform/playwright-test-runs): Explore test results in the platform
- [CI Optimization](https://docs.testdino.com/guides/playwright-ci-optimization): Optimize your CI pipeline
- [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Use TestDino with Playwright for Node.js
- [Generate API Keys](https://docs.testdino.com/guides/generate-api-keys): Create and manage API tokens

---

## TestDino Data Privacy Overview

> Source: https://docs.testdino.com/data-privacy/overview
> Description: How TestDino handles customer data from Playwright test runs, including collection, storage, redaction, and retention policies.

TestDino collects only the data required for test execution management, analytics, and platform administration. This section covers what data is collected, how it is protected, and how long it is retained.

## Quick Reference

| Section | Description |
| :--- | :--- |
| [Access to Customer Data](/data-privacy/access-to-customer-data) | All data categories collected during test execution, registration, and platform usage |
| [Data Redaction](/data-privacy/data-redaction) | Automatic detection and removal of secrets from trace files and test artifacts |
| [Data Retention](/data-privacy/data-retention) | Retention periods by subscription tier and data category |
| [Cloud Endpoints](/data-privacy/cloud-endpoints) | Internet-facing services, domains, and security configuration |

---

For security or data privacy inquiries, email **support@testdino.com** or use the [Discord](https://discord.gg/hGY9kqSm58) community.

---

## TestDino Access to Customer Data

> Source: https://docs.testdino.com/data-privacy/access-to-customer-data
> Description: Full breakdown of data categories TestDino collects during test execution, account registration, and product usage. Understand what is stored and why.

TestDino collects data for test execution management, result analytics, CI/CD integration, and platform administration.
This page lists every data category. ## Quick Reference | Category | Description | | :--- | :--- | | [Team & Administration](#team--administration) | User profiles, authentication, account status | | [Organization Data](#organization-data) | Org profiles, members, roles, invitations | | [Source Attribution](#source-attribution) | UTM parameters, ad tracking, referrers | | [Test Results](#test-results) | Run metadata, test cases, steps, flaky tests | | [Artifacts](#artifacts) | Screenshots, videos, traces, logs | | [CI/CD Environment](#cicd-environment-data) | Git, PR, pipeline, system, sharding metadata | | [Third-Party Integrations](#third-party-integrations) | GitHub, Jira, Linear, Asana, Slack, Monday.com | | [Billing](#billing--subscription-data) | Plans, usage, payments, invoices | --- ## Team & Administration Data collected during registration, authentication, and account management. | Data Point | Purpose | | :--- | :--- | | Email address | Account identification, login, notifications | | First name, last name | Display name across the platform | | Profile picture URL | User avatar | | Password | Authentication (bcrypt hashed, 12 salt rounds, never stored in plaintext) | | Google OAuth ID | SSO authentication via Google | | Auth provider | Tracks authentication method: `email`, `google`, or `both` | | Last login timestamp | Security auditing and session management | | Email verification status | Account verification compliance | | Terms accepted at | GDPR: records when user accepted terms | | Policy version | GDPR: records which privacy policy version was accepted | | Access IP addresses | Rate limiting, security monitoring, audit trails | | Failed login attempts | Account lockout (locks after 5 failed attempts for 30 minutes) | ## Organization Data | Data Point | Purpose | | :--- | :--- | | Organization name, description | Identification | | Logo URL | Branding (stored in Azure Blob Storage) | | Website URL | Organization profile | | Owner job role | Onboarding context | | Industry | Onboarding context | | Team size | Capacity planning (`1`, `2-10`, `11-50`, `51-100`, `101-250`, `250+`) | | Member list | Access control: user ID, role, invite timestamp, join timestamp | | Member roles | RBAC: `org_owner`, `org_admin`, `org_member`, `org_billing`, `org_viewer` | | Pending invites | Email, role, invitation timestamp for unaccepted invitations | | External member access | Time-limited access for external collaborators (default 30 days, max 365 days) | | Ownership transfer history | Audit trail for ownership changes | ## Source Attribution Collected during registration for marketing attribution. Stored in the user profile. | Data Point | Purpose | | :--- | :--- | | UTM parameters | `utm_source`, `utm_medium`, `utm_campaign`, `utm_term`, `utm_content` | | Ad tracking IDs | `gclid` (Google), `fbclid` (Facebook), `ad_group`, `keyword`, `placement`, `creative` | | Referrer URL | Traffic source attribution | | Entry/exit path | User journey tracking | | Session ID | Attribution session correlation | | Landing page variant | A/B test tracking | | Tracking version | Attribution schema version for backward compatibility | ## Test Results Data collected when test runs are submitted via the TestDino reporter. 
### Run-Level Data | Data Point | Purpose | | :--- | :--- | | Run counter | Unique sequential identifier per project | | Status | `pending`, `running`, `completed`, `failed`, `cancelled`, `interrupted` | | Start/end timestamps | Duration calculation and timeline display | | Total duration | Performance tracking | | Test statistics | Total, passed, failed, skipped, flaky, timed out, interrupted counts | | Total attempts | Retry tracking | | Custom tags | User-defined categorization | ### Test Case Data | Data Point | Purpose | | :--- | :--- | | Test title | Identification | | Test status | Pass/fail/skip result | | Duration | Performance measurement | | Attempts count | Retry tracking | | File location | Source file path, line number, column number | | Tags | User-defined categorization | ### Test Steps | Data Point | Purpose | | :--- | :--- | | Step title | Identification | | Step category | `hook`, `fixture`, `pw:api`, `test.step` | | Duration | Performance measurement | | Location | File path, line, column | | Parent step reference | Nested step hierarchy | | Error details | Error message, stack trace, location, code snippet | | Annotations | Type and description metadata | ### Flaky Test Data | Data Point | Purpose | | :--- | :--- | | Flaky test identifiers | Tests that failed then passed on retry | | Retry count before passing | Flakiness severity measurement | | Error categorization | `assertion failure`, `element not found`, `timeout`, `network`, `other` | ## Artifacts Test artifacts uploaded during execution. | Artifact Type | Storage | Purpose | | :--- | :--- | :--- | | Screenshots | Azure Blob Storage | Visual test evidence and debugging | | Video recordings | Azure Blob Storage | Test execution replay | | Playwright traces | Azure Blob Storage | Step-by-step execution debugging | | Console output / logs | Azure Blob Storage | Runtime log inspection | | Inline attachments | MongoDB (base64-encoded) | Small embedded test artifacts | Storage path format: `{projectId}/{testRunId}/{artifactType}/` Access is controlled via time-limited SAS tokens (48-hour expiry) with least-privilege permissions. ## CI/CD Environment Data Metadata collected from the CI/CD environment where tests run. ### Version Control (Git) | Data Point | Purpose | | :--- | :--- | | Commit hash (SHA) | Unique commit identification, indexed for lookup | | Commit message | Context for the change being tested | | Commit author name | Attribution | | Commit author email | Attribution | | Commit author ID | GitHub user ID for deduplication | | Commit timestamp | Timeline tracking | | Branch name | Environment mapping, filtering | | Repository name and URL | Source identification | ### Pull Request Data | Data Point | Purpose | | :--- | :--- | | PR ID | Unique identification | | PR title | Context display | | PR URL | Link back to source | | PR status | `open`, `draft`, `ready_for_review`, `changes_requested`, `approved`, `merged`, `closed` | ### CI Pipeline Data | Data Point | Purpose | | :--- | :--- | | CI provider | GitHub Actions, GitLab CI, Jenkins, CircleCI, etc. 
| | Pipeline ID | Unique pipeline identification | | Pipeline name and URL | Display and linking | | Build number | Build identification | | Trigger type | `manual`, `push`, `pull_request`, `scheduled` | | Environment name and type | `production`, `staging`, `dev` classification | ### System Metadata | Data Point | Purpose | | :--- | :--- | | Hostname | Runner identification | | CPU count and model | Hardware profiling | | Memory | Resource capacity | | Operating system | Platform identification | | Node.js version | Runtime version | | Playwright version | Framework version | ### Test Framework Configuration | Data Point | Purpose | | :--- | :--- | | Framework name and version | Compatibility tracking | | Config file path | Project configuration reference | | Root/test directories | Project structure | | Timeout, retries, workers | Execution settings | | Parallelization settings | Concurrency configuration | | Browser settings | Name, viewport, headless mode, trace/screenshot/video config | | Project configurations | Per-project overrides | ### Sharding Metadata | Data Point | Purpose | | :--- | :--- | | Shard count | Total shards expected | | Received/completed shards | Progress tracking | | Per-shard status and timing | Individual shard monitoring | | Per-shard statistics | Distributed test result aggregation | ## Third-Party Integrations TestDino integrates with external services for CI/CD workflows, issue tracking, and notifications. Access is scoped to minimum required permissions. ### GitHub | Access Scope | Details | | :--- | :--- | | Installation ID | App installation identification | | Repository access | Repository ID, name, full name, privacy status | | PR comments | Posts test result summaries on pull requests (configurable per branch) | | Commit status comments | Posts status on commits (configurable) | | CI checks | Pass rate threshold (default 90%), mandatory tags, flaky handling (`strict`/`neutral`) | | Push webhooks | Receives push events (verified via `X-Hub-Signature-256`) | | Repository data | No source code access. Limited to metadata and status posting. 
| ### Jira | Access Scope | Details | | :--- | :--- | | Connection type | User-level personal API tokens | | Access | Issue creation from test failures | | Data stored | Access token, refresh token (encrypted), token expiry, workspace ID | ### Linear | Access Scope | Details | | :--- | :--- | | Connection type | User-level personal tokens | | Access | Issue tracking from test failures | | Data stored | Access token, refresh token (encrypted), token expiry | ### Asana | Access Scope | Details | | :--- | :--- | | Connection type | User-level personal tokens | | Access | Task creation from test failures | | Data stored | Access token, refresh token (encrypted), token expiry | ### Slack | Access Scope | Details | | :--- | :--- | | Connection type | Webhook URLs and/or OAuth-based Slack App | | Access | Posts test failure notifications to configured channels | | Data stored | Webhook URL, OAuth tokens (encrypted), workspace/channel IDs | ### Monday.com | Access Scope | Details | | :--- | :--- | | Connection type | Board-level integration | | Access | Workflow item creation from test events | | Data stored | Access token (encrypted), board ID, workspace ID | ### Razorpay (Billing) | Access Scope | Details | | :--- | :--- | | Connection type | Server-to-server webhooks | | Access | Payment events only (`charged`, `failed`, `cancelled`) | | Verification | Webhook signature verification | | Data stored | Payment ID, subscription ID, amount, currency, status | > **Note:** No raw card data is stored. All payment processing is handled by Razorpay. ## Billing & Subscription Data | Data Point | Purpose | | :--- | :--- | | Plan tier | `free`, `pro`, `team`, `enterprise` | | Billing cycle | `monthly`, `annual` | | Subscription status | `active`, `trialing`, `paused`, `cancelled`, `cancelling`, `pending_cancellation` | | Billing email | Invoice delivery | | Last payment date | Payment tracking | | Usage metrics | Executions used, per-project allocations, borrowed/lent amounts | | Usage alert thresholds | Notifications at 50%, 75%, 90%, 100% | | Payment records | Razorpay payment ID, amount, currency, status | | Invoices | Invoice number, amount, tax, status, issue/due/paid dates | ## What TestDino Does NOT Collect - Source code or repository contents - Raw credit card numbers or banking details (handled by Razorpay) - Personal browsing history outside of TestDino - Data from services beyond the integrations listed above ## Related Learn how data is redacted and how long it is retained. - [Data Redaction](https://docs.testdino.com/data-privacy/data-redaction): How secrets are detected and removed from artifacts - [Data Retention](https://docs.testdino.com/data-privacy/data-retention): Retention periods by tier and data category --- ## TestDino Data Redaction for Playwright > Source: https://docs.testdino.com/data-privacy/data-redaction > Description: TestDino automatically detects and scrubs secrets, tokens, and API keys from Playwright trace files and test artifacts before storing them. TestDino scans test artifacts for secrets and replaces them with masked values before they appear in the dashboard. This prevents accidental exposure of tokens, passwords, API keys, and credentials during debugging and review. > **Note:** Data Redaction is an Enterprise plan feature. Contact **support@testdino.com** to enable it. 
## Quick Reference

| Item | Description |
| :--- | :--- |
| [Detected Patterns](#detected-patterns) | API keys, tokens, passwords, connection strings, private keys |
| [Redaction Process](#how-redaction-works) | Detection, scrubbing, backup, and display |
| [Scope](#what-gets-redacted) | Traces, console output, errors, network logs, attachments |
| [Exclusions](#what-does-not-get-redacted) | Test titles, file paths, git metadata, timing data |

---

## How Redaction Works

### 1. Detection

When artifacts (traces, logs, console output) are uploaded, the system scans content for patterns matching sensitive information.

### Detected Patterns

| Pattern Type | Examples |
| :--- | :--- |
| API keys | `sk_live_*`, `sk_test_*`, `api_key_*`, `AKIA*` (AWS) |
| Authentication tokens | Bearer tokens, JWT tokens, OAuth access/refresh tokens |
| Passwords | Password fields in configuration, connection strings |
| Environment variables | `DATABASE_URL`, `SECRET_KEY`, `PRIVATE_KEY`, custom secrets |
| Connection strings | Database URIs containing credentials |
| Private keys | RSA/EC private key blocks, PEM-encoded certificates |
| Cloud credentials | AWS secret keys, GCP service account keys, Azure connection strings |

### 2. Scrubbing

Identified secrets are replaced with `*********`. No sensitive values persist in readable form.

```json before-redaction.json
{
  "api_key": "sk_live_123456789abcdef",
  "database_url": "mongodb+srv://admin:s3cr3tP@ss@cluster.mongodb.net/prod",
  "auth_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
```

```json after-redaction.json
{
  "api_key": "*********",
  "database_url": "*********",
  "auth_token": "*********"
}
```

### 3. Secure Backup

TestDino retains an encrypted backup of the original unredacted files for authorized audit and incident investigation.

- Encrypted at rest using AES-256
- Access restricted to designated security administrators
- Both versions follow the organization's configured [data retention policy](/data-privacy/data-retention)
- Backup access is logged for the audit trail

### 4. Display

When viewing or downloading artifacts in the dashboard:

- Detected secrets appear as `*********`
- Redaction applies across all artifact types: traces, logs, console output
- Original artifact structure and formatting are preserved. Only secret values are replaced.

## What Gets Redacted

| Artifact Type | Redaction Scope |
| :--- | :--- |
| Playwright traces | Environment variables, inline secrets, auth tokens in network request/response headers |
| Console output | Logged secrets, connection strings, token values |
| Test step errors | Secrets in error messages or stack traces |
| Network logs | Authorization headers, cookie values, API keys in URLs |
| Inline attachments | Secrets in base64-decoded attachment content |

## What Does NOT Get Redacted

- Test titles and descriptions (user-authored content)
- File paths and line numbers
- Non-secret environment metadata (OS, Node.js version, browser settings)
- Git metadata (commit hashes, branch names, author names)
- Test statistics and timing data

## Infrastructure-Level Log Redaction

TestDino uses structured logging with built-in sensitive data redaction at the infrastructure level. This runs independently of the artifact redaction feature and is active on all plans.

- Passwords, tokens, and API keys are stripped from application logs
- Audit logs record user ID, action, resource, and IP address

## Related

Review what data is collected and how long it is retained.
- [Access to Customer Data](https://docs.testdino.com/data-privacy/access-to-customer-data): Full list of data TestDino collects - [Data Retention](https://docs.testdino.com/data-privacy/data-retention): Retention periods by tier and data category --- ## TestDino Data Retention Policies > Source: https://docs.testdino.com/data-privacy/data-retention > Description: Retention periods for Playwright test data, trace files, screenshots, videos, and account data. Understand limits by subscription tier. TestDino applies retention policies based on subscription tier and data category. Data is automatically deleted after the retention period expires. > **Note:** Enterprise customers can configure custom retention periods. Custom retention may affect pricing. Contact **support@testdino.com** for details. ## Quick Reference | Category | Description | | :--- | :--- | | [By Subscription Tier](#retention-by-subscription-tier) | Artifacts, test results, test details, analytics | | [By Data Category](#retention-by-data-category) | Accounts, sessions, API keys, tokens, billing, audit logs | | [Automated Cleanup](#automated-cleanup) | Scheduled jobs and cascade deletion | | [GDPR Data Rights](#gdpr-data-rights) | Export, deletion, portability | --- ## Retention by Subscription Tier All test-related data (artifacts, results, details, analytics) follows the same retention period per tier. | Tier | Retention Period | | :--- | :--- | | Free | 14 days | | Pro | 90 days | | Team | 365 days | | Enterprise | Custom | This applies to: - **Artifacts**: Screenshots, videos, Playwright traces, console output - **Test Results**: Run metadata, test case results, pass/fail status, error details, flaky test data - **Test Details**: Execution steps, annotations, attachment metadata - **Analytics & Reports**: Aggregated metrics, trend data, flakiness rates, performance analytics ## Retention by Data Category | Data Category | Retention Period | Notes | | :--- | :--- | :--- | | User accounts | Until deletion | Soft-deleted on account deletion request | | Organization data | Until deletion | Retained while organization is active | | API keys | 30 days after expiry | Marked inactive, cleaned up periodically | | Sessions | 7 days | Cleaned up daily | | Verification tokens | 1 hour | Email verification and password reset tokens | | Integration tokens | Until revoked | OAuth tokens for third-party integrations | | Billing records | Per legal requirements | Payment and invoice records | | Git metadata | Matches test run retention | Commits and PR data tied to test run lifecycle | | Source attribution | Matches user account | UTM and ad tracking data stored with user profile | | Audit logs | 12 months | Security and compliance audit trail | ## Automated Cleanup TestDino runs scheduled jobs to enforce retention policies. | Job | Schedule | Action | | :--- | :--- | :--- | | Data retention cleanup | Daily, 3:00 AM UTC | Deletes expired test runs, suites, cases, and artifacts | | Session cleanup | Daily | Removes expired sessions (older than 7 days) | | API key cleanup | Every 5 minutes | Marks expired API keys as inactive | | External access cleanup | Daily, 3:00 AM UTC | Removes expired external member access and invitations | | Incomplete shard cleanup | Every 5 minutes | Recovers or marks orphaned shards after 10-minute timeout | | Orphaned run cleanup | Daily | Deletes test runs stuck in `pending`/`running` for 24+ hours | ### Cascade Deletion When a test run expires and is deleted: 1. Test run record is deleted from MongoDB 2. 
Associated test suites are cascade deleted 3. Associated test cases are cascade deleted 4. Artifacts (screenshots, videos, traces) are deleted from Azure Blob Storage 5. Cached analytics referencing the run are invalidated ## GDPR Data Rights ### Data Export Users can request a full export of personal data: - User profile information - Organization memberships - Project data - Test run history - Settings and preferences ### Account Deletion Users can request account deletion: - Account is soft-deleted (`isDeleted` flag set) - Personal data is anonymized per compliance requirements - Associated data is removed or anonymized per the retention schedule - Deletion is irreversible after processing ### Data Portability Test results and analytics can be exported via the TestDino API for migration or backup. ## Important Notes - Retention periods start from the creation date (e.g., test run completion date for results) - Data may be deleted at any point after the period expires. Deletion does not occur on the exact expiry date. - Upgrading your plan does not restore data already deleted under a previous plan's retention policy - Enterprise custom retention periods may affect subscription pricing ## Related Review what data is collected and which endpoints are used. - [Access to Customer Data](https://docs.testdino.com/data-privacy/access-to-customer-data): Full list of data TestDino collects - [Cloud Endpoints](https://docs.testdino.com/data-privacy/cloud-endpoints): Internet-facing services and security configuration --- ## TestDino Cloud Endpoints and Domains > Source: https://docs.testdino.com/data-privacy/cloud-endpoints > Description: Complete list of internet-facing TestDino services, domains, and IP ranges used to send Playwright test results and artifacts securely. All internet-facing TestDino services and their security configuration. ## Quick Reference | Section | Description | | :--- | :--- | | [Core Services](#core-services) | Dashboard, API, reporter, WebSocket | | [Authentication & Billing](#authentication--billing) | Auth endpoints, payment webhooks | | [Integrations](#integration-services) | GitHub webhooks, third-party connectors | | [Internal Services](#internal-services) | Health checks, artifact storage | | [Network Security](#network--security) | CORS, TLS, rate limiting, security headers | | [Firewall Config](#firewall-configuration) | Domains to allowlist | --- ## Core Services | Endpoint | Description | | :--- | :--- | | `app.testdino.com` | Web application: dashboard, test results, analytics, settings | | `api.testdino.com` | Primary API: authentication, test data, project management, client-server communication | | `api.testdino.com/api/reports/playwright` | Reporter endpoint: receives test results via API key authentication | | `api.testdino.com/stream` | WebSocket: real-time test result updates | ## Authentication & Billing | Endpoint | Description | | :--- | :--- | | `api.testdino.com/api/auth` | Login, registration, OAuth callbacks, password reset, email verification | | `api.testdino.com/api/v1/webhooks/razorpay` | Payment webhook: receives events from Razorpay (signature-verified) | ## Integration Services | Endpoint | Description | | :--- | :--- | | `api.testdino.com/api/integrations/v1/github/webhook` | GitHub webhook: push events and PR updates (verified via `X-Hub-Signature-256`) | | Integration service (internal) | Manages Jira, Linear, Asana, Slack, Monday.com connections. Not internet-facing. Accessed via the primary API. 
| ## Internal Services | Service | Description | | :--- | :--- | | `api.testdino.com/health` | Health check for monitoring and load balancer probes | | Azure Blob Storage | Artifact storage: screenshots, videos, traces. Access via time-limited SAS tokens (48-hour expiry). | ## Network & Security ### CORS The API enforces strict CORS policies: - Only explicitly configured origins are allowed (no wildcard `*`) - Origins are defined per environment via `CORS_ORIGIN` configuration ### TLS - All endpoints enforce HTTPS (TLS 1.2+) - HTTP requests redirect to HTTPS - Certificates are managed via hosting infrastructure ### Rate Limiting | Endpoint | Limit | | :--- | :--- | | Login | 10 requests / 15 minutes | | Registration | 5 requests / 15 minutes | | Email verification | 3 requests / 60 minutes | | Password reset | 2 requests / 60 minutes | | Global API | 1,000 requests / 15 minutes | ### Security Headers All responses include headers via Helmet: - `Content-Security-Policy` (CSP) - `X-Frame-Options` - `X-Content-Type-Options` - `Strict-Transport-Security` (HSTS) ## Firewall Configuration If your organization uses network-level allowlisting, add these domains: | Domain | Required For | | :--- | :--- | | `app.testdino.com` | Dashboard access | | `api.testdino.com` | API, reporter submission, webhooks, WebSocket streaming | | Azure Blob Storage domain | Artifact uploads and downloads | ## Related Review what data is collected and how long it is retained. - [Access to Customer Data](https://docs.testdino.com/data-privacy/access-to-customer-data): Full list of data TestDino collects - [Data Retention](https://docs.testdino.com/data-privacy/data-retention): Retention periods by tier and data category --- ## TestDino MCP Server for Playwright > Source: https://docs.testdino.com/mcp/overview > Description: Connect Claude Code, Cursor, or Claude Desktop to TestDino via MCP. Query Playwright test results, failures, and analytics from your IDE. TestDino MCP Server connects Claude Code, Cursor, Claude Desktop, and other MCP-compatible clients to your TestDino workspace. Your assistant retrieves real test run data, artifacts, and manual test cases to investigate failures and manage test cases without switching contexts. MCP (Model Context Protocol) is an open standard that defines how an assistant talks to external tools through a consistent interface. [Video: TestDino MCP Overview](https://www.youtube.com/embed/Zoo3aAic6Tk?si=_mqMvD-uUFKVvXkY) ## Prerequisites * A TestDino account with at least one project * Node.js installed (so `npx` works) * An MCP client — Claude Code, Cursor, or Claude Desktop ## Quick Start ### Install the MCP server **Try without installing:** ```bash npx -y testdino-mcp ``` **Or install globally:** ```bash npm install -g testdino-mcp ``` > **Note:** Your MCP client starts the server as a local process. `npx` is easiest for evaluation, and a global install is convenient for daily use. ### Create a Personal Access Token You will paste this token into your MCP client config as `TESTDINO_PAT`. 1. Sign in to [app.testdino.com](https://app.testdino.com) 2. Click your profile → **User settings** ![User settings navigation showing profile menu and settings link](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/user-settings.png) 3. Go to **Personal access tokens** → **Generate new token** 4. 
Set a **Token Name** (e.g., `mcp server`) and **Expiration** (30 to 365 days) ![Generate new token form with name and expiration fields](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/generate-token.png) 5. Grant access to at least one project with **Test runs**, **Manual tests**, or both 6. Click **Generate token** and copy it immediately [Video: Personal access token generation and management](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/pat-token.mp4) > **Warning:** The token is shown once and cannot be retrieved later. Store it in a password manager. Never commit tokens to Git. Use separate tokens for separate contexts (local MCP, CI, scripts). Revoke tokens you no longer use. ### Configure your MCP client Pick your client and add the server entry. **Claude Code:** Run in your terminal: ```bash claude mcp add testdino -- npx -y testdino-mcp --pat your-token ``` No restart needed. Verify with: ```bash claude mcp list ``` You should see `testdino` in the output. **Cursor:** 1. Open **Cursor settings** → **Tools & MCP** → **Add MCP server** 2. Create or edit the config file: Common locations: * Project level: `.cursor/mcp.json` * Windows: `%APPDATA%\Cursor\mcp.json` * macOS / Linux: `~/.cursor/mcp.json` ```json .cursor/mcp.json { "mcpServers": { "TestDino": { "command": "npx", "args": ["-y", "testdino-mcp"], "env": { "TESTDINO_PAT": "your-token" } } } } ``` 3. Restart Cursor fully (quit and reopen) 4. Confirm **TestDino** appears in **Settings → MCP** **Claude Desktop:** 1. Open the config file: * macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` * Windows: `%APPDATA%\Claude\claude_desktop_config.json` * Linux: `~/.config/Claude/claude_desktop_config.json` 2. Add the TestDino server: ```json claude_desktop_config.json { "mcpServers": { "TestDino": { "command": "npx", "args": ["-y", "testdino-mcp"], "env": { "TESTDINO_PAT": "your-token" } } } } ``` 3. Restart Claude Desktop ### Validate the connection Ask your assistant: *"Run `health` and confirm it can see my organisations and projects."* `health` verifies three things: the server is running, the token is loaded, and TestDino is reachable. See [Troubleshooting](/mcp/troubleshooting) if it fails. 
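As an extra sanity check (a local sketch, not an official setup step), you can also start the server directly in a terminal to confirm the package resolves and the token is picked up. Since your MCP client launches it as a local stdio process, expect it to sit waiting for a client rather than print a prompt:

```bash
# Launch the MCP server manually; Ctrl+C to stop
TESTDINO_PAT="your-token" npx -y testdino-mcp
```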
## Example Prompts [Video: TestDino MCP Example](https://www.youtube.com/embed/Qg4e0kEHVK0?si=y6cMo5bMupe2s16O) ### Test Run Analysis | Prompt | What it does | | :--- | :--- | | *Show test runs within the last hour in project xyz* | Lists recent runs with status and metadata | | *Which are the most flaky test cases from the recent 10 runs?* | Ranks tests by flaky rate across runs | | *Show me all failed tests from the last run* | Filters test cases by failure status | | *List flaky tests on main from the last 7 days* | Flaky tests filtered by branch and time window | | *Show tests tagged @auth, @smoke that failed in production* | Filters by tag, status, and environment | | *Debug the test case "visual.spec.js" on development* | Root cause analysis with fix suggestions | | *Summarize which test broke in branch "main"* | Quick failure summary for a branch | ### Manual Test Case Management | Prompt | What it does | | :--- | :--- | | *List all critical priority test cases in the checkout suite* | Filters manual cases by priority and suite | | *Find manual test cases tagged @auth that are not automated* | Filters by tag and automation status | | *Show me the steps for test case TC-456* | Full test case details including steps | | *Create a new test case for password reset in Authentication* | Creates a case in the specified suite | | *Update TC-789 to change its status to deprecated* | Updates fields on an existing case | | *Create a test suite called "Payment Processing"* | Creates a new suite for organizing cases | ## Next Steps - [Tools Reference](https://docs.testdino.com/mcp/tools-reference): Full parameters, input schemas, and video demos for all 12 tools - [Troubleshooting](https://docs.testdino.com/mcp/troubleshooting): Error messages, fixes, and editor-specific solutions ## TestDino MCP Tools Reference Guide > Source: https://docs.testdino.com/mcp/tools-reference > Description: Complete reference for all tools available in the TestDino MCP server. Includes input schemas, output formats, and usage examples for each tool. Parameters and usage examples for each TestDino MCP tool. ## Tool Index | Category | Tool | Description | | :--- | :--- | :--- | | **Connection** | [`health`](#health) | Verify server status and token access | | **Analysis** | [`list_testruns`](#list_testruns) | List and filter test runs | | | [`get_run_details`](#get_run_details) | Full report for one or more runs | | | [`list_testcase`](#list_testcase) | List and filter test cases across runs | | | [`get_testcase_details`](#get_testcase_details) | Full debug context for a single test case | | | [`debug_testcase`](#debug_testcase) | Root cause analysis and fix recommendations | | **Test Case Management** | [`list_manual_test_cases`](#list_manual_test_cases) | Search manual test cases | | | [`get_manual_test_case`](#get_manual_test_case) | Fetch a manual test case with steps | | | [`create_manual_test_case`](#create_manual_test_case) | Create a manual test case | | | [`update_manual_test_case`](#update_manual_test_case) | Update fields on a manual test case | | | [`list_manual_test_suites`](#list_manual_test_suites) | List suite hierarchy | | | [`create_manual_test_suite`](#create_manual_test_suite) | Create a new suite | --- ## Connection ### `health` Verifies the server is running and validates your API token. Returns PAT validation status, connection status, organisation and project access, and available modules (Test runs, Test case management). 
After running `health`, tell the assistant which organisation or project you are working on. The assistant resolves and stores the `projectId`, so you do not need to specify it in future tool calls. No parameters required for this tool. **Example** [Video: Health check demonstration showing PAT validation and project access](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/health.mp4) --- ## Analysis ### `list_testruns` Lists runs with filtering by branch, environment, time window, author, and commit. > **Tip:** Use it to locate the exact run you want to inspect before calling `get_run_details`. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `project-id/name` | string | Yes | Project ID or name to list runs from. | | `by_branch` | string | No | Git branch name, e.g., `main`, `develop`. | | `by_time_interval` | string | No | `1h`, `2h`, `6h`, `12h`, `1d`, `3d`, `weekly`, `monthly`, or date range `YYYY-MM-DD, YYYY-MM-DD`. | | `by_author` | string | No | Commit author name; case-insensitive partial match. | | `by_commit` | string | No | Commit hash (full or partial). | | `by_environment` | string | No | Environment, e.g., `production`, `staging`, `development`. | | `limit` | number | No | Results per page (default: 20, max: 1000). | | `page` | number | No | Page number for pagination (default: 1). | | `get_all` | boolean | No | Retrieve all results up to 1000. | > **Note:** Filters can be combined. Pagination uses `page` and `limit`. `get_all=true` fetches up to 1000 records. **Example** [Video: List test runs filtered by branch and time](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/list-testruns.mp4) ### `get_run_details` Returns a full report for one run, including suite breakdowns, test cases, failure categories, rerun metadata, and raw JSON. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `project-id/name` | string | No\* | Project ID or name. Not required if `testrun_id` is provided. | | `testrun_id` | string | No | Single ID or comma-separated IDs for batch lookup (max 20). | | `counter + projectId/name` | number | No | Sequential run counter number. Requires project ID or name. | > **Note:** Provide `testrun_id` when you have a stable run identifier. Provide `counter` with project ID/name when your team references runs by sequence number. **Example** [Video: Get detailed run report with failure categories](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/get-run-details.mp4) ### `list_testcase` Lists test cases across runs with both run-level and case-level filters. How it works: 1. Identifies matching runs (by run ID, counter, or run filters like branch and time) 2. Returns test cases from those runs 3. Applies case-level filters (status, tag, browser, error category, runtime, artifacts) | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `by_testrun_id` | string | No\* | Single or multiple run IDs (comma-separated, max 20). | | `counter + projectId/name` | number + string | No\* | Run counter with project ID/name. Alternative to `by_testrun_id`. | | `by_status` | string | No | `passed`, `failed`, `skipped`, `flaky`. | | `by_spec_file_name` | string | No | Filter by spec file name. | | `by_error_category` | string | No | Error category name. | | `by_browser_name` | string | No | Browser name, e.g., `chromium`. | | `by_tag` | string | No | Tag or comma-separated tags. 
| | `by_total_runtime` | string | No | Time filter using operators, e.g., `<60`, `>100`. | | `by_artifacts` | boolean | No | `true` to only return cases with artifacts. | | `by_error_message` | string | No | Partial match on error message. | | `by_attempt_number` | number | No | Filter by attempt number. | | `by_branch` | string | No | Branch name; filters runs first, then returns cases. | | `by_time_interval` | string | No | `1d`, `3d`, `weekly`, `monthly`, or date range. | | `limit` | number | No | Results per page (default: 1000, max: 1000). | | `page` | number | No | Page number (default: 1). | | `get_all` | boolean | No | Get all results up to 1000. | \* Provide at least one: `by_testrun_id`, `counter + projectId/name`, or a run filter like `by_branch` with `by_time_interval`. **Example** [Video: List test cases filtered by status and tags](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/list-testcase.mp4) ### `get_testcase_details` Fetches full debug context for a single test case, including retries and artifacts. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `testcase_id` | string | No\* | Test case ID. Can be used alone. | | `testcase_name` | string | No\* | Test case name. Requires `testrun_id` or `counter + projectId/name`. | | `testrun_id` | string | No | Required when using `testcase_name` to identify the run. | | `counter + projectId/name` | number + string | No | Alternative to `testrun_id` when using `testcase_name`. | \* Provide either `testcase_id` alone, or `testcase_name` with `testrun_id` or `counter`. **Example** [Video: Get test case debug context with artifacts](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/get-testcase-details.mp4) ### `debug_testcase` Debugs a test case by aggregating historical execution and failure data across multiple runs. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID containing the test case. | | `testcase_name` | string | Yes | The name of the test case to debug. | The tool provides: * **Root cause analysis** — analyzes error messages, artifacts, stack traces, and error categories across historical runs * **Failure patterns** — identifies common error categories, messages, and locations * **Fix recommendations** — suggests fixes based on historical analysis and failure patterns > **Warning:** Suggested fixes are recommendations, not final changes. If you do not have access to the application source code, validate suggestions manually before applying them. Use the recommendations to understand *why* the test is failing, then adjust based on what you observe in the product. --- ## Test Case Management ### `list_manual_test_cases` Searches manual test cases within a project. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID that contains the test cases. | | `time` | string | No | `1h`, `2h`, `6h`, `12h`, `1d`, `3d`, `weekly`, `monthly`, or date range `YYYY-MM-DD, YYYY-MM-DD`. | | `search` | string | No | Match against title, description, or caseId. Example: `login` or `TC-123`. | | `suiteId` | string | No | Filter by suite ID. Use `list_manual_test_suites` to find IDs. | | `status` | string | No | `Active`, `Draft`, or `Deprecated`. | | `priority` | string | No | `Blocker`, `Critical`, `Major`, `Normal`, `Minor`, `Trivial`, or `Not set`. 
| | `severity` | string | No | `Blocker`, `Critical`, `Major`, `Normal`, `Minor`, `Trivial`, or `Not set`. | | `type` | string | No | `Functional`, `Smoke`, `Regression`, `Security`, `Performance`, `E2E`, `Integration`, `API`, `Unit`, `Accessibility`, `Compatibility`, `Acceptance`, `Exploratory`, `Usability`, or `Other`. | | `layer` | string | No | `E2E`, `API`, `Unit`, or `Not set`. | | `behavior` | string | No | `Positive`, `Negative`, `Destructive`, or `Not set`. | | `automationStatus` | string | No | `Manual`, `Automated`, or `To be automated`. | | `tags` | string | No | Comma-separated tags. Example: `smoke,regression`. | | `isFlaky` | boolean | No | `true` for flaky only, `false` for non-flaky. | | `limit` | number | No | Max results (max: 1000). | **Example** [Video: Search manual test cases with filters](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/list-manual-test-cases.mp4) ### `get_manual_test_case` Fetches one manual test case, including steps and custom fields. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID that contains the test case. | | `caseId` | string | Yes | Internal `_id` or human-readable ID (e.g., `TC-123`). | **Example** [Video: Fetch manual test case with steps and custom fields](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/get-manual-test-cases.mp4) ### `create_manual_test_case` Creates a manual test case under a specific suite. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID where the test case will be created. | | `title` | string | Yes | The test case title. | | `suiteId` | string | Yes | The suite ID. Use `list_manual_test_suites` to find IDs. | | `description` | string | No | Description of what the test covers. | | `status` | string | No | `Active`, `Draft`, or `Deprecated`. | | `stepsDeclarationType` | string | No | `Classic` or `Gherkin`. | | `preconditions` | string | No | Setup requirements before running this test. | | `postconditions` | string | No | Expected state after the test completes. | | `steps` | array | No | Classic: `{action, expectedResult, data}`. Gherkin: `{event, stepDescription}` where event is `Given`, `When`, `And`, `Then`, or `But`. | | `priority` | string | No | `Blocker`, `Critical`, `Major`, `Normal`, `Minor`, `Trivial`, or `Not set`. | | `severity` | string | No | `Blocker`, `Critical`, `Major`, `Normal`, `Minor`, `Trivial`, or `Not set`. | | `type` | string | No | `Functional`, `Smoke`, `Regression`, `Security`, `Performance`, `E2E`, `Integration`, `API`, `Unit`, `Accessibility`, `Compatibility`, `Acceptance`, `Exploratory`, `Usability`, or `Other`. | | `layer` | string | No | `E2E`, `API`, `Unit`, or `Not set`. | | `behavior` | string | No | `Positive`, `Negative`, `Destructive`, or `Not set`. | | `automationStatus` | string | No | `Manual`, `Automated`, or `To be automated`. | | `tags` | string | No | Comma-separated tags. | | `automation` | object | No | `{toBeAutomated, isFlaky, muted}` — all boolean. | | `attachments` | array | No | File attachments. Max 10 MB per file. | | `customFields` | object | No | Key-value pairs for project-specific custom fields. | **Example** [Video: Create a manual test case with steps and classification](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/create-manual-test-cases.mp4) ### `update_manual_test_case` Updates only the fields you provide. All other fields remain unchanged. 
| Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID containing the test case. | | `caseId` | string | Yes | Internal `_id` or human-readable ID (e.g., `TC-123`). | | `updates` | object | Yes | Fields to update. Accepts all fields from `create_manual_test_case`. | **Example** [Video: Update manual test case fields](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/update-manual-test-cases.mp4) ### `list_manual_test_suites` Returns the suite hierarchy for a project. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID to list suites from. | | `parentSuiteId` | string | No | Returns only child suites of this parent. Empty returns top-level suites. | **Example** [Video: List suite hierarchy with parent-child relationships](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/list-manual-test-suites.mp4) ### `create_manual_test_suite` Creates a new suite. Use `parentSuiteId` to nest it under an existing suite. | Parameter | Type | Required | Description | | :--- | :--- | :--- | :--- | | `projectId` | string | Yes | The project ID where the suite will be created. | | `name` | string | Yes | The name of the new test suite. | | `description` | string | No | Description of the test suite. | | `parentSuiteId` | string | No | Creates as a child of this parent. Empty creates a top-level suite. | **Example** [Video: Create a nested test suite](https://testdinostr.blob.core.windows.net/docs/docs/testdino-mcp/create-manual-test-suite.mp4) ## MCP Troubleshooting > Source: https://docs.testdino.com/mcp/troubleshooting > Description: Diagnose and fix common TestDino MCP server setup issues including connection failures, authentication errors, and tool registration problems. Resolve MCP server installation, editor configuration, authentication, and data lookup errors. ## Quick Reference | Symptom | Likely Cause | Jump to | | :--- | :--- | :--- | | `npm ERR! 404 Not Found` | Package name wrong or registry issue | [Installation](#installation-issues) | | `command not found: testdino-mcp` | Global bin not on PATH | [Installation](#installation-issues) | | Server not appearing in editor | Config file path or JSON syntax | [Editor Integration](#editor-integration) | | `health` returns "Invalid token" | Token expired, revoked, or missing scope | [Authentication](#authentication) | | `health` returns "No projects found" | Token has no project access | [Authentication](#authentication) | | `Run not found` or empty results | Wrong project, filters too narrow | [Data Lookup](#data-lookup) | | `ECONNREFUSED` or timeout | Network block or firewall | [Network](#network-errors) | ## Installation Issues **npm ERR! 404 Not Found - testdino-mcp** The package name is incorrect or your npm registry is misconfigured. Verify the package exists: ```bash npm view testdino-mcp ``` If this returns package info, the registry is fine. Check for typos in your MCP config. If it returns a 404, check your npm registry setting: ```bash npm config get registry ``` It should be `https://registry.npmjs.org/`. **command not found: testdino-mcp** Your npm global bin directory is not on your PATH. Find where npm installs global packages: ```bash npm config get prefix ``` Add the `bin` subdirectory to your PATH. 
On macOS/Linux: ```bash export PATH="$(npm config get prefix)/bin:$PATH" ``` Or skip the issue entirely by using `npx`: ```bash npx -y testdino-mcp ``` **EACCES: permission denied during global install** macOS and Linux may block global installs without elevated permissions. Fix by changing npm's default directory: ```bash mkdir ~/.npm-global npm config set prefix '~/.npm-global' export PATH=~/.npm-global/bin:$PATH ``` Or use `npx` instead of a global install — it requires no special permissions. ## Editor Integration **Claude Code: server not registered after claude mcp add** Verify the server was added: ```bash claude mcp list ``` If `testdino` does not appear, re-run the add command: ```bash claude mcp add testdino -- npx -y testdino-mcp --pat your-token ``` Check that your token is correct — Claude Code passes it directly to the server process. **Cursor: TestDino not showing in Settings → MCP** 1. Open your config file and validate JSON syntax (trailing commas, missing brackets): ```bash # macOS/Linux cat ~/.cursor/mcp.json | python3 -m json.tool # Or check project-level config cat .cursor/mcp.json | python3 -m json.tool ``` 2. Confirm the file path Cursor reads. Common locations: - Project: `.cursor/mcp.json` - macOS/Linux home: `~/.cursor/mcp.json` - Windows: `%APPDATA%\Cursor\mcp.json` 3. Restart Cursor fully (not just reload window — quit and reopen). 4. Go to **Settings → Tools & MCP** and check if TestDino appears. **Claude Desktop: server not detected after restart** 1. Confirm config file location for your OS: - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json` - Windows: `%APPDATA%\Claude\claude_desktop_config.json` - Linux: `~/.config/Claude/claude_desktop_config.json` 2. Verify `npx` works in your terminal: ```bash npx -y testdino-mcp --help ``` 3. Restart Claude Desktop completely (quit from system tray/dock, reopen). ## Authentication **health returns: Invalid token or Unauthorized** The token is expired, revoked, or was copied incorrectly. 1. Go to **app.testdino.com → User Settings → Personal Access Tokens** 2. Check the token's expiration date and status 3. If expired or revoked, generate a new token 4. Copy the new token and update your MCP config 5. Restart your MCP client 6. Run `health` again **health returns: No projects found or empty project list** The token exists but has no project access granted. 1. Go to **app.testdino.com → User Settings → Personal Access Tokens** 2. Click the eye icon next to your token to view its scope 3. Ensure at least one project has **Test runs** or **Manual tests** access enabled 4. If you just created the project, you may need to regenerate the token with the new project included **health succeeds but tools return Permission denied** The token has project access but not for the specific module. - `list_testruns` and `get_run_details` require **Test runs** module access - `list_manual_test_cases` and related tools require **Manual tests** module access Generate a new token with both modules enabled if you need access to both. ## Data Lookup **list_testruns returns empty results** Filters may be too narrow. Try widening your query: - Remove `by_branch`, `by_environment`, or `by_author` filters - Extend `by_time_interval` to `weekly` or `monthly` - Confirm you are querying the correct project If the project is new and has no uploads yet, results will be empty. 
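If your MCP client exposes raw tool calls, a widened query might look like the sketch below. It is shaped loosely like an MCP `tools/call` request; the exact envelope varies by client, and the project name and values here are placeholders, with parameter names taken from the `list_testruns` reference above.

```json
{
  "name": "list_testruns",
  "arguments": {
    "project-id/name": "my-project",
    "by_time_interval": "monthly",
    "limit": 50
  }
}
```

Start broad to confirm runs exist, then reintroduce `by_branch` or `by_environment` one at a time to find the filter that was excluding everything.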
**get_run_details returns: Run not found** - The `testrun_id` may be from a different project than what the token can access - If using `counter`, provide the correct `projectId` or project name alongside it - Use `list_testruns` first to confirm the run exists and get its exact ID **debug_testcase returns no failure data** The `debug_testcase` tool aggregates historical execution data. If the test case has only run once or has no recorded failures, there is not enough data for root cause analysis. Run the test case across multiple runs to build a failure history before using `debug_testcase`. ## Network Errors **ECONNREFUSED or connection timeout** The MCP server cannot reach TestDino APIs. 1. Check internet connectivity: ```bash curl -I https://app.testdino.com ``` 2. If behind a corporate firewall or VPN, ensure outbound HTTPS (port 443) to `*.testdino.com` is allowed 3. Check the [Cloud Endpoints](/data-privacy/cloud-endpoints) page for the full list of domains to allowlist **Rate limiting errors (429)** Reduce query scope: - Use specific filters instead of `get_all=true` - Lower `limit` values - Avoid rapid sequential calls — add short delays between queries If rate limiting persists, contact [support@testdino.com](mailto:support@testdino.com). ## Still Stuck? - [Discord](https://discord.gg/hGY9kqSm58): Join the community for real-time help - [Email Support](mailto:support@testdino.com): Email support@testdino.com ## Generate API Keys > Source: https://docs.testdino.com/guides/generate-api-keys > Description: Create and manage TestDino API keys to authenticate the Playwright reporter and upload test results from CI or local environments. Every upload needs a valid key. This guide covers how to create, configure, and manage API keys for your TestDino project. > **Note:** **MCP Server uses a different authentication method.** The MCP Server requires a Personal Access Token (PAT), not a Project API Key (See: TestDino PAT). ## Create a Key 1. Open your project in TestDino 2. Go to **Settings → API Keys** 3. Click **Generate Key** 4. Enter a Name and Expiration period (1 to 365 days) 5. Click **Create** The key is automatically copied to your clipboard. Store it in a password manager or your CI secrets. The key is shown once and cannot be retrieved later. ![API Keys management interface showing key list and generate button](https://testdinostr.blob.core.windows.net/docs/docs/faqs/api-keys.webp) ## Use your API key **Node.js:** Pass the key to the CLI with the `--token` flag: ```bash npx tdpw upload ./playwright-report --token="your-api-key" ``` Or set it as an environment variable: ```bash export TESTDINO_TOKEN="your-api-key" npx tdpw upload ./playwright-report ``` **Python:** Pass the key to the CLI with the `--token` flag: ```bash testdino upload ./test-results --token="your-api-key" ``` Or set it as an environment variable: ```bash export TESTDINO_TOKEN="your-api-key" testdino upload ./test-results ``` In CI workflows, store the key as a secret and reference it: ```yaml - name: Upload to TestDino run: npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" ``` ## Set up CI/CD secrets Never hardcode API keys in your workflow files. Store them as secrets and reference them at runtime. **GitHub Actions:** 1. Go to your repository → **Settings → Secrets and variables → Actions** 2. Click **New repository secret** 3. Name it `TESTDINO_TOKEN` 4. Paste your API key 5. Click **Add secret** **GitLab CI:** 1.
Go to your project → **Settings → CI/CD → Variables** 2. Click **Add variable** 3. Set Key to `TESTDINO_TOKEN` 4. Paste your API key in Value 5. Check the **Mask variable** to hide it in logs 6. Click **Add variable** **Jenkins:** 1. Go to **Manage Jenkins → Credentials** 2. Select your domain (or global) 3. Click **Add Credentials** 4. Choose **Secret text** 5. Set ID to `testdino-token` 6. Paste your API key in Secret **Azure DevOps:** 1. Go to **Pipelines → Library** 2. Create or open a variable group 3. Add a variable named `TESTDINO_TOKEN` 4. Paste your API key 5. Click the lock icon to make it secret **CircleCI:** 1. Go to **Project Settings → Environment Variables** 2. Click **Add Environment Variable** 3. Name it `TESTDINO_TOKEN` 4. Paste your API key ## Rotate a key When a key expires or you suspect it's been exposed: 1. Generate a new key in **Settings → API Keys** 2. Update your CI secrets with the new key 3. Run one upload to confirm the new key works 4. Revoke or delete the old key > **Tip:** Don't delete the old key until you've confirmed the new one works. This avoids downtime if something goes wrong during the switch. ## Security Practices * Use short expiration periods for CI keys * Create separate keys for different pipelines or environments * Rotate keys if you suspect exposure * Never commit keys to version control ## Key Limits | Plan | Keys per project | | :--- | :--- | | Community | 2 | | Pro | 5 | | Team | 10 | | Enterprise | Unlimited | ## Related GitHub Actions, CLI reference, and settings. - [Getting Started](https://docs.testdino.com/getting-started): Initial setup and first upload - [GitHub Actions](https://docs.testdino.com/guides/playwright-github-actions): CI workflow setup - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Full CLI reference - [Project Settings](https://docs.testdino.com/platform/project-settings): All project configuration *Need help? Reach out on [Discord](https://discord.gg/hGY9kqSm58) or email support@testdino.com.* --- ## Playwright GitHub Actions Integration > Source: https://docs.testdino.com/guides/playwright-github-actions > Description: Upload Playwright results from GitHub Actions. Get per-PR failure summaries, flaky test detection, and trace links in CI. Upload Playwright test results to TestDino after each GitHub Actions workflow run. Results appear in TestDino within seconds of upload. ## Quick Reference After your Playwright tests finish in GitHub Actions, upload results to TestDino with a single command. This table shows the upload options and flags you need to get started. | Task | Command/Flag | | :--- | :--- | | Basic upload | `npx tdpw upload ./playwright-report --token="..."` | | With HTML report | `--upload-html` | | With all artifacts | `--upload-full-json` | | Set environment | `--environment="staging"` | | Always run | `if: always()` | ## Basic Workflow Add the upload step after your Playwright tests: ```yaml name: Playwright Tests on: [push, pull_request] jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 with: fetch-depth: 0 - uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Install Playwright run: npx playwright install --with-deps - name: Run tests run: npx playwright test - name: Upload to TestDino if: always() run: npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-html ``` The `if: always()` condition ensures results upload even when tests fail. ## Store the API Key 1. 
Go to your GitHub repository 2. Open **Settings → Secrets and variables → Actions** 3. Click **New repository secret** 4. Name: `TESTDINO_TOKEN` 5. Value: Your TestDino API key 6. Click **Add secret** > **Note:** Need an API key? See [Generate API Keys](/guides/generate-api-keys) for step-by-step instructions. ## Configure Playwright Update `playwright.config.js` to generate the required reports: ```javascript export default { reporter: [ ['json', { outputFile: './playwright-report/report.json' }], ['html', { outputFolder: './playwright-report' }] ] } ``` ## Upload Options | Flag | Description | | :--- | :--- | | `--upload-html` | Include HTML report for interactive viewing | | `--upload-images` | Include screenshots | | `--upload-videos` | Include video recordings | | `--upload-traces` | Include Playwright traces | | `--upload-full-json` | Include all artifacts | | `--environment` | Set target environment tag | ## Environment Tagging Tag uploads with an environment to organize results: ```yaml - name: Upload to TestDino if: always() run: | npx tdpw upload ./playwright-report \ --token="${{ secrets.TESTDINO_TOKEN }}" \ --upload-full-json \ --environment="staging" ``` Environment tags appear in TestDino dashboards and can be used for filtering. ## Full Artifacts Example Upload all evidence for debugging: ```yaml - name: Upload to TestDino if: always() run: | npx tdpw upload ./playwright-report \ --token="${{ secrets.TESTDINO_TOKEN }}" \ --upload-full-json ``` ## Rerun Failed Tests Cache test metadata to enable selective reruns: ```yaml - name: Run tests run: npx playwright test - name: Cache metadata if: always() run: npx tdpw cache --token="${{ secrets.TESTDINO_TOKEN }}" ``` Create a separate workflow to rerun only failed tests: ```yaml - name: Get failed tests id: failed run: | npx tdpw last-failed --token="${{ secrets.TESTDINO_TOKEN }}" > failed.txt echo "tests=$(cat failed.txt)" >> $GITHUB_OUTPUT - name: Rerun failed tests if: steps.failed.outputs.tests != '' run: npx playwright test ${{ steps.failed.outputs.tests }} ``` ## Sharded Tests with Smart Reruns Run tests in parallel shards with automatic failed test detection: ```yaml name: Playwright Tests on: [push, pull_request] jobs: test: runs-on: ubuntu-latest strategy: fail-fast: false matrix: shardIndex: [1, 2, 3, 4] shardTotal: [4] steps: - uses: actions/checkout@v4 with: fetch-depth: 0 - uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Install Playwright run: npx playwright install --with-deps - name: Run Playwright Tests shell: bash env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} SHARD_INDEX: ${{ matrix.shardIndex }} SHARD_TOTAL: ${{ matrix.shardTotal }} run: | echo "GitHub run attempt: ${{ github.run_attempt }}" # Case 1: Re-run failed jobs → run only failed tests if [[ "${{ github.run_attempt }}" -gt 1 ]]; then echo "Detected re-run. Executing only last failed tests via TestDino." npx tdpw last-failed --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} > last-failed-flags.txt EXTRA_PW_FLAGS="$(cat last-failed-flags.txt)" if [[ -z "$EXTRA_PW_FLAGS" ]]; then echo "No failed tests found. Exiting."
exit 0 fi echo "Running failed tests without sharding:" echo "$EXTRA_PW_FLAGS" # IMPORTANT: preserve quotes eval "npx playwright test $EXTRA_PW_FLAGS" exit 0 fi # Case 2: Normal execution (first run) echo "Running all Playwright tests" npx playwright test \ --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} - name: Cache rerun metadata if: always() run: npx tdpw cache --token="${{ secrets.TESTDINO_TOKEN }}" - name: Upload test reports if: always() run: npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-full-json ``` This workflow automatically detects GitHub Actions reruns and executes only the tests that failed in the previous attempt, significantly reducing CI time. ## Related Create API keys, configure status checks, and optimize CI. - [Generate API Keys](https://docs.testdino.com/guides/generate-api-keys): Create upload tokens - [GitHub CI Checks](https://docs.testdino.com/guides/github-status-checks): Configure quality gates for PRs - [CI Optimization](https://docs.testdino.com/guides/playwright-ci-optimization): Reduce CI time with smart reruns - [Environment Mapping](https://docs.testdino.com/guides/environment-mapping): Map branches to environments --- ## Playwright Reporting with TeamCity > Source: https://docs.testdino.com/guides/playwright-teamcity > Description: Set up the TestDino TeamCity Recipe to upload Playwright test results automatically from TeamCity builds to the TestDino dashboard. This guide walks through the complete setup of the TestDino TeamCity Recipe, including installation methods, configuration options, examples, and troubleshooting. ## Quick Reference The TestDino TeamCity Recipe uploads Playwright results after each build. This table shows what artifacts each option includes. | Upload Option | What It Includes | | :--- | :--- | | JSON only | Test results, no artifacts | | [Image Attachments](#configuration-reference) | Screenshots | | [Video Attachments](#configuration-reference) | Video recordings | | [Trace Files](#configuration-reference) | Playwright traces | | [Full JSON Bundle](#configuration-reference) | Everything | ## Prerequisites Before you start: * A TeamCity project with Playwright tests * Playwright configured to generate reports (default output: `./playwright-report`) * A TestDino account with a project API key > **Note:** **Need an API key?** Go to your TestDino project, open **Settings → API Keys**, and generate one. ## Installation Methods **Install from Administration (Global):** This installs the plugin for all projects in your TeamCity instance. 1. Go to **TeamCity Administration** 2. Open **Plugins** 3. Click **Browse Plugins ([JetBrains Marketplace](https://plugins.jetbrains.com/plugin/29272-testdino--upload-playwright-report))** 4. Search for **"TestDino"** 5. Click **Install** 6. Restart TeamCity if prompted Best for administrators who want the recipe available across all projects. **Install While Adding a Build Step (Quick):** This installs the recipe when you need it. 1. Open your project's **Build Configuration** 2. Go to **Build Steps** 3. Click **Add Build Step** 4. Under *Runner Type*, click **Browse Marketplace** 5. Search for **"TestDino – Upload Playwright Report"** 6. Click **Download & Install** 7. The recipe appears immediately in the Runner dropdown Best for developers who want to get started fast. ## Adding the Build Step ### Open Build Steps 1. Go to your TeamCity **Build Configuration** 2. Click **Build Steps** 3. 
Click **Add build step** ![Adding a build step in TeamCity](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/adding-build-step.webp) ### Select the Runner From the **Runner Type** dropdown, select **TestDino - Upload Playwright Report**. The recipe configuration form appears. ![Select TestDino runner in TeamCity](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/select-runner.webp) ### Configure and Save Fill in the fields based on your setup (see Configuration Reference below), then click **Save**. ## Configuration Reference ![TeamCity configuration reference for TestDino](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/configuration-reference.webp) | Field | Description | Default | When to Use | | ----- | ----- | ----- | ----- | | **Report Directory** | Folder containing your Playwright reports (JSON + HTML + artifacts) | `./playwright-report` | Change this if your reports live somewhere else | | **TestDino API Token** | Your project API key for authentication. Can also be set as `TESTDINO_TOKEN` environment variable | Required | Always needed for secure uploads | | **HTML Reports** | Include the interactive HTML report | Checkbox | Turn on when you want the full Playwright report UI in TestDino | | **Image Attachments** | Include screenshots captured during test runs | Checkbox | Helpful for visual debugging and failure analysis | | **Video Attachments** | Include video recordings of test executions | Checkbox | Great for debugging flaky tests or complex UI flows | | **Trace Files** | Include Playwright trace archives | Checkbox | Use when you need step-by-step debugging with the trace viewer | | **File Attachments** | Include extra files like `.md`, `.pdf`, `.log`, or `.txt` | Checkbox | Useful if your tests generate logs or documentation | | **Full JSON Bundle** | Upload everything: JSON + HTML + images + videos + traces + files | Checkbox | Best option for complete analytics in TestDino | | **Custom JSON Report Path** | Override the default JSON file location | Empty | Set this if your JSON report has a non-standard path | | **Custom HTML Report Path** | Override the default HTML report folder | Empty | Set this if your HTML output is in a custom folder | | **Custom Trace Directory** | Override the default trace folder | Empty | Set this if traces are stored separately | | **Verbose Logging** | Show detailed debug output in build logs | Checkbox | Turn on when troubleshooting upload issues | ## Configuration Examples **Basic Setup (JSON Only):** Upload just the JSON report for quick analytics: | Field | Value | | :--- | :--- | | Report Directory | `./playwright-report` | | TestDino API Token | `your-api-key` | | All checkboxes | Unchecked | **Full Analytics Setup:** Upload everything for complete debugging capabilities: | Field | Value | | :--- | :--- | | Report Directory | `./playwright-report` | | TestDino API Token | `your-api-key` | | Full JSON Bundle | Checked | **Custom Paths Setup:** When your reports use non-standard locations: | Field | Value | | :--- | :--- | | Report Directory | `./test-output` | | TestDino API Token | `your-api-key` | | Custom JSON Report Path | `./test-output/results/report.json` | | Custom HTML Report Path | `./test-output/html-report` | | Custom Trace Directory | `./test-output/traces` | | HTML Reports | Checked | | Trace Files | Checked | **Screenshots and Videos Only:** When you want visual artifacts without traces: | Field | Value | | :--- | :--- | | Report Directory | 
`./playwright-report` | | TestDino API Token | `your-api-key` | | HTML Reports | Checked | | Image Attachments | Checked | | Video Attachments | Checked | ## Using Environment Variables Instead of entering your API token directly in the build step, use an environment variable for better security. ### Setting the Token 1. Go to your TeamCity **Build Configuration** 2. Click **Parameters** 3. Add a new parameter: - **Name**: `env.TESTDINO_TOKEN` - **Value**: Your API key - **Type**: Password The recipe automatically reads `TESTDINO_TOKEN` from the environment. This keeps your key out of build logs and makes rotation easier. ### Benefits - Token stays hidden in build logs - Easy to rotate without editing build steps - Shareable build configurations without exposing secrets ## Running Your Build With the recipe configured: 1. Your Playwright tests run as usual 2. Reports and artifacts are generated 3. The TestDino recipe collects everything 4. Files are uploaded to your TestDino project 5. A confirmation message and link appear in the build log > **Tip:** Click the link to view your test run in TestDino. ## Viewing Results in TestDino ![TeamCity TestDino](https://testdinostr.blob.core.windows.net/docs/docs/guides/playwright-teamcity/test-runs-attempts.webp) **After the upload:** 1. Open the link from your build log, or go to your TestDino project 2. Click **Test Runs** in the sidebar 3. Find your latest run with commit message, branch, and test counts 4. Click the run to see the full breakdown **From there, you can:** - Check screenshots, videos, and traces - See flaky test patterns - Track trends over time - Create Jira, Linear, or Asana tickets from failures ## Troubleshooting **1. Upload Fails with Authentication Error** **Check your API token:** * Verify the token is correct and hasn't expired * Confirm the token belongs to the right TestDino project * Make sure you've entered it in the recipe or set `TESTDINO_TOKEN` in the environment **2. Report Directory Not Found** **Verify your paths:** * Check that your Playwright tests actually ran and generated output * Confirm the path matches your `playwright.config.js` settings * Try using an absolute path if relative paths aren't working **3. Upload Succeeds, but Run Doesn't Appear** **Wait and refresh:** * Uploads can take a few seconds to process * Check the build log for error messages * Verify your API token has access to the project * Click the **Sync** button in the TestDino Test Runs view **4. Missing Artifacts (Screenshots, Videos, Traces)** **Enable the right checkboxes:** * Screenshots → **Image Attachments** * Videos → **Video Attachments** * Traces → **Trace Files** Or enable **Full JSON Bundle** to upload everything at once. **5. Build Step Runs Before Tests Finish** **Reorder your build steps:** * The TestDino step must run after your Playwright test step * Drag and drop to reorder in the Build Steps list **6. Need Detailed Logs?** Turn on **Verbose Logging** in the recipe settings. This adds extra output to your build log that helps pinpoint where things go wrong. ## Best Practices **Keep Your API Token Secure** * Store the token in TeamCity parameters with password type * Avoid pasting tokens directly in build step fields * Rotate tokens periodically **Upload What You Need** * For quick feedback, JSON-only is fastest * For debugging failures, add screenshots and traces * For full analysis, use the Full JSON Bundle option **Run TestDino After Tests Complete** Make sure the TestDino step runs after your Playwright tests finish. 
Consider setting the step to run **"Even if build steps fail"** so you can see failures in TestDino even when tests don't pass. **Match Your Report Paths** If you customize Playwright's output directories in `playwright.config.js`, update the recipe's custom path fields to match. **Use Verbose Logging During Setup** Turn on verbose logging when first configuring the recipe. Once everything works, you can turn it off to keep build logs clean. ## Related Documentation - [TeamCity Integration Overview](https://docs.testdino.com/integrations/ci-cd/teamcity): Learn about TestDino's TeamCity integration features and capabilities - [Getting Started](https://docs.testdino.com/getting-started): New to TestDino? Start here for a complete introduction - [API Keys](https://docs.testdino.com/guides/generate-api-keys): Create upload tokens - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): Explore how to view and analyze your test runs in the platform ## Need Help? - [Discord](https://discord.gg/hGY9kqSm58): Join the TestDino community. - [Email](mailto:support@testdino.com): Contact us at support@testdino.com --- ## Environment Mapping > Source: https://docs.testdino.com/guides/environment-mapping > Description: Map Git branches to named environments using exact match or regex. Automatically route Playwright test results to the correct environment in TestDino. Branch patterns map your Git branches to environments (Dev, Staging, Production). Test results automatically route to the correct environment based on the branch that triggered the run. [Video: Environment Mapping video](https://www.youtube.com/embed/oVaYPIsYrJA?si=tXfVEvWOoxtLy1zg) ## Quick Reference | Symbol | Meaning | Example | | :--- | :--- | :--- | | `^` | Start of name | `^dev` → `dev/test` ✓ | | `$` | End of name | `main$` → `main` ✓ | | `(?i)` | Case-insensitive | `(?i)^main$` → `MAIN` ✓ | | `\d` | Any digit | `v\d` → `v1` ✓ | | `\|` | OR | `dev\|qa` → `dev` or `qa` ✓ | | `( )` | Group | `^(dev\|qa)/` → `dev/` or `qa/` ✓ | See [full symbol reference](#regex-symbols-reference) below. ## Pattern Types ### Exact Match Match the branch name exactly as written. | Pattern | Matches | Does not match | | :--- | :--- | :--- | | `main` | `main` | `main-backup`, `feature/main` | Use exact match for specific branch names like `main`, `master`, `production`. ### Regex Patterns Use regular expressions for flexible matching.
| Pattern | Description | Matches | | :--- | :--- | :--- | | `^dev/` | Starts with `dev/` | `dev/feature-123`, `dev/bug-fix` | | `^main$` | Exactly `main` | `main` | | `^(main\|master)$` | Either `main` or `master` | `main`, `master` | | `^release/v\d+` | Release with version | `release/v1`, `release/v2.0` | ## Common Patterns ### Branch Prefixes | Pattern | Description | Matches | | :--- | :--- | :--- | | `^feature/` | Feature branches | `feature/login`, `feature/payment` | | `^hotfix/` | Hotfix branches | `hotfix/critical-bug` | | `^release/` | Release branches | `release/v1.0`, `release/2024-01` | | `^pull/\d+` | GitHub PR branches | `pull/123/merge`, `pull/456/head` | ### Version Numbers | Pattern | Description | Matches | | :--- | :--- | :--- | | `^release/v\d+` | Version with `v` prefix | `release/v1`, `release/v2.0` | | `^release/\d+\.\d+` | Semantic version | `release/1.0`, `release/2.5` | | `^release/v\d+\.\d+\.\d+$` | Exact semver | `release/v1.0.0` | ### Case-Insensitive Use `(?i)` when your team uses inconsistent casing: | Pattern | Matches | | :--- | :--- | | `(?i)^main$` | `main`, `MAIN`, `Main` | | `(?i)^release/` | `release/`, `RELEASE/`, `Release/` | ## Common Use Cases ### Git Flow | Environment | Pattern | | :--- | :--- | | Production | `^(main\|master)$` | | Staging | `^(staging\|stage)$` | | Development | `^(dev\|develop)$` | | Features | `^feature/` | | Hotfixes | `^hotfix/` | | Releases | `^release/` | ### Environment Prefixes | Environment | Pattern | | :--- | :--- | | Production | `^prod/` | | Staging | `^stg/` | | QA | `^qa/` | | Development | `^dev/` | ### Version Releases | Environment | Pattern | | :--- | :--- | | Production | `^release/v\d+\.\d+\.\d+$` | | Release Candidates | `^release/v\d+\.\d+\.\d+-rc\d+$` | | Beta | `^release/v\d+\.\d+\.\d+-beta$` | ## Regex Symbols Reference | Symbol | Meaning | Example | Matches | | :--- | :--- | :--- | :--- | | `^` | Start of branch name | `^dev` | `dev/feature` ✓, `my-dev` ✗ | | `$` | End of branch name | `main$` | `main` ✓, `main-old` ✗ | | `(?i)` | Case-insensitive | `(?i)^main$` | `main`, `MAIN` ✓ | | `.` | Any single character | `dev.` | `dev/`, `dev-`, `dev1` | | `*` | Zero or more of previous | `dev.*` | `dev`, `dev/feature` | | `+` | One or more of previous | `dev.+` | `dev/feature` ✓, `dev` ✗ | | `\d` | Any digit (0-9) | `v\d` | `v1`, `v2`, `v9` | | `\|` | OR | `dev\|qa` | `dev`, `qa` | | `[ ]` | Any character in brackets | `[0-9]` | `0`, `1`, `2`, ... 
`9` | | `( )` | Group patterns | `^(dev\|qa)/` | `dev/test`, `qa/test` | | `[^ ]` | NOT in brackets | `[^0-9]` | `a`, `b`, `-` (not digits) | ## Best Practices ### Do | Practice | Good | Bad | | :--- | :--- | :--- | | Anchor at start | `^dev/` | `dev/` (matches anywhere) | | Use `$` for exact | `^main$` | `^main` (matches `main-old`) | | Test patterns first | Use [regex101.com](https://regex101.com/) | Deploy untested | | Use `(?i)` for case | `(?i)^release/` | `^[Rr][Ee][Ll]...` | ### Avoid | Pattern | Problem | | :--- | :--- | | `.*` | Matches everything | | `.+` | Matches any branch | | `dev*` | Matches `de`, `dev`, `devv` (the `*` makes the final `v` optional) | | `dev;echo` | Security risk (special characters) | ## Validation ### Errors (blocks saving) - Invalid characters: `;`, `&`, `` ` ``, `"`, `'`, `<`, `>`, `%` - Invalid regex syntax: unclosed brackets, invalid escapes ### Warnings (allows saving) - Unanchored patterns that might match unintentionally - A suggested fix is provided ## Testing Patterns ### Example: `^dev/` | Branch | Match | Reason | | :--- | :--- | :--- | | `dev/feature-login` | ✓ | Starts with `dev/` | | `dev/bug-fix` | ✓ | Starts with `dev/` | | `development/test` | ✗ | Starts with `development/` | | `my-dev/branch` | ✗ | Does not start with `dev/` | ### Example: `^release/v\d+\.\d+` | Branch | Match | Reason | | :--- | :--- | :--- | | `release/v1.0` | ✓ | Matches pattern | | `release/v2.5.3` | ✓ | Matches pattern (and more) | | `release/version1.0` | ✗ | Missing `v` before number | | `release/beta` | ✗ | No version number | ## CLI Override [Video: CLI Environment Override video](https://www.youtube.com/embed/2jUSi6EZEqw?si=Tkos9cRpbp_5p0dn) Bypass branch mapping with the `--environment` flag: ```bash npx tdpw upload ./playwright-report --token="..." --environment="staging" ``` The CLI flag takes priority over branch mapping rules. See [CLI reference](/cli/testdino-playwright-nodejs) for details. ## Related GitHub Actions, status checks, and CI optimization. - [Project Settings](https://docs.testdino.com/platform/project-settings): Configure branch mapping - [GitHub Actions](https://docs.testdino.com/guides/playwright-github-actions): CI workflow setup - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Upload options reference - [Analytics by Environment](https://docs.testdino.com/platform/analytics/environment): View results per environment --- ## Playwright Flaky Tests > Source: https://docs.testdino.com/guides/playwright-flaky-test-detection > Description: Auto-detect flaky Playwright tests with root cause classification. Track trends, quarantine unstable tests, restore CI trust. A flaky test produces different results across runs without code changes. It passes on one execution and fails on the next, or passes only after a retry.
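As a minimal sketch of how this happens in practice (the URL and heading text below are hypothetical), a fixed wait races the application: it passes when the page loads quickly and fails when it does not, while a web-first assertion retries until the element appears or the timeout elapses.

```typescript
import { test, expect } from '@playwright/test';

test('dashboard loads', async ({ page }) => {
  await page.goto('https://example.com/dashboard'); // hypothetical URL

  // Flaky: a fixed wait wins or loses the race depending on load time.
  // await page.waitForTimeout(2000);

  // Stable: the assertion retries until the heading is visible or times out.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```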
## Quick Reference | View | Path | Best for | | :--- | :--- | :--- | | [Dashboard](#dashboard) | Dashboard → Most Flaky Tests | Top flaky tests in the selected period | | [Analytics Summary](#analytics-summary) | Analytics → Summary | Flakiness trends | | [Test Run Summary](#test-run-summary) | Test Runs → Summary | Per-run flaky breakdown | | [Test Case History](#test-case-history) | Test Case → History | Single test stability | | [Test Explorer](#test-explorer) | Test Explorer | Flaky rate by file and test case | | [Environment Analytics](#environment-analytics) | Analytics → Environment | Flaky rates per environment | ## How Detection Works Flaky test detection activates automatically when [retries are enabled](https://playwright.dev/docs/test-retries) in Playwright. No additional configuration required. ```typescript // playwright.config.ts import { defineConfig } from '@playwright/test'; export default defineConfig({ retries: process.env.CI ? 2 : 0, }); ``` TestDino detects flaky tests in two ways: **Within a single run.** A test that fails initially but passes on retry is marked flaky. The retry count appears in the test details. ![Test marked as flaky after passing on retry](https://testdinostr.blob.core.windows.net/docs/docs/guides/playwright-flaky-test-detection/retry-attempts.webp) **Across multiple runs.** Tests with inconsistent outcomes on the same code are flagged. TestDino tracks pass/fail patterns and calculates a stability percentage. ![Stability percentage showing inconsistent test results across runs](https://testdinostr.blob.core.windows.net/docs/docs/guides/playwright-flaky-test-detection/test-metrics.webp) > **Note:** Both detection methods indicate that the test result depends on something other than your code. ## Flaky Test Categories TestDino classifies flaky tests by root cause: | Category | Description | | :--- | :--- | | Timing Related | Race conditions, order dependencies, and insufficient waits | | Environment Dependent | Fails only in specific environments or runners | | Network Dependent | Intermittent API or service failures | | Assertion Intermittent | Non-deterministic data causes occasional mismatches | | Other | Unstable for reasons outside the above | ### Common causes - Fixed waits instead of waiting for the page to be ready - Missing `await` causes steps to run out of order - Weak selectors that match more than one element - Tests share data and affect each other - Parallel runs collide on the same user or record - Slow or unstable network or third-party APIs - CI setup differs from local environment ## Where to Find Flaky Tests ### Dashboard Open the [Dashboard](/platform/playwright-test-dashboard). The **Most Flaky Tests** panel lists tests with the highest flaky rates in the selected period. Each entry shows the test name, spec file, flaky rate percentage, and a link to the latest run. Click any test to open its most recent execution. ![Most Flaky Tests panel showing test names with flaky percentages](https://testdinostr.blob.core.windows.net/docs/docs/dashboard/dashboard-most-flaky.webp) ### Analytics Summary Open **Analytics → Summary**. The **Flakiness & Test Issues** chart shows the flaky rate trend over time and a list of flaky tests with spec file and execution date. A rising trend indicates increasing instability in your test suite. ![Flakiness trend chart with percentage over time and list of affected tests](https://testdinostr.blob.core.windows.net/docs/docs/analytics/summary/flakiness.webp) ### Test Run Summary Open any test run.
The **Summary** tab shows flaky test counts grouped by category: Timing Related, Environment Dependent, Network Dependent, Assertion Intermittent, and Other Flaky. ![Test run summary showing flaky test counts by category](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/summary/kpi-tiles.webp) Click a category to filter the detailed analysis table. ### Test Case History Open a specific test case and go to the **History** tab. The stability percentage shows how often the test passes: Stability = (Passed Runs / Total Runs) x 100 A test with 100% stability has never failed or been flaky. Any value below 100% indicates inconsistent behavior. The **Last Flaky** tile links to the most recent run where the test was marked flaky. ![Test case history showing stability percentage and last flaky run](https://testdinostr.blob.core.windows.net/docs/docs/test-cases/history/kpi-tiles.webp) ### Test Explorer Open **Test Explorer** from the sidebar. The **Flaky Rate** column shows the percentage of executions with flaky results for each spec file or test case. Sort by flaky rate to find the most unstable specs. Expand a spec row to see flaky rates for individual test cases, or switch to Flat view to compare across files. [Video: Test Explorer showing flaky rates for spec files](https://testdinostr.blob.core.windows.net/docs/specs.mp4) ### Environment Analytics Open [**Analytics → Environment**](/platform/analytics/environment). The **Flaky Rate** row shows flaky percentages per environment, so you can compare stability across staging, production, and other environments. > **Note:** High flaky rates in specific environments suggest environment-dependent issues like resource constraints or service availability. ## CI Check Behavior GitHub CI Checks handle flaky tests in two modes: | Mode | Behavior | Use case | | :--- | :--- | :--- | | Strict | Flaky tests count as failures | Production branches where stability matters | | Neutral | Flaky tests excluded from pass rate | Development branches to reduce noise | See [GitHub CI Checks](/guides/github-status-checks) for configuration details. ## Export Flaky Test Data Use the TestDino MCP server to query flaky tests programmatically. For example, ask your AI assistant: ``` List flaky tests from the last 7 days on the main environment ``` [Video: TestDino MCP - List Flaky Tests](https://www.youtube.com/embed/Qg4e0kEHVK0?si=QQJEjeyGz3ShnPZx&start=64) > **Tip:** The MCP server returns test names, flaky rates, and run IDs for further analysis. See [TestDino MCP](/mcp/overview) for more details. ## Related Test Explorer, CI checks, MCP, and analytics. - [Test Explorer](https://docs.testdino.com/platform/playwright-test-explorer): Analyze flaky rates across spec files and test cases - [GitHub CI Checks](https://docs.testdino.com/guides/github-status-checks): Configure flaky handling in CI - [TestDino MCP](https://docs.testdino.com/mcp/overview): Query flaky data with AI - [Analytics](https://docs.testdino.com/platform/playwright-test-analytics): Project-wide test analytics --- ## Debug Playwright Test Failures > Source: https://docs.testdino.com/guides/debug-playwright-test-failures > Description: Debug Playwright test failures in TestDino with embedded trace viewer, screenshots, video playback, and error grouping. TestDino collects evidence from each test execution: screenshots, videos, traces, console logs, and error details. Understand failures without re-running tests locally.
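Playwright only records this evidence if capture is enabled. A minimal config sketch, using the capture options covered in the Trace Viewer and Visual Evidence guides below (the values shown are the commonly recommended balance, not the only valid ones):

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // capture the UI state when a test fails
    video: 'on-first-retry',       // record execution only when a test retries
    trace: 'on-first-retry',       // capture a full trace only when a test retries
  },
});
```

Upload with `--upload-full-json` so all of these artifacts are attached to the run in TestDino.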
## Evidence Types | Evidence | What it shows | When to use | Guide | | :--- | :--- | :--- | :--- | | Screenshots | Visual state at failure point | UI layout issues, missing elements | [Visual Evidence](/guides/debug-playwright-failures/visual-evidence) | | Video | Full test execution recording | Timing issues, unexpected interactions | [Visual Evidence](/guides/debug-playwright-failures/visual-evidence) | | Trace | Step-by-step execution with network and DOM | Complex failures, race conditions | [Trace Viewer](/guides/playwright-trace-viewer) | | Console | Browser console output | JavaScript errors, API failures | - | | Error details | Error message and stack trace | Assertion failures, exceptions | [Error Grouping](/guides/playwright-error-grouping) | ## Where to Find Evidence Open a test run and click on any failed test. The test case details page shows: 1. **KPI tiles:** Status, runtime, retry attempts 2. **Evidence tabs:** One tab per attempt (Run, Retry 1, Retry 2) ![Test case KPI tiles showing status, runtime, and retry count](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/test-case/overview-kpi-tiles.webp) Each evidence tab contains: * Error details with stack trace * Test steps with timing * Screenshots * Console output * Video player * Trace viewer link * Visual comparison (for screenshot tests) ![Evidence panel showing error details, screenshots, console, and video tabs](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/test-case/evidence-panel.webp) ## Debugging Workflow ### Check the error message Start with the error details. The message often points to the issue: assertion mismatch, element not found, or timeout. ### Look at screenshots Screenshots show the UI state at failure. Compare with expected behavior. ### Watch the video Videos reveal timing issues and unexpected interactions that screenshots miss. ### Open the trace [Traces](/guides/playwright-trace-viewer) provide the most detail: every action, network request, and DOM change. Use traces for complex failures. ### Check console logs Console output shows JavaScript errors, failed API calls, and application logs. ## Next Steps - [Trace Viewer](https://docs.testdino.com/guides/playwright-trace-viewer): Step-by-step execution analysis - [Visual Evidence](https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence): Screenshots and videos - [Error Grouping](https://docs.testdino.com/guides/playwright-error-grouping): Find patterns across failures - [Test Cases](https://docs.testdino.com/platform/playwright-test-cases): Full test case documentation --- ## Playwright trace viewer online > Source: https://docs.testdino.com/guides/playwright-trace-viewer > Description: Step through Playwright test execution with the embedded trace viewer. Inspect DOM, network, console - no download needed. Playwright traces capture every action, network request, and DOM state during test execution. TestDino displays these traces so you can debug failures without re-running tests locally. ## Quick Reference The Trace Viewer has several panels, each displaying different execution details. Use this table to find the appropriate panel for your investigation. 
| Trace Panel | What It Shows | | :--- | :--- | | [Actions](#actions-panel) | Each test step with timing and result | | [Call](#navigate-the-trace) | Arguments and return values for the action | | [Console](#console-tab) | Browser console output at each step | | [Network](#network-tab) | HTTP requests and responses | | [Source](#source-tab) | Test code with current line highlighted | | [DOM Snapshot](#dom-snapshot) | Page structure at each action | See [Playwright trace documentation](https://playwright.dev/docs/trace-viewer) for more options. ## Enable Traces Configure Playwright to record traces: ```typescript // playwright.config.ts import { defineConfig } from '@playwright/test'; export default defineConfig({ use: { trace: 'on-first-retry', // Capture on retries only } }); ``` Options: | Value | Behavior | | :--- | :--- | | `'off'` | No traces | | `'on'` | Trace every test | | `'on-first-retry'` | Trace only on retry (recommended) | | `'retain-on-failure'` | Keep traces for failed tests | Traces add overhead. Use `'on-first-retry'` to balance debugging capability with test speed. ## Upload Traces Include traces in your upload: ```bash npx tdpw upload ./playwright-report --token="your-api-key" --upload-full-json ``` > **Warning:** The `--upload-full-json` flag includes trace files. Without it, trace links in TestDino will not work. ## When to Use Traces Traces are most useful for: * **Race conditions:** See if an element appeared after the test tried to interact * **Timing issues:** Check how long each step took * **Network failures:** Inspect API requests and responses * **Complex flows:** Step through multi-page interactions For simple failures, start with screenshots or error messages. Open traces when you need more detail. ## Open the Trace Viewer ### Open a test run Navigate to the test run in TestDino ### Click a failed test Select the test you want to debug ### Select the attempt tab Choose Run, Retry 1, or Retry 2 ### Open the trace Click **View Trace** or the trace viewer link ![Playwright Trace Viewer showing timeline, actions, network, and DOM panels](https://testdinostr.blob.core.windows.net/docs/docs/faqs/trace.webp) > **Tip:** The trace opens in a new tab with the full Playwright trace UI. ## Navigate the Trace ### Actions Panel The left sidebar lists every action in execution order: * `page.goto` * `locator.click` * `expect.toBeVisible` Click any action to see its details. Failed actions are highlighted in red. ### Timeline The timeline at the top shows action duration. Long bars indicate slow steps. Gaps may indicate waiting or idle time. ### DOM Snapshot Each action captures a DOM snapshot. Use it to see: * Whether the element existed * What the page looked like at that moment * If other elements were blocking the target Toggle between **Before** and **After** to see DOM changes from the action. ### Network Tab View all HTTP requests during the test: * Request URL, method, and headers * Response status, headers, and body * Timing breakdown (DNS, connect, SSL, wait, download) Failed requests appear in red. Look for 4xx/5xx responses or timeouts. ### Console Tab Shows browser console output at each action: * `console.log` from application code * JavaScript errors and warnings * Failed resource loads Console errors often reveal issues not visible in the UI. ### Source Tab Displays the test code with the current action highlighted. Useful for correlating trace steps with your test file.
[Video: Trace Viewer demonstration](https://testdinostr.blob.core.windows.net/docs/docs/guides/playwright-flaky-test-detection/trace-video.mp4) ## Debug Common Failures | Pattern | What to check | | :--- | :--- | | [Element not found](#element-not-found) | DOM snapshot at failing step | | [Timeout](#timeout) | Timeline for long gaps, network for slow responses | | [Assertion mismatch](#assertion-failure) | Expected vs actual in DOM snapshot | | [Race condition](#race-condition) | Action order across runs | **Element Not Found** 1. Go to the failed action 2. Check the DOM snapshot 3. Look for the target element If the element exists but the locator did not match, the selector is wrong. If the element does not exist, the page state was different than expected. **Timeout** 1. Check the action duration in the timeline 2. Look at the DOM snapshot before timeout 3. Check the network tab for pending requests Timeouts often mean the page was still loading or an element was not yet visible. **Assertion Failure** 1. Go to the failed `expect` action 2. Check the Call panel for expected vs actual values 3. Review the DOM snapshot to see the actual state **Race Condition** 1. Compare DOM snapshots before and after the action 2. Check if elements appeared or disappeared between steps 3. Look at network timing to see if responses arrived late ## Related Error grouping, visual evidence, and debug overview. - [Visual Evidence](https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence): Screenshots and videos - [Error Grouping](https://docs.testdino.com/guides/playwright-error-grouping): Find patterns across failures - [Flaky Tests](https://docs.testdino.com/guides/playwright-flaky-test-detection): Identify and manage flaky tests - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Upload options reference --- ## Visual Evidence for Test Failures > Source: https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence > Description: View screenshots, videos, and visual diffs attached to failed Playwright tests directly in TestDino to identify UI regressions and layout issues fast. * **Screenshots** capture the UI state at specific moments. * **Videos** record the full test execution. Use them to identify visual issues, timing problems, and unexpected behavior. ## Quick Reference Screenshots and videos require specific Playwright config and upload flags. Use this table as a checklist for your setup. | Artifact | Config Option | Upload Flag | | :--- | :--- | :--- | | Screenshots | `screenshot: 'only-on-failure'` | [`--upload-images`](/cli/testdino-playwright-nodejs) | | Videos | `video: 'on-first-retry'` | [`--upload-videos`](/cli/testdino-playwright-nodejs) | | All artifacts | N/A | [`--upload-full-json`](/cli/testdino-playwright-nodejs) | ## When to use each **Use screenshots when:** * The error mentions a missing or incorrect element * You need to verify the UI layout * You want quick visual confirmation **Use videos when:** * Screenshots look correct, but the test still failed * You suspect timing or animation issues * You need to see the sequence of events ## Enable screenshots Playwright captures screenshots on failure by default. 
Configure additional options: ```typescript import { defineConfig } from '@playwright/test'; export default defineConfig({ use: { screenshot: 'only-on-failure', // or 'on' for all tests }, }); ``` Screenshot Options: * `'off'`: No screenshots * `'on'`: Screenshot after every test * `'only-on-failure'`: Screenshot only when test fails (recommended) ## Enable video recording Configure video recording in your Playwright config: ```typescript import { defineConfig } from '@playwright/test'; export default defineConfig({ use: { video: 'on-first-retry', }, }); ``` Video options: * `'off'`: No video * `'on'`: Always record * `'on-first-retry'`: Record only on retry * `'retain-on-failure'`: Keep video only for failed tests > **Tip:** Videos add overhead and file size. Use `'on-first-retry'` to capture evidence for flaky tests without slowing every run. ### Video size settings Control video dimensions: ```typescript use: { video: { mode: 'on-first-retry', size: { width: 1280, height: 720 } } } ``` > **Note:** Smaller dimensions reduce file size but may result in missing details. ## Upload Visual Evidence | What to upload | Command | | :--- | :--- | | Screenshots only | `npx tdpw upload ./playwright-report --token="..." --upload-images` | | Videos only | `npx tdpw upload ./playwright-report --token="..." --upload-videos` | | Both | `npx tdpw upload ./playwright-report --token="..." --upload-images --upload-videos` | | All artifacts | `npx tdpw upload ./playwright-report --token="..." --upload-full-json` | ## View screenshots 1. Open a test run 2. Click a failed test 3. Go to the Screenshots section in the evidence panel Screenshots show the page state at the moment of failure. For tests with retries, each attempt has its own screenshots. ![Evidence panel showing screenshots section](https://testdinostr.blob.core.windows.net/docs/docs/guides/debug-failure/screenshots.webp) ### What to look for | Issue | What You See | | :--- | :--- | | Missing element | Element not present in the expected location | | Wrong content | Text or data does not match the assertion | | Layout shift | Elements in unexpected positions | | Loading state | Spinners, skeletons, or placeholders are visible | | Modal blocking | Overlay covering the target element | | Wrong page | Navigation did not complete | ## View videos 1. Open a test run in TestDino 2. Click a failed test 3. Go to the Video section in the evidence panel The video player shows the full test execution. Use the timeline to jump to specific moments.
![Video player showing test execution with timeline controls](https://testdinostr.blob.core.windows.net/docs/docs/guides/debug-failure/videos.webp) ### What to look for | Issue | What You See | | :--- | :--- | | Race condition | Element appears after the test tries to interact | | Animation issue | Test clicks during transition | | Slow load | Page is still loading when the test acts | | Unexpected popup | Modal or alert blocks interaction | | Wrong element clicked | Visible in the video but not obvious from the screenshot | ## Visual Comparison For tests using `toHaveScreenshot()`, TestDino shows visual diffs: | Mode | What it shows | | :--- | :--- | | Actual | What the test captured | | Expected | The baseline image | | Diff | Highlighted differences | | Side by Side | Both images together | | Slider | Interactive comparison | [Video: Visual comparison modes](https://testdinostr.blob.core.windows.net/docs/visual-comparison.mp4) ## Console Logs The Console section shows browser console output: * JavaScript errors * Application logs * Network errors * Warnings Console logs help correlate visual issues with underlying errors. ## Debugging with Visual Evidence ### Screenshots * Check if expected elements are visible * Look for error messages or loading states * Compare with the expected UI ### Videos * Watch the sequence of actions * Identify timing issues * See animations and transitions that screenshots miss ### Visual comparison * Spot unintended UI changes * Verify layout consistency * Detect rendering differences across environments ## Storage Limits Visual artifacts increase storage usage. Configure retention based on your plan: | Plan | Artifact Storage | | :--- | :--- | | Community | 1 GB | | Pro | 5 GB | | Team | 10 GB | | Enterprise | Custom | > **Tip:** Use the `retain-on-failure` video option to reduce storage while keeping evidence for failures. ## Related Trace viewer, error grouping, and debug overview. - [Trace Viewer](https://docs.testdino.com/guides/playwright-trace-viewer): Step-by-step execution analysis - [Error Grouping](https://docs.testdino.com/guides/playwright-error-grouping): Find patterns across failures - [Visual Testing](https://docs.testdino.com/guides/playwright-visual-testing): Screenshot comparison setup - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Upload options reference --- ## Playwright Error Grouping in TestDino > Source: https://docs.testdino.com/guides/playwright-error-grouping > Description: Group Playwright test failures by root cause. Fix 1 error, resolve 10+ failing tests. TestDino groups errors by message to help identify the common cause and fix it efficiently. ## Quick Reference TestDino groups errors at two levels. Use this table to understand how grouping works and locate each view. | Grouping Level | What It Shows | | :--- | :--- | | [Error message](#view-error-groups) | Tests that failed with the same error text | | [Error category](#view-error-categories) | Tests grouped by error type | ## How Error Grouping Works When multiple tests fail with similar errors, TestDino groups failures by error message. TestDino matches errors by: * Error message text * Stack trace patterns * Failure location in code Similar errors appear as a single group with a count of affected tests. ## View Error Groups 1. Open a test run 2. Go to the **Errors** tab 3.
Expand an error group to see all affected tests ![Error grouping view showing errors grouped by message with affected test counts](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/error/error-all.webp) ## View Error Categories ### Go to Analytics Navigate to the Analytics section ### Open the Errors tab Click on the Errors tab to view error patterns ### View error patterns View error patterns across multiple runs ![Analytics errors view showing error patterns across runs](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-tiles.webp) > **Note:** Analytics shows error trends over time. A new error group appearing after a deployment indicates a regression. TestDino classifies errors into categories: | Category | Description | | :--- | :--- | | Assertion Failures | Expected values did not match actual values | | Timeout Issues | Actions or waits exceeded time limits | | Element Not Found | Locators did not resolve to elements | | Network Issues | HTTP requests failed or returned errors | | JavaScript Errors | Runtime errors in browser or test code | | Browser Issues | Browser launch, context, or rendering problems | | Other Failures | Errors outside the above categories | Filter by category to focus on specific error types. ## Error Analytics Open **Analytics → Errors** for trends over time: [**Error Message Over Time**](/platform/analytics/errors#error-message-over-time)**:** Line graph showing error frequency by category. Identify spikes and trends. ![Error trends chart showing error frequency over time by category](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-message-over-time.webp) [**Error Categories Table**](/platform/analytics/errors#error-categories)**:** Breakdown of errors by type with occurrence counts, affected tests, and first/last detected dates. ![Error categories table showing breakdown by type with occurrence counts](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-message.webp) Click any error to see all affected test cases. ## Common Error Patterns **Element Not Found** `Error: locator.click: Error: strict mode violation` Multiple tests targeting the same element fail when the selector breaks. Fix the selector once. **Timeout** `Error: Timeout 30000ms exceeded` Often indicates a shared dependency: slow API, missing service, or environment issue. Check what the affected tests have in common. **Assertion Failure** `Error: expect(received).toBe(expected)` Same assertion failing across tests may indicate a data issue or application bug affecting multiple pages. **Network Error** `Error: net::ERR_CONNECTION_REFUSED` Service unavailable. All tests depending on that service fail together. ## Create Tickets from Error Groups When an error group needs attention: ### Open the error group Navigate to the error group you want to address ### Raise an issue Click **Raise Bug** or **Raise Issue** ### Select your tool Select Jira, Linear, Asana, or monday ### Review and submit TestDino pre-fills the ticket with error details and the affected test count > **Tip:** The ticket links back to the error group for context. ## Related Trace viewer, visual evidence, and debug overview. 
- [Trace Viewer](https://docs.testdino.com/guides/playwright-trace-viewer): Step-by-step execution analysis - [Visual Evidence](https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence): Screenshots and videos - [Error Analytics](https://docs.testdino.com/platform/analytics/errors): Error trends over time --- ## Playwright CI Optimization with TestDino > Source: https://docs.testdino.com/guides/playwright-ci-optimization > Description: Cut Playwright CI time and cost. Use caching, selective reruns, and smart parallelization across GitHub Actions, GitLab, Azure. TestDino optimizes Playwright test execution in CI pipelines by caching test results and enabling selective reruns. ## Quick Reference | Strategy | Description | | :--- | :--- | | [Rerun Only Failed Tests](/guides/rerun-failed-playwright-tests) | Cache results and rerun only failed tests. Reduces CI time by 40-60%. | ## The Problem Running full test suites on every commit increases time and cost: * **Long feedback loops** - Full suites take 10-30+ minutes, delaying developer feedback * **High CI costs** - CI minutes accumulate quickly across teams and branches * **Slow deployments** - Waiting for full test runs slows release velocity * **Flaky test noise** - Re-running everything amplifies flaky test impact and blocks merges ## How TestDino Extends Playwright Playwright `@playwright/test@1.50+` includes native `--last-failed` support. TestDino adds: - **Cross-runner caching** - Results persist across different CI runners - **Shard awareness** - Works with parallelized test execution - **Workflow-level persistence** - Cache survives job restarts - **Branch/commit tracking** - Results tied to specific code changes ## Related Re-run failed tests, GitHub Actions, and status checks. - [GitHub Actions](https://docs.testdino.com/guides/playwright-github-actions): Set up Playwright tests in GitHub Actions - [GitHub Status Checks](https://docs.testdino.com/guides/github-status-checks): Configure PR status checks - [TeamCity](https://docs.testdino.com/guides/playwright-teamcity): Integrate with TeamCity CI - [Flaky Tests](https://docs.testdino.com/guides/playwright-flaky-test-detection): Detect and fix flaky tests --- ## Re-run Failed Tests > Source: https://docs.testdino.com/guides/rerun-failed-playwright-tests > Description: Re-run only failed Playwright tests instead of the full suite using TestDino. Works with GitHub Actions, GitLab CI, and Azure DevOps pipelines. When a CI run fails, re-running the entire job wastes time on tests that have already passed. This workflow runs only the failed tests on subsequent attempts. **Example** Your test suite has **500 E2E tests**. - A full run takes **1 hour** - **50 tests fail** and take about **6 minutes** to run Re-running the entire suite means executing **450 passing tests again**. With this approach, only the **50 failed tests** are re-run. **Result:** You save approximately **54 minutes per re-run**.
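The table below prices out this example on GitHub-hosted runners. As a quick sanity check, here is the same arithmetic as a short sketch (runner names and per-minute rates are taken from the pricing note under the table):

```typescript
// GitHub-hosted runner per-minute rates (see the pricing note below the table).
const ratePerMinute: Record<string, number> = {
  linux: 0.006,
  windows: 0.01,
  macos: 0.062,
};

// One rerun: 60 min full suite vs 6 min failed-only.
const minutesSavedPerRerun = 60 - 6; // 54 minutes

for (const [runner, rate] of Object.entries(ratePerMinute)) {
  // e.g. linux: 54 * 0.006 = $0.324 saved per rerun
  console.log(`${runner}: $${(minutesSavedPerRerun * rate).toFixed(3)} saved per rerun`);
}
```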
| Scenario | Full suite rerun time | Failed only rerun time | Time saved | Cost saved on Linux | Cost saved on Windows | Cost saved on macOS | | --- | --- | --- | --- | --- | --- | --- | | Per rerun | 60 min | 6 min | 54 min | $0.324 | $0.54 | $3.348 | | Per day (10 reruns) | 600 min | 60 min | 540 min (9 hrs) | $3.24 | $5.40 | $33.48 | | Per 30-day month (10 reruns daily) | 18,000 min | 1,800 min | 16,200 min (270 hrs) | $97.20 | $162.00 | $1,004.40 | **Note:** Pricing is based on GitHub's official [Actions runner pricing](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions#per-minute-rates) for GitHub-hosted runners: - **Linux (2-core):** $0.006 per minute - **Windows (2-core):** $0.010 per minute - **macOS:** $0.062 per minute This guide covers: * Sharded Playwright execution * Re-run failed jobs in GitHub Actions * Failed-tests-only re-runs when failure metadata exists * Full-suite fallback when no test metadata exists ## Quick Reference | Task | Command/Action | Link | | :--- | :--- | :--- | | Run full test suite | `npx playwright test` | - | | Cache failed test metadata | `npx tdpw cache` | [Setting up cache](#step-4-confirm-caching) | | Get last failed tests | `npx tdpw last-failed` | [How it works](#how-it-works) | | Re-run detection | Check `github.run_attempt` | [Workflow logic](#how-the-workflow-logic-works) | | Upload HTML report | `actions/upload-artifact@v4` | [Quick start](#quick-start) | ## How it works This workflow uses two TestDino CLI commands: * `npx tdpw cache` stores failed test metadata after a run * `npx tdpw last-failed` returns Playwright arguments for the last failed tests The workflow detects re-runs using `github.run_attempt`: | Condition | Behavior | | :--- | :--- | | `run_attempt` = 1 | Run full test suite | | `run_attempt` > 1, flags found | Run failed tests only | | `run_attempt` > 1, no flags | Run full test suite (fallback) | > **Note:** **Playwright compatibility:** Since [`@playwright/test@1.50+`](https://playwright.dev/docs/release-notes#version-150), Playwright has native `--last-failed` support. TestDino extends this with cross-runner caching, shard awareness, and workflow-level persistence. ## Quick start Add this logic to your Playwright step: ```yaml - name: Run Playwright with rerun logic env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} run: | echo "Run attempt: ${{ github.run_attempt }}" if [[ "${{ github.run_attempt }}" -gt 1 ]]; then npx tdpw last-failed > flags.txt FLAGS="$(cat flags.txt)" if [[ -n "$FLAGS" ]]; then echo "Re-running only failed tests" eval "npx playwright test $FLAGS" exit 0 fi fi echo "Running full test suite" npx playwright test - name: Upload HTML report uses: actions/upload-artifact@v4 with: name: Playwright Test Report path: ./playwright-report retention-days: 14 - name: Cache failed test metadata if: always() env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} run: npx tdpw cache ``` The `cache` step runs with the `if: always()` condition, so failures are recorded even when tests fail. > **Tip:** Need to set up your API token? See [Generate API Keys](/guides/generate-api-keys) for instructions. ## How to re-run failed tests in GitHub Actions? ### Open the failed job In the GitHub Actions run, find the failed job for Playwright: 1. Go to your GitHub repository's **Actions** tab. 2. Select the failed workflow run. 3. Under **Jobs**, open the job where the Playwright tests run and fail. > **Note:** This is the main job that runs Playwright with sharding.
![Step 1: Open failed job](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/step-1.webp) ### Check the run attempt Inside the job, open the Playwright execution step. In the logs, you can see: * `GitHub run attempt: ` * Re-run detection output ![Step 2: Check run attempt](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/step-2.webp) ### Identify the failed tests Scroll to the end of the Playwright output. GitHub Actions shows: * Total failed test count * Failed test titles ![Step 3: Identify failed tests](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/step-3.webp) ### Confirm caching Open the **Cache failed test metadata** step. Look for: ``` Cache data submitted successfully ``` This confirms TestDino stored the failure metadata. ![Step 4: Confirm caching](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/Step-4.webp) ### Confirm Re-run Re-run the failed pipeline and confirm that only the last failed tests execute. * The workflow detects `run_attempt > 1` * `tdpw last-failed` retrieves the failed tests * Only those tests are executed ![Step 5: Confirm re-run part 1](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/step-5.1.webp) ![Step 5: Confirm re-run part 2](https://testdinostr.blob.core.windows.net/docs/docs/guides/re-run-failed/step-5.2.webp) **Example: Sharded test execution** **First run:** 12 tests across 3 shards. Two shards fail. | Shard | Tests | Result | | :--- | :--- | :--- | | Shard 1 | 4 tests | 2 Passed, 2 Failed | | Shard 2 | 4 tests | Passed | | Shard 3 | 4 tests | 3 Passed, 1 Failed | **Re-run comparison:** **Without TestDino:** ``` ├── Shard 1: runs 4 tests ├── Shard 3: runs 4 tests └── Total: 8 tests ``` Re-runs all tests in failed shards, including tests that already passed. **With TestDino:** ``` ├── Shard 1: runs 2 failed tests ├── Shard 3: runs 1 failed test └── Total: 3 tests ``` Re-runs only the 3 tests that failed, not all 8 tests in those shards. ## Full workflow For a complete workflow with sharding and report merging, see the [example repository](https://github.com/testdino-hq/playwright-sample-tests-javascript).
**View full workflow YAML** **File name:** `.github/workflows/playwright.yml` **Repository:** `github.com/testdino-hq/playwright-sample-tests-javascript` ```yaml name: Run Playwright tests on: push: pull_request: schedule: - cron: '30 3 * * 1-5' # 9:00 AM IST (03:30 UTC) workflow_dispatch: jobs: run-tests: name: Run Playwright tests ${{ matrix.shardIndex }}/3 runs-on: ubuntu-latest strategy: fail-fast: false matrix: shardIndex: [1, 2, 3] shardTotal: [3] steps: - uses: actions/checkout@v4 # REQUIRED: Node 20 - name: Setup Node.js 20.x uses: actions/setup-node@v3 with: node-version: '20' - name: Create .env file run: | echo "USERNAME=${{ secrets.USERNAME }}" >> .env echo "PASSWORD=${{ secrets.PASSWORD }}" >> .env echo "NEW_PASSWORD=${{ secrets.NEW_PASSWORD }}" >> .env echo "FIRST_NAME=${{ secrets.FIRST_NAME }}" >> .env echo "STREET_NAME=${{ secrets.STREET_NAME }}" >> .env echo "CITY=${{ secrets.CITY }}" >> .env echo "STATE=${{ secrets.STATE }}" >> .env echo "COUNTRY=${{ secrets.COUNTRY }}" >> .env echo "ZIP_CODE=${{ secrets.ZIP_CODE }}" >> .env - name: Cache npm dependencies uses: actions/cache@v3 with: path: ~/.npm key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} restore-keys: | ${{ runner.os }}-node- - name: Install deps + browsers run: | npm ci npx playwright install --with-deps chromium firefox webkit # FULL + RERUN LOGIC - name: Run Playwright (rerun failed tests if applicable) env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} SHARD_INDEX: ${{ matrix.shardIndex }} SHARD_TOTAL: ${{ matrix.shardTotal }} run: | echo "GitHub run attempt: ${{ github.run_attempt }}" if [[ "${{ github.run_attempt }}" -gt 1 ]]; then echo "Detected re-run. Checking failed test metadata from TestDino." npx tdpw last-failed \ --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} \ > last-failed-flags.txt EXTRA_PW_FLAGS="$(cat last-failed-flags.txt)" if [[ -n "$EXTRA_PW_FLAGS" ]]; then echo "Running only failed tests for this shard:" echo "$EXTRA_PW_FLAGS" # IMPORTANT: JSON + BLOB BOTH REQUIRED # Ensure playwright-report directory exists mkdir -p ./playwright-report eval "npx playwright test $EXTRA_PW_FLAGS" exit 0 fi echo "No failed test metadata found. Falling back to full shard."
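# No cached failures were returned for this shard; execution falls through to the full-shard run below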
fi # First run (full shard) # Ensure playwright-report directory exists mkdir -p ./playwright-report npx playwright test \ --grep="@chromium|@firefox|@webkit|@android|@ios" \ --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} - name: Upload blob report if: ${{ !cancelled() }} uses: actions/upload-artifact@v4 with: name: blob-report-${{ matrix.shardIndex }} path: ./blob-report retention-days: 1 # THIS WILL SHOW "Metadata cached successfully" WHEN JSON EXISTS - name: Cache tdpw last failed metadata if: always() env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} SHARD_INDEX: ${{ matrix.shardIndex }} SHARD_TOTAL: ${{ matrix.shardTotal }} run: | npx tdpw cache --verbose merge-reports: name: Merge Reports needs: run-tests if: always() runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 # Node 20 here as well - name: Setup Node.js 20.x uses: actions/setup-node@v3 with: node-version: '20' - name: Cache npm dependencies uses: actions/cache@v3 with: path: ~/.npm key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} restore-keys: | ${{ runner.os }}-node- - name: Install deps + browsers run: | npm ci npx playwright install --with-deps - name: Download all blob reports uses: actions/download-artifact@v4 with: path: ./all-blob-reports pattern: blob-report-* merge-multiple: true - name: Merge HTML & JSON reports run: npx playwright merge-reports --config=playwright.config.js ./all-blob-reports - name: Upload combined report uses: actions/upload-artifact@v4 with: name: Playwright Test Report path: ./playwright-report retention-days: 14 - name: Send TestDino report run: | npx --yes tdpw upload ./playwright-report \ --token="${{ secrets.TESTDINO_TOKEN }}" \ --upload-html \ --upload-traces \ --verbose ``` ## How the workflow logic works The workflow uses a conditional check based on `github.run_attempt`: ```bash if [[ "${{ github.run_attempt }}" -gt 1 ]]; then # Re-run: get failed tests from TestDino else # First run: execute full shard fi ``` * On the first attempt (`run_attempt == 1`), the full shard is executed. * On subsequent attempts, the workflow tries to re-run only previously failed tests. **What `last-failed` returns** The `tdpw last-failed` command outputs test filters formatted for Playwright: ```bash -g "Verify that a New User Can Successfully Complete the Journey from Registration to a Single Order Placement @chromium|test @chromium" ``` These flags can be passed directly to `npx playwright test`. **Why `eval` is used** The command uses `eval` to handle quoted arguments in the test filter: ```bash eval "npx playwright test $EXTRA_PW_FLAGS" ``` This keeps the `-g "pattern"` quoting intact when passed to Playwright. ## Edge cases **Pipeline fails before tests run** If a job fails during dependency installation or setup, no test metadata exists. **On re-run:** 1. `tdpw last-failed` returns nothing 2. The workflow detects the empty result 3. The job runs the full shard **No manual action required**. The workflow handles this case automatically: ```bash if [[ -n "$EXTRA_PW_FLAGS" ]]; then # Failed tests found, run only those eval "npx playwright test $EXTRA_PW_FLAGS" else echo "No failed test metadata found." echo "Running full shard instead."
fi ``` The workflow then falls back to the normal full-shard run: ```bash mkdir -p ./playwright-report npx playwright test \ --grep="@chromium|@firefox|@webkit|@android|@ios" \ --shard=${{ matrix.shardIndex }}/${{ matrix.shardTotal }} ``` **Skipped tests due to `--max-failures`** Playwright's `--max-failures` option stops test execution after N failures. For example: ```bash npx playwright test --max-failures=2 ``` Playwright stops after 2 failures. **Tests that did not run are not recorded.** On re-run, only the failed tests execute. Skipped tests are not currently included. ## Related CI overview, GitHub Actions, status checks, and flaky tests. - [CI Optimization Overview](https://docs.testdino.com/guides/playwright-ci-optimization): Learn about CI optimization strategies - [GitHub Actions](https://docs.testdino.com/guides/playwright-github-actions): Set up Playwright tests in GitHub Actions - [GitHub Status Checks](https://docs.testdino.com/guides/github-status-checks): Configure PR status checks - [Flaky Tests](https://docs.testdino.com/guides/playwright-flaky-test-detection): Detect and fix flaky tests --- ## Playwright Real-Time Reporting > Source: https://docs.testdino.com/guides/playwright-real-time-test-streaming > Description: Stream Playwright test results live to TestDino over WebSocket. Monitor test progress, pass/fail counts, and worker activity in real time during CI runs. Real-time streaming delivers Playwright test results to the TestDino dashboard as each test completes. A WebSocket connection pushes live progress, pass/fail counts, and per-worker activity directly to the Test Runs page. > **Warning:** Real-time streaming is **experimental**. Data loss may occur under certain conditions. Use the stable [`tdpw upload`](/cli/testdino-playwright-nodejs) CLI for production pipelines. ## Quick Reference | Topic | Link | | :--- | :--- | | [Enable streaming](#enable-real-time-streaming) | Toggle on the Test Runs page | | [Setup](#setup) | Install reporter and stream tests | | [WebSocket status](#websocket-status) | Connection indicator and states | | [Multi-tab support](#multi-tab-support) | BroadcastChannel coordination | | [Limitations](#known-limitations) | Experimental caveats | | [FAQ](#faq) | Common questions | ## Enable Real-Time Streaming A toggle switch on the Test Runs page header controls the streaming mode. Flip it to enable live updates. | Setting | Detail | | :--- | :--- | | Default state | OFF | | Persistence | Saved to localStorage, persists across sessions | | Experimental badge | Displays next to the toggle when enabled | When streaming is enabled: - The **Active Test Runs** section renders at the top of the Test Runs page, showing runs currently executing. - A **WebSocket status badge** indicates connection state. - The onboarding setup guide switches to the **Real-time** tab. ## Setup Install the `@testdino/playwright` reporter and run tests with streaming enabled. ### Install the reporter ```bash npm install @testdino/playwright ``` ### Run tests with streaming ```bash npx tdpw test -t "your-api-token" ``` The `tdpw test` command wraps `npx playwright test`, opens a WebSocket connection, and streams each result to TestDino. All Playwright CLI options pass through. ### Monitor on the dashboard Open the [Test Runs](/platform/playwright-test-runs) page. Active runs appear at the top with a live progress bar, pass/fail/skip counts, and per-worker detail.
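Because `tdpw test` passes all Playwright CLI options through, you can scope a streamed run exactly as you would a plain `npx playwright test` run. A minimal sketch (the project name and tag are placeholders):

```bash
# Stream only the chromium project, filtered to tests tagged @smoke
npx tdpw test -t "your-api-token" --project=chromium --grep "@smoke"
```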
## WebSocket Status When real-time streaming is enabled, a status badge appears on the Test Runs page indicating the connection state. | Status | Meaning | | :--- | :--- | | **Connecting** | Establishing WebSocket connection to TestDino | | **Online** | Connected and receiving live updates | | **Offline** | Connection lost; updates are paused | | **Disconnected** | WebSocket closed; toggle streaming off and on to reconnect | The badge is hidden when streaming is disabled. ## Active Test Runs With streaming enabled, the Test Runs page displays a collapsible **Active Test Runs** section. Each active run shows a progress bar, live result counts, commit, branch, and CI source. ![Active test runs section showing live progress and result counts](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/sharded-runs.webp) For sharded runs, the run is labeled **SHARDED** with tabs for each shard. Select a tab to view its workers and currently executing tests. ## Multi-Tab Support Only one browser tab opens a WebSocket connection to TestDino. This tab acts as the **primary tab**. Additional tabs receive updates through the browser's [BroadcastChannel API](https://developer.mozilla.org/en-US/docs/Web/API/BroadcastChannel). - If you close the primary tab, another open tab promotes itself and opens a new WebSocket. - All tabs display the same live data regardless of which tab holds the connection. ## Known Limitations | Limitation | Detail | | :--- | :--- | | Experimental status | Some edge cases in connection recovery are still being refined | | Browser support | Requires a browser that supports WebSocket and BroadcastChannel (all modern browsers) | | Single project | Each WebSocket connection is scoped to one project at a time | | LocalStorage | Streaming preference is per-browser, not synced across devices | ## FAQ **Will my test data be lost if I toggle streaming off?** No. Toggling streaming off does not affect historical data. All previously recorded runs, test cases, and analytics remain intact. The toggle only changes how new results are delivered to the dashboard. **What happens if the WebSocket disconnects mid-run?** The reporter continues sending results. When the dashboard reconnects, it catches up to the current state. No test data is lost on the server side. ## Related CLI reference, test runs, and CI integration. - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Full CLI reference and configuration options - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View and filter completed and active test runs - [CI Integration](https://docs.testdino.com/guides/playwright-github-actions): Configure GitHub Actions for automated test runs - [Getting Started](https://docs.testdino.com/getting-started): Set up TestDino and run your first tests --- ## Playwright Code Coverage > Source: https://docs.testdino.com/guides/playwright-code-coverage > Description: Collect, visualize, and track Playwright code coverage over time in TestDino. Monitor coverage trends across branches, environments, and test runs. Code coverage shows how much of your application code runs during tests. TestDino collects coverage from your Playwright tests, merges data across shards, and displays a per-file breakdown on the dashboard. > **Warning:** Code coverage requires the `@testdino/playwright` streaming reporter, which is currently **experimental**. The standard `tdpw upload` CLI does not support coverage collection.
See [Real-Time Streaming](/guides/playwright-real-time-test-streaming) for setup. ## Quick Reference | Topic | Link | | :--- | :--- | | [Coverage metrics](#coverage-metrics) | Statements, branches, functions, lines | | [Instrument your app](#instrument-your-application) | babel-plugin-istanbul, nyc, vite-plugin-istanbul | | [Enable coverage](#enable-coverage) | CLI flags and reporter config | | [Coverage fixture](#use-the-coverage-fixture) | Auto-fixture and manual fixture | | [Sharded runs](#sharded-runs) | Merge coverage across CI shards | | [Data handling](#data-handling) | What gets uploaded and stored | | [Troubleshooting](#troubleshooting) | Common issues and fixes | ## Coverage Metrics TestDino tracks four standard coverage metrics: | Metric | What it measures | | :--- | :--- | | **Statements** | How many individual code statements ran | | **Branches** | How many `if`/`else` paths were taken (both the true and false side) | | **Functions** | How many functions were called at least once | | **Lines** | How many source lines ran | ## Prerequisites - `@testdino/playwright` installed ([CLI reference](/cli/testdino-playwright-nodejs)) - TestDino API token ([generate one](/guides/generate-api-keys)) - Your application instrumented with [Istanbul](https://istanbul.js.org/) so `window.__coverage__` is available in the browser ## Instrument Your Application Instrumentation adds small tracking counters around every statement, branch, and function in your code. When your app runs, these counters record what was executed. The result is stored in a global `window.__coverage__` object that TestDino reads after each test. > **Warning:** Instrumented builds are for testing only. Never deploy them to production. Instrumentation slows performance by 10-30%, increases bundle size by 2-3x, and exposes your source code file structure. Pick the method that matches your build tool: **babel-plugin-istanbul:** Best for React apps using Babel (Create React App, Next.js with Babel, etc.). Install the plugin: ```bash npm install -D babel-plugin-istanbul ``` Add it to your Babel config so it only runs during test builds: ```json babel.config.json { "env": { "test": { "plugins": ["istanbul"] } } } ``` Build with instrumentation enabled: ```bash NODE_ENV=test npm run build ``` **vite-plugin-istanbul:** Best for Vite-based apps (Vite + React, Vite + Vue, etc.). Install the plugin: ```bash npm install -D vite-plugin-istanbul ``` Add it to your Vite config. The `requireEnv: true` option ensures instrumentation only activates when you set `VITE_COVERAGE=true`: ```typescript vite.config.ts import { defineConfig } from 'vite'; import istanbul from 'vite-plugin-istanbul'; export default defineConfig({ plugins: [ istanbul({ include: 'src/*', exclude: ['node_modules', 'test/'], extension: ['.js', '.ts', '.tsx'], requireEnv: true, }), ], }); ``` Build with instrumentation enabled: ```bash VITE_COVERAGE=true npm run build ``` **nyc instrument:** Best when you need a standalone tool without modifying your build config. Install nyc: ```bash npm install -D nyc ``` Instrument your source into a separate directory: ```bash npx nyc instrument src instrumented-src ``` Serve the `instrumented-src` directory for your test runs. ### Recommended Build Strategy Only use instrumented builds for test environments. All other environments should use regular builds.
| Environment | Build Type | `NODE_ENV` | Purpose | | :--- | :--- | :--- | :--- | | Development | Regular | `development` | Local development | | Testing / QA | Instrumented | `test` | E2E tests with coverage | | Staging | Regular | `production` | Pre-production validation | | Production | Regular | `production` | Live users | ## Enable Coverage There are two ways to enable coverage: CLI flags or reporter config. Both produce the same result. ### CLI Flags Pass coverage flags directly to `tdpw test`: ```bash npx tdpw test --coverage ``` Generate a local HTML report alongside the dashboard upload: ```bash npx tdpw test --coverage --coverage-report --coverage-report-dir=./coverage-report ``` | Flag | Description | | :--- | :--- | | `--coverage` | Enable coverage collection | | `--coverage-report` | Generate a local Istanbul HTML report | | `--coverage-report-dir` | Output directory for the local report (default: `./coverage`) | CLI flags override reporter config options. See the [Node.js CLI reference](/cli/testdino-playwright-nodejs) for all available flags. ### Reporter Config Add the `coverage` option to the `@testdino/playwright` reporter in your Playwright config: ```typescript playwright.config.ts import { defineConfig } from '@playwright/test'; export default defineConfig({ reporter: [ ['list'], ['@testdino/playwright', { token: process.env.TESTDINO_TOKEN, coverage: { enabled: true, localReport: true, localReportDir: './coverage-report', }, }], ], projects: [ { name: 'chromium', use: { browserName: 'chromium' } }, { name: 'firefox', use: { browserName: 'firefox' } }, { name: 'webkit', use: { browserName: 'webkit' } }, ], use: { baseURL: 'http://localhost:3000', }, }); ``` ### Coverage Options | Option | Type | Default | Description | | :--- | :--- | :--- | :--- | | `enabled` | `boolean` | `false` | Turn on coverage collection | | `localReport` | `boolean` | `false` | Generate a local Istanbul HTML report | | `localReportDir` | `string` | `./coverage` | Output directory for the local report | ## Use the Coverage Fixture The fixture is the piece that reads `window.__coverage__` from the browser after each test finishes. The reporter then merges all the collected data and sends it to TestDino. **Auto-fixture (Recommended):** Change your import from `@playwright/test` to `@testdino/playwright`. Everything else stays the same: ```typescript tests/example.spec.ts import { test, expect } from '@testdino/playwright'; test('homepage loads', async ({ page }) => { await page.goto('/'); await expect(page.locator('h1')).toBeVisible(); }); ``` Coverage collection runs automatically after each test. No other code changes needed.
**Manual fixture:** If you already have a custom test fixture setup, extend it with coverage: ```typescript tests/fixtures.ts import { test as base } from '@playwright/test'; import { coverageFixtures } from '@testdino/playwright'; export const test = base.extend(coverageFixtures); export { expect } from '@playwright/test'; ``` Then use your extended test in spec files: ```typescript tests/example.spec.ts import { test, expect } from './fixtures'; test('homepage loads', async ({ page }) => { await page.goto('/'); await expect(page.locator('h1')).toBeVisible(); }); ``` ## Run Tests ### Start the instrumented application Run your app using the instrumented test build: ```bash NODE_ENV=test npm start ``` ### Set your API token ```bash export TESTDINO_TOKEN="your-api-token" ``` ### Run Playwright tests ```bash npx playwright test ``` ### Review results After tests complete, the console prints a coverage summary table. If `localReport` is enabled, open `./coverage-report/index.html` for the full Istanbul HTML report. Open the test run in [TestDino](https://app.testdino.com) and select the **Coverage** tab to see overall metrics and a per-file breakdown. ## Sharded Runs When you split tests across multiple CI shards, each shard collects coverage only for the tests it runs. TestDino merges all shard data into one combined report. **Step 1:** Set a CI run ID so all shards group into one test run: ```bash export TESTDINO_CI_RUN_ID=$CI_PIPELINE_ID ``` **Step 2:** Run each shard: ```bash npx playwright test --shard=1/3 npx playwright test --shard=2/3 npx playwright test --shard=3/3 ``` **How merging works:** The server takes the union of covered lines from every shard. If shard 1 covers lines 1-50 of `auth.ts` and shard 2 covers lines 30-80, the merged report shows lines 1-80 as covered. ### Example CI Workflow ```yaml .github/workflows/coverage.yml name: Playwright Coverage on: [push, pull_request] jobs: test: runs-on: ubuntu-latest strategy: fail-fast: false matrix: shard: [1/3, 2/3, 3/3] steps: - uses: actions/checkout@v4 with: fetch-depth: 0 - uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Install Playwright run: npx playwright install --with-deps - name: Build for coverage run: NODE_ENV=test npm run build - name: Run tests env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} run: npx playwright test --shard=${{ matrix.shard }} ``` ## Data Handling TestDino uploads only coverage metrics (percentages and hit counts per file). No source code, raw coverage maps, or `window.__coverage__` payloads are stored on the server. | Data | Stored on server | Purpose | | :--- | :--- | :--- | | Per-run summary (statements, branches, functions, lines) | Yes | Dashboard summary tiles | | Per-file metrics (path, percentages, hit counts) | Yes | File-by-file breakdown and trends | | Source code | No | Never leaves your machine | | Raw Istanbul coverage maps | No | Processed locally, then discarded | > **Note:** When `window.__coverage__` is not present in the browser, the reporter skips coverage collection and proceeds normally. Tests run without any performance impact. ## Troubleshooting **No coverage data collected** Open your app in the browser and type `window.__coverage__` in the DevTools console. If it returns `undefined`, your app is not instrumented. 
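You can also run the same check from a Playwright test instead of the DevTools console. A minimal sketch (the route and test name are our own):

```typescript
import { test, expect } from '@playwright/test';

test('app build is instrumented for coverage', async ({ page }) => {
  await page.goto('/');
  // Istanbul-instrumented builds expose hit counters on window.__coverage__.
  const coverage = await page.evaluate(() => (window as any).__coverage__);
  expect(coverage, 'window.__coverage__ is undefined: the build is not instrumented').toBeDefined();
});
```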
Check that: - Your build command uses `NODE_ENV=test` or `VITE_COVERAGE=true` - `coverage.enabled` is set to `true` in the reporter config - Your test files import `test` from `@testdino/playwright` (not `@playwright/test`) **Coverage only from one browser** This is expected when `coverage.projects` is set to a single browser (for example, `['chromium']`). To collect coverage from all browsers, remove the `projects` option. **Missing files in coverage report** Only files that run during tests appear in the report. If a file shows 0% or is missing, no test triggered the code path that imports it. To include all source files (even untouched ones), configure your Istanbul tool's `include`/`exclude` settings to cover the full source directory. **Branch coverage is lower than line coverage** This is normal. Branch coverage counts both sides of every `if`/`else`. If your tests only exercise one side (for example, the success path but not the error path), branch coverage drops while line coverage stays higher. **Debug coverage collection** Enable debug logging to see what the reporter collects: ```typescript ['@testdino/playwright', { token: process.env.TESTDINO_TOKEN, debug: true, coverage: { enabled: true }, }] ``` Check the console for messages prefixed with `[TestDino]`. ## Related Set up coverage, view per-run reports, and configure CI. - [Test Run Coverage](https://docs.testdino.com/platform/playwright-test-runs/coverage): Per-run coverage breakdown by file - [Coverage Analytics](https://docs.testdino.com/platform/analytics/playwright-code-coverage): Coverage trends across runs and environments - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Reporter configuration and CLI options - [CI Integration](https://docs.testdino.com/guides/playwright-github-actions): Set up Playwright in GitHub Actions --- ## Playwright Test Annotations in TestDino > Source: https://docs.testdino.com/guides/playwright-test-annotations > Description: Use Playwright annotations to attach metadata, trigger Slack alerts, and report custom metrics directly from test code in TestDino dashboards. Annotations let you attach metadata directly to your Playwright tests. You can tag each test with a priority, feature area, owner, related ticket link, a Slack channel or user to notify on failure, and custom metrics like page load time or API latency. TestDino picks up these annotations and displays them in the UI next to each test case. ## Quick Reference | Topic | Link | | :--- | :--- | | [Supported annotations](#supported-annotations) | All annotation types TestDino recognizes | | [Add annotations](#add-annotations-to-tests) | How to write annotations in your test code | | [Custom metrics](#custom-metrics) | Track performance and business metrics per test | | [View in TestDino](#view-annotations-in-testdino) | Where annotations show up in the UI | | [Slack notifications](#annotation-based-slack-notifications) | How `testdino:notify-slack` triggers alerts | | [Configure Slack mapping](#configure-annotation-slack-mapping) | Connect annotation targets to Slack channels or users | ## Supported Annotations Annotations use the standard Playwright `annotation` array. All types use the `testdino:` prefix.
| Annotation Type | Example Value | What it does | | :--- | :--- | :--- | | `testdino:priority` | `p0`, `p1`, `p2`, `p3` | Tags the test with a priority level | | `testdino:feature` | `Navbar`, `Cart`, `Checkout` | Tags the feature area this test covers | | `testdino:link` | Jira, Linear, or any URL | Links to a related ticket or document | | `testdino:owner` | `qa-team`, `@ashish` | Identifies who owns or maintains this test | | `testdino:notify-slack` | `#e2e-alerts`, `@ashish` | Notifies a Slack channel or user when this test fails | | `testdino:context` | Free-text description | Adds context that other testers need to know | | `testdino:flaky-reason` | `Upload feature depends on file size` | Documents a known reason the test is flaky | | `testdino:metric` | JSON with `name`, `value`, `unit` | Tracks a custom numeric metric per test run ([details](#custom-metrics)) | > **Note:** `testdino:notify-slack` triggers Slack notifications when configured. `testdino:metric` tracks numeric values over time with charts. All other annotation types display in the TestDino UI for reference. ## Add Annotations to Tests Add the `annotation` array to any Playwright test. Each entry has a `type` (the annotation name) and a `description` (the value): ```typescript tests/navbar.spec.ts import { test, expect } from '@playwright/test'; test('Verify navbar', { annotation: [ { type: 'testdino:priority', description: 'p0' }, { type: 'testdino:feature', description: 'Navbar' }, { type: 'testdino:link', description: 'https://jira.example.com/NAVBAR-1' }, { type: 'testdino:owner', description: 'qa-team' }, { type: 'testdino:notify-slack', description: '@ashish' }, ], }, async ({ page }) => { await page.goto('/'); await expect(page.locator('nav')).toBeVisible(); }); ``` You can notify multiple channels and users from a single annotation by separating them with commas: ```typescript { type: 'testdino:notify-slack', description: '#e2e-alerts,#qa-channel,@ashish,@vishwas' } ``` For better readability, use separate entries: ```typescript annotation: [ { type: 'testdino:notify-slack', description: '#e2e-alerts' }, { type: 'testdino:notify-slack', description: '#qa-channel' }, { type: 'testdino:notify-slack', description: '@ashish' }, ], ``` ### Slack Notification Targets | Format | Example | What happens | | :--- | :--- | :--- | | `#channel-name` | `#e2e-alerts` | Sends the failure alert to that Slack channel | | `@username` | `@ashish` | Sends the failure alert directly to that Slack user | | Comma-separated | `#e2e-alerts,@ashish` | Notifies multiple targets from one entry | > **Tip:** Separate entries per target are recommended for readability and easier maintenance. ## Custom Metrics The `metric` annotation type tracks custom numeric values across test runs. Unlike other annotations that store text, metrics store structured data (name, value, unit, optional threshold) and render as time-series charts in TestDino. Use metrics to track anything you measure during a test: page load time, API latency, memory usage, bundle size, Lighthouse scores, or business numbers like conversion rate.
### Metric Format The `testdino:metric` annotation uses a JSON string as the `description`: ```typescript { type: 'testdino:metric', description: JSON.stringify({ name: 'page-load-time', // Metric name (keep consistent across runs) value: 1250, // Numeric value for this run unit: 'ms', // Display unit threshold: 2000, // Optional: threshold line on the chart }), } ``` | Field | Required | Description | | :--- | :--- | :--- | | `name` | Yes | Identifier for the metric. Use the exact same name across runs to build a trend line. | | `value` | Yes | Numeric value recorded in this test run | | `unit` | Yes | Display unit shown on the chart and labels | | `threshold` | No | Reference line drawn on the chart. Marks a performance budget or target. | ### Supported Units | Unit | Example use | | :--- | :--- | | `ms` | Page load time, API latency | | `s` | Full test duration, timeout values | | `mb` | Memory usage, bundle size | | `gb` | Large asset sizes | | `%` | Conversion rate, pass rate | | `count` | Error count, API calls per test | | `score` | Lighthouse score, accessibility score | ### Static vs. Runtime Metrics You can set metric values in two ways depending on your use case. **Static values** go in the annotation array at test declaration. Use this for values you know ahead of time or compute before the test: ```typescript tests/static-metric.spec.ts import { test, expect } from '@playwright/test'; test('Homepage check', { annotation: [ { type: 'testdino:metric', description: JSON.stringify({ name: 'lighthouse-score', value: 94, unit: 'score', threshold: 90, }), }, ], }, async ({ page }) => { await page.goto('/'); await expect(page.locator('h1')).toBeVisible(); }); ``` **Runtime values** are measured during the test and pushed with `test.info().annotations.push()`. Use this for performance timings, API latency, or anything captured at execution time: ```typescript tests/performance.spec.ts import { test, expect } from '@playwright/test'; test('Homepage loads within budget', async ({ page }) => { const start = Date.now(); await page.goto('/'); await expect(page.locator('h1')).toBeVisible(); const loadTime = Date.now() - start; test.info().annotations.push({ type: 'testdino:metric', description: JSON.stringify({ name: 'page-load-time', value: loadTime, unit: 'ms', threshold: 2000, }), }); }); ``` > **Tip:** Use `test.info().annotations.push()` for any metric that depends on runtime measurement. The annotation array on the test declaration is evaluated before the test body runs, so it cannot access runtime values. ### Example: Track Multiple Metrics at Runtime A single test can report multiple metrics. Push each one after you capture the value: ```typescript tests/checkout.spec.ts import { test, expect } from '@playwright/test'; test('Checkout flow performance', async ({ page }) => { const start = Date.now(); await page.goto('/checkout'); await expect(page.locator('[data-testid="order-summary"]')).toBeVisible(); const flowTime = Date.now() - start; // Track the checkout flow duration test.info().annotations.push({ type: 'testdino:metric', description: JSON.stringify({ name: 'checkout-flow-time', value: flowTime, unit: 'ms', threshold: 5000, }), }); // Track the number of API calls made during the test test.info().annotations.push({ type: 'testdino:metric', description: JSON.stringify({ name: 'api-calls', value: 12, unit: 'count', }), }); }); ``` ### Common Metric Examples These show the annotation format for different categories. Replace the `value` with your actual measurement.
**Performance:** ```typescript // Page load time { type: 'testdino:metric', description: JSON.stringify({ name: 'page-load-time', value: loadTime, unit: 'ms', threshold: 2000 }) } // API latency { type: 'testdino:metric', description: JSON.stringify({ name: 'api-latency', value: latency, unit: 'ms', threshold: 200 }) } // Memory usage { type: 'testdino:metric', description: JSON.stringify({ name: 'memory-usage', value: memoryMb, unit: 'mb' }) } ``` **Quality:** ```typescript // Lighthouse score { type: 'testdino:metric', description: JSON.stringify({ name: 'lighthouse-score', value: 94, unit: 'score', threshold: 90 }) } // Accessibility score { type: 'testdino:metric', description: JSON.stringify({ name: 'accessibility-score', value: 87, unit: 'score', threshold: 80 }) } // Error count { type: 'testdino:metric', description: JSON.stringify({ name: 'error-count', value: errorCount, unit: 'count' }) } ``` **Resources:** ```typescript // Bundle size { type: 'testdino:metric', description: JSON.stringify({ name: 'bundle-size', value: 2.4, unit: 'mb', threshold: 3.0 }) } // API calls per test { type: 'testdino:metric', description: JSON.stringify({ name: 'api-calls', value: callCount, unit: 'count' }) } // Conversion rate { type: 'testdino:metric', description: JSON.stringify({ name: 'conversion-rate', value: rate, unit: '%' }) } ``` ### How Metrics Display in TestDino Metric values appear on the test case detail page. TestDino plots a time-series chart for each metric name, with the X-axis showing test run timestamps and the Y-axis showing the metric value. If a `threshold` is set, a reference line is drawn on the chart. Filter by metric name to focus on a specific measurement. The chart updates as new test runs report values for that metric. > **Tip:** Keep metric names consistent across runs. Use the exact same `name` string every time (for example, always `page-load-time`, not sometimes `pageLoadTime`). This ensures all data points appear on the same trend line. ## View Annotations in TestDino Once your tests run, annotations appear in two places in TestDino. ### Test Case Detail Open any test case from a test run. Below the KPI tiles, the **Annotations** panel lists every annotation on that test: priority, feature, link, owner, Slack targets, context, and flaky reason. Metric values also appear with their name, value, and unit. ### Detailed Analysis Table In the **Test Runs > Summary > Detailed Analysis** table, each test row has an **Annotations** badge. Click it to expand and see annotation chips (priority, feature, owner, Slack targets) inline with the test result. This makes it easy to scan annotations across all tests in a run without opening each one. ## Annotation-Based Slack Notifications When a test with a `testdino:notify-slack` annotation fails, TestDino sends a Slack alert to the mapped channel or user. This works independently from [test run alerts](/integrations/slack-playwright-test-alerts), which notify on every run completion regardless of annotations. The notification flow: 1. Your test has `testdino:notify-slack` set to `@ashish` or `#e2e-alerts`. 2. The test fails during a run. 3. TestDino looks up the Annotation-Slack mapping in your Slack App configuration. 4. If there is a mapping for that target, the alert goes to the configured Slack destination. > **Warning:** Annotation-based Slack notifications require the [Slack App](/integrations/slack-playwright-test-alerts) to be connected to your project. 
The Slack Webhook integration does not support annotation-based alerts. ## Configure Annotation-Slack Mapping The mapping connects the `testdino:notify-slack` values you write in your test code to actual Slack channels and users in your workspace. ### Connect the Slack App Go to **Project Settings > Integrations > Communication > Slack App** and connect your Slack workspace. See [Slack App setup](/integrations/slack-playwright-test-alerts) if you have not connected yet. ### Open Annotation Alerts tab In the **Slack Notification Configuration** dialog, switch to the **Annotation Alerts** tab. This is where you define which annotation targets map to which Slack destinations. ### Add your mappings For each annotation target in your test code, add a row and pick the Slack channel or user it should notify: | Annotation Target (from your test code) | Slack Channel / User | | :--- | :--- | | `@ashish` | `@ashi-deve` | | `@vishwas` | `@Vishwas Tiwari` | | `#e2e-alerts` | `#td-stage` | Type in the search box and select from the dropdown. The dropdown lists all channels and users from your connected Slack workspace. ### Save the configuration Click **Save**. From now on, when a test with a matching `testdino:notify-slack` annotation fails, the alert is sent to the mapped Slack destination. ### Things to Know - **Mapping is stored at the integration level**, not at the project level. - **Disconnecting Slack removes all mappings.** If you disconnect the Slack App, all Annotation-Slack mappings are deleted. You need to set them up again after reconnecting. - **One test can notify multiple targets.** Add separate `testdino:notify-slack` entries for each channel or user you want to alert. ## Example: Full Annotation Setup This test uses all supported annotation types, including a runtime metric: ```typescript tests/order.spec.ts import { test, expect } from '@playwright/test'; test('New user can place and cancel order', { annotation: [ { type: 'testdino:priority', description: 'p0' }, { type: 'testdino:feature', description: 'Registration to Order' }, { type: 'testdino:link', description: 'https://jira.example.com/PROJ-123' }, { type: 'testdino:owner', description: 'qa-team' }, { type: 'testdino:notify-slack', description: '#ch-td-extra' }, { type: 'testdino:notify-slack', description: '@ashish' }, { type: 'testdino:context', description: 'Uses fixture user: test+fixture-user@example.com' }, { type: 'testdino:flaky-reason', description: 'Delay in API call response' }, ], }, async ({ page }) => { const start = Date.now(); // Test steps: navigate, place order, cancel order await page.goto('/orders/new'); await expect(page.locator('[data-testid="order-confirmed"]')).toBeVisible(); const flowTime = Date.now() - start; test.info().annotations.push({ type: 'testdino:metric', description: JSON.stringify({ name: 'order-flow-time', value: flowTime, unit: 'ms', threshold: 5000, }), }); }); ``` When this test runs: - TestDino shows all annotations in the test case Annotations panel. - The `order-flow-time` metric appears on the test detail page and is plotted on a trend chart across runs. - If the test fails, Slack alerts go to `#ch-td-extra` and `@ashish` (if mapped in the Slack App configuration). - The Detailed Analysis table shows annotation chips for quick scanning across all tests in the run. 
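If many tests push runtime metrics, a small helper keeps the JSON payload consistent across the suite. A sketch (the `pushMetric` helper and its file location are our own illustration, not a TestDino API):

```typescript
// tests/helpers/metrics.ts (hypothetical helper module)
import type { TestInfo } from '@playwright/test';

type MetricUnit = 'ms' | 's' | 'mb' | 'gb' | '%' | 'count' | 'score';

export function pushMetric(
  testInfo: TestInfo,
  name: string,
  value: number,
  unit: MetricUnit,
  threshold?: number,
): void {
  // Same shape as the testdino:metric annotation documented above.
  // JSON.stringify drops `threshold` when it is undefined.
  testInfo.annotations.push({
    type: 'testdino:metric',
    description: JSON.stringify({ name, value, unit, threshold }),
  });
}
```

Inside a test, `pushMetric(test.info(), 'order-flow-time', flowTime, 'ms', 5000)` then replaces the inline `JSON.stringify` call.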
## Related - [Slack App](https://docs.testdino.com/integrations/slack-playwright-test-alerts): Connect Slack and configure notification channels - [Test Cases](https://docs.testdino.com/platform/playwright-test-cases): View test case details and annotations - [Test Runs Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Detailed analysis with annotation chips - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Reporter configuration and CLI options --- ## Automated Playwright Reports in TestDino > Source: https://docs.testdino.com/guides/automated-playwright-reports > Description: Schedule and deliver automated PDF reports summarizing Playwright test execution, failure trends, and flakiness across runs directly to your team. Automated Reports deliver PDF summaries of test execution data to specified recipients on a recurring schedule. Configure reports per project from **Project Settings > Automated Reports**. [Video: Automated Reports video](https://www.youtube.com/embed/mrGd3prPA-g?si=eBxjpwQVrWAl9Slv) ## Quick Reference | Setting | Default | Options | | :--- | :--- | :--- | | Frequency | Weekly | Daily, Weekly, Monthly | | Time (UTC) | 08:00 | 00:00 to 23:00 | | Day of Week | Monday | Sunday to Saturday (Weekly only) | | Lookback Period | 7 days | 1 to 30 days | | Recipients | None | To, CC, BCC per address | ## Report Contents Each generated PDF includes the following sections. [View sample report (PDF)](https://testdinostr.blob.core.windows.net/docs/docs/setting/reports/Test%20Report-preview.pdf) | Section | Description | | :--- | :--- | | Executive Summary | Pass rates, failure trends, and test result breakdowns | | Test Case Analysis | Slowest, most-failing, and flaky tests | | Branch Statistics | Performance grouped by branch and contributor | | Trend Graphs | Visual charts of test health over the lookback period | ## Set Up a Report ### Open Automated Reports Go to **Project Settings** and scroll to **Automated Reports**. Click **Create Automated Report**. ![Create Automated Report dialog showing name, recipients, schedule, and filter fields](https://testdinostr.blob.core.windows.net/docs/docs/setting/reports/create-automated-reports.webp) ### Name the report Enter a descriptive name. This appears in the email subject line and PDF header. ### Add recipients Enter one or more email addresses. Use the type selector to assign each recipient as **To**, **CC**, or **BCC**. Click the add button or press **Enter** to add each address. Remove a recipient with the **X** button next to their entry. ### Configure the schedule Select the report frequency, time (UTC), and lookback period. | Setting | Detail | | :--- | :--- | | Frequency | Daily, Weekly, or Monthly | | Time (UTC) | Hour the report generates (local timezone shown in the dropdown) | | Day of Week | Visible only for Weekly frequency | | Report Time Period | Number of days to include (1 to 30) | ### Apply filters (optional) Narrow the report scope by adding tag or environment filters. - **Tags**: Type a tag name and click **Add** or press **Enter**. Multiple tags are supported. - **Environment**: Select from environments defined in your [branch mapping](/platform/project-settings#branch-mapping). ### Create Click **Create**. The report runs on its next scheduled time. ## Manage Reports ### Preview Click **Preview** to download a sample PDF before the first scheduled send. The PDF uses a timestamped filename. 
### Edit Click **Edit** on any report to update recipients, schedule, filters, or the lookback period. All fields are pre-populated with the current configuration. ### Pause and Resume Use **Pause** to temporarily stop a report from generating. Click **Resume** to re-enable it. Paused reports retain their configuration. ### Delete Click **Delete** to permanently remove a report configuration. This action cannot be undone. ## Schedule Behavior - All schedule times use UTC. The dropdown displays the equivalent local time in parentheses. - The lookback period defines how many days of data the report covers. A 7-day lookback on a weekly report covers the previous full week. - Reports generate at the configured hour and are delivered shortly after. > **Warning:** Values outside the 1 to 30 day range for the lookback period are automatically clamped to the nearest valid value. ## Related - [Project Settings](https://docs.testdino.com/platform/project-settings): Configure project identity, API keys, integrations, and branch mapping. - [Analytics](https://docs.testdino.com/platform/playwright-test-analytics): Explore interactive test execution analytics in the platform. --- ## Playwright Visual Testing in TestDino > Source: https://docs.testdino.com/guides/playwright-visual-testing > Description: Compare Playwright screenshots with visual diffs. Inspect baseline vs actual for every failing test directly in TestDino. Upload Playwright snapshot screenshots to TestDino to review diffs, baselines, and CI context for visual test failures. [Video: Playwright Visual Testing](https://testdinostr.blob.core.windows.net/docs/docs/guides/visual-testing/visual-comparison.mp4) ## Quick Reference | Step | Command / Action | | :--- | :--- | | Add assertion | `await expect(page).toHaveScreenshot()` | | Run tests | `npx playwright test` | | Upload with images | `npx tdpw upload ./playwright-report --upload-images` | | [Update baselines](#update-baselines-after-an-intentional-ui-change) | `npx playwright test --update-snapshots` | ### Prerequisites * Playwright Test and at least one test using `toHaveScreenshot()` ([Playwright Docs](https://playwright.dev/docs/test-snapshots)) * A Playwright report directory to upload (example: `./playwright-report`) * A TestDino token available as an environment variable or CI secret * Upload is configured to include images ## Quick Start Steps ### Add a visual assertion Start with a single `toHaveScreenshot()` assertion. ```javascript import { test, expect } from '@playwright/test'; test('homepage looks correct', async ({ page }) => { await page.goto('/'); await expect(page).toHaveScreenshot(); }); ``` > **Note:** TestDino can only show visual diffs for tests that generate screenshot comparisons. ### Run your tests Run Playwright as usual. ```bash npx playwright test ``` [Playwright](https://playwright.dev/docs/test-snapshots) must generate the screenshots and snapshot comparison output for the run. ### Upload the report with images Upload screenshots so TestDino can render the Visual Comparison panel. **Upload images:** ```bash npx tdpw upload ./playwright-report --token="your-api-key" --upload-images ``` **Upload full JSON and artifacts:** ```bash npx tdpw upload ./playwright-report --token="your-api-key" --upload-full-json ``` ### Configure CI upload Example GitHub Actions workflow.
```yaml - name: Run Playwright tests run: npx playwright test - name: Upload to TestDino if: always() run: npx tdpw upload ./playwright-report --token="${{ secrets.TESTDINO_TOKEN }}" --upload-full-json ``` > **Tip:** `if: always()` ensures uploads happen even when the test job fails, which is when you need the artifacts most. ## Examples ### View a failed visual test 1. Open the failing run in TestDino 2. Open the failing test case 3. Use the Visual Comparison panel to switch between: * Diff * Actual * Expected If you do not see the panel, check both: * The test uses `toHaveScreenshot()` * Your upload command includes `--upload-images` or `--upload-full-json` ### Update baselines after an intentional UI change If the UI change is expected, update snapshots locally and commit the new baseline. ```bash npx playwright test --update-snapshots ``` > **Warning:** Updating baselines changes what Playwright considers correct for future runs. Review the git diff before committing. --- ## Playwright Component Testing in TestDino > Source: https://docs.testdino.com/guides/playwright-component-testing > Description: Report Playwright component test results with embedded trace viewer, screenshots, and video. Same experience as E2E tests. Playwright component testing mounts UI components in a real browser without a full application. TestDino supports component tests with traces, screenshots, and videos, the same as E2E tests. > **Warning:** Component testing requires the `@testdino/playwright` streaming reporter, which is currently **experimental**. The standard `tdpw upload` CLI does not support component test reporting. See [Real-Time Streaming](/guides/playwright-real-time-test-streaming) for setup. ## Quick Reference | Topic | Link | | :--- | :--- | | [Setup](#setup) | Install the component testing package and TestDino reporter | | [Configuration](#configure-the-testdino-reporter) | Add the reporter to `playwright-ct.config.ts` | | [Run tests](#run-tests) | CLI and reporter methods | | [CI integration](#ci-integration) | GitHub Actions workflow | | [Limitations](#limitations) | Known constraints | ## Supported Frameworks | Framework | Package | | :--- | :--- | | React | `@playwright/experimental-ct-react` | | Vue | `@playwright/experimental-ct-vue` | | Svelte | `@playwright/experimental-ct-svelte` | > **Note:** Playwright component testing is experimental. The API may change between Playwright versions. See the [Playwright component testing docs](https://playwright.dev/docs/test-components) for full API details. ## Setup ### Initialize component testing Run the Playwright scaffolding command to create a `playwright/` directory with `index.html` and `index.ts` files: **npm:** ```bash npm init playwright@latest -- --ct ``` **yarn:** ```bash yarn create playwright --ct ``` **pnpm:** ```bash pnpm create playwright --ct ``` ### Install the TestDino reporter ```bash npm install @testdino/playwright ``` ### Write a component test Create a test file next to your component.
Import `test` and `expect` from the framework-specific package, and use the `mount` fixture to render the component: **React:** ```tsx src/App.spec.tsx import { test, expect } from '@playwright/experimental-ct-react'; import App from './App'; test('renders the homepage', async ({ mount }) => { const component = await mount(<App />); await expect(component).toContainText('Welcome'); }); ``` **Vue:** ```ts src/App.spec.ts import { test, expect } from '@playwright/experimental-ct-vue'; import App from './App.vue'; test('renders the homepage', async ({ mount }) => { const component = await mount(App); await expect(component).toContainText('Welcome'); }); ``` **Svelte:** ```ts src/App.spec.ts import { test, expect } from '@playwright/experimental-ct-svelte'; import App from './App.svelte'; test('renders the homepage', async ({ mount }) => { const component = await mount(App); await expect(component).toContainText('Welcome'); }); ``` ## Configure the TestDino Reporter Add `@testdino/playwright` to the `reporter` array in your `playwright-ct.config.ts`. The configuration is the same as E2E tests: ```typescript playwright-ct.config.ts import { defineConfig } from '@playwright/experimental-ct-react'; export default defineConfig({ testDir: './src', reporter: [ ['list'], ['@testdino/playwright', { token: process.env.TESTDINO_TOKEN, }], ], use: { trace: 'on-first-retry', screenshot: 'only-on-failure', video: 'retain-on-failure', }, }); ``` All standard reporter options work with component tests: `debug`, `ciRunId`, `artifacts`, and `coverage`. See the [Node.js CLI reference](/cli/testdino-playwright-nodejs#configuration) for the full list. > **Tip:** Enable `trace`, `screenshot`, and `video` in the config. These artifacts upload to TestDino and appear in the test case detail page for debugging. ## Run Tests ### Option 1: TestDino CLI Pass `--ct` to run component tests instead of E2E tests: ```bash npx tdpw test --ct ``` All Playwright options pass through: ```bash npx tdpw test --ct --project=chromium --workers=4 npx tdpw test --ct --headed npx tdpw test --ct src/Button.spec.tsx ``` ### Option 2: Playwright CLI If you configured the reporter in `playwright-ct.config.ts`, run Playwright directly: ```bash npx playwright test --config=playwright-ct.config.ts ``` Both methods stream results to TestDino in real time. ## CI Integration Component tests run in CI the same way as E2E tests. The only difference is the `--ct` flag or the component testing config file.
**GitHub Actions (tdpw CLI):** ```yaml .github/workflows/component-tests.yml name: Component Tests on: [push, pull_request] jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Install Playwright browsers run: npx playwright install --with-deps - name: Run component tests env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} run: npx tdpw test --ct ``` **GitHub Actions (reporter):** ```yaml .github/workflows/component-tests.yml name: Component Tests on: [push, pull_request] jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-node@v4 with: node-version: '20' - name: Install dependencies run: npm ci - name: Install Playwright browsers run: npx playwright install --with-deps - name: Run component tests env: TESTDINO_TOKEN: ${{ secrets.TESTDINO_TOKEN }} run: npx playwright test --config=playwright-ct.config.ts ``` ## Limitations - **Experimental API.** The component testing API may change between Playwright releases. - **Plain data only for props.** Complex objects like class instances do not serialize across the browser boundary. Pass plain objects, strings, numbers, and dates only. - **Callbacks are async.** Event handler callbacks run in Node.js while the component runs in the browser. Synchronous return values from callbacks do not work. For full API details on props, events, slots, and hooks, see the [Playwright component testing documentation](https://playwright.dev/docs/test-components). ## Supported TestDino Features All TestDino features work with component tests the same way as E2E tests. - [Annotations](https://docs.testdino.com/guides/playwright-test-annotations): Add metadata and Slack alerts to tests - [Code Coverage](https://docs.testdino.com/guides/playwright-code-coverage): Track coverage per test run - [Flaky Test Detection](https://docs.testdino.com/guides/playwright-flaky-test-detection): Identify and track flaky tests - [Real-Time Streaming](https://docs.testdino.com/guides/playwright-real-time-test-streaming): Monitor live test execution --- ## GitHub Status Checks > Source: https://docs.testdino.com/guides/github-status-checks > Description: Configure GitHub status checks from Playwright test results in TestDino. Block PR merges on test failures, flakiness thresholds, or coverage drops. GitHub CI Checks are automated quality gates that block merges when Playwright test results do not meet your configured rules. [Video: Configure CI Checks](https://testdinostr.blob.core.windows.net/docs/docs/integrations/github/ci-checks.mp4) ## Quick Reference GitHub CI Checks have several configurable settings. Use this table to understand the defaults and what each setting controls. | Setting | Default | Purpose | | :--- | :--- | :--- | | [Pass Rate](#2-pass-rate) | 90% | Minimum percentage of tests that must pass | | [Flaky Handling](#4-flaky-handling) | Neutral | How flaky tests affect the check (Strict or Neutral) | | [Mandatory Tags](#3-mandatory-tags) | None | Tags that must pass regardless of the overall rate | | [Environment Overrides](#environment-overrides) | None | Custom rules per branch environment | ## What are GitHub CI Checks? GitHub CI Checks are automated quality gates that run on your pull requests and commits. TestDino GitHub Checks show a clear pass or fail signal in GitHub based on the test rules you set in TestDino. If a required check fails, GitHub will block the merge.
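The Mandatory Tags gate described below matches the `@`-prefixed tags on your Playwright tests. As a minimal sketch of what a gated test might look like (the `@critical` tag name, file path, and test bodies are examples, not required names), tests can be tagged with Playwright's standard tag syntax:

```typescript tests/checkout.spec.ts
import { test, expect } from '@playwright/test';

// Tag via the details object (Playwright 1.42+). If '@critical' is set as
// a mandatory tag in TestDino, a failure here turns the check red even
// when the overall pass rate is above the threshold.
test('checkout completes', { tag: '@critical' }, async ({ page }) => {
  await page.goto('/checkout');
  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();
});

// Older Playwright versions tag tests in the title instead:
test('payment succeeds @critical', async ({ page }) => {
  await page.goto('/payment');
  await expect(page.getByText('Pay now')).toBeVisible();
});
```

Whichever way tests are tagged, the check lifecycle is the same: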
* When your tests finish, TestDino compares the run against your quality gate settings. * It then posts a **green check** (passed) or a **red check** (failed) directly on the PR or commit. * This gives your team fast feedback on whether the code meets your quality standards. ### Why do CI checks matter? * Stop unstable or failing code from being merged * Enforce strict rules for critical branches (like main) * Use different rules for PROD, STAGE, and DEV * See failures instantly inside GitHub * Combine real test signals with GitHub's protection rules ## Quality Gate Settings These rules determine whether TestDino marks a check as pass/fail. ### 1. Default Settings By default, the **Pass %** and **Flaky** settings apply to all branches. You can override them later for specific environments such as PROD, STAGE, or DEV. ### 2. Pass Rate Minimum percentage of tests that must pass for the check to succeed. * **Range**: 0-100% * **Default**: 90% * **Example**: If set to 90%, at least 90% of your tests must pass for the check to be green ### 3. Mandatory Tags All tests with these specific tags must pass, regardless of the overall pass rate. * Use the `@` prefix (for example `@critical`, `@payment`, `@auth`) * If even one test with a mandatory tag fails, the entire check fails * Useful for login, payments, security, or any flow that you cannot risk breaking #### Example: If you set `@critical` as mandatory and one critical test fails, the check is red even if everything else passes. ### 4. Flaky Handling How flaky tests are treated: **1. Strict** * Flaky tests count as failures. * Use this for production branches where stability is critical **2. Neutral** * Flaky tests are excluded from the pass rate calculation (default). * Use this for development branches to focus on actual failures ## Environment Overrides [Video: Environment Overrides](https://www.youtube.com/embed/2jUSi6EZEqw?si=U67BhK6zxi0hL_q_) You can create different quality gates for different environments. ### How does it work? 1. Set up branch environments in your project (such as Production, Staging, and Development). 2. Each environment appears as a row in the CI Checks settings table. 3. For each environment, set a custom Pass %, Flaky Handling, and whether Tags apply. 4. If you do not override a setting, the default applies. ### Example Configuration | Environment | Pass Rate | Flaky Handling | | :--- | :--- | :--- | | Default | 80% | Neutral | | Development | 70% | Neutral | | Staging | 85% | Neutral | | Production | 95% | Strict | ### What this means: * **Development**: Only 70% of tests need to pass. Flaky tests don't matter. * **Staging**: Needs 85% passing. Good for pre-deploy testing. * **Production**: Strictest rules. Needs 95% passing and no flaky failures. * **All other branches**: Follow the default 80% rule. This gives you full control over different parts of your workflow. ## Understanding Check Results Each GitHub check will show green or red based on your rules. ### 1. Passed: Green Check Your code meets all quality gate requirements: * Pass rate meets or exceeds the threshold * All mandatory tag tests passed **You can merge your PR!** ### 2. Failed: Red Check Your code doesn't meet quality gate requirements because: * Pass rate is below the threshold, OR * One or more mandatory tag tests failed **Fix the failing tests before merging.** ## Check Details Click "Details" on the GitHub check to see: ### 1. 
Test Results Table Quick overview with clickable links to your TestDino dashboard. ![Test Results](https://testdinostr.blob.core.windows.net/docs/docs/guides/github-ci-check/test-results.webp) ### 2. Mandatory Tag Analysis Shows which mandatory tags passed or failed. ![Mandatory tag](https://testdinostr.blob.core.windows.net/docs/docs/guides/github-ci-check/mandatory-tag.webp) ### 3. Tags Not Found Appears if you configured a mandatory tag that doesn't exist in your tests. ![Tags not found](https://testdinostr.blob.core.windows.net/docs/docs/guides/github-ci-check/tags-not-found.webp) **Note:** Tags that are not found don't fail the check; they're skipped. ## Making CI Checks Required This step tells GitHub which checks must pass before a pull request can be merged. [Video: Making CI Checks Required](https://testdinostr.blob.core.windows.net/docs/docs/guides/github-ci-check/ruleset.mp4) 1. Go to **Repository Settings → Rulesets** 2. Create or edit a rule 3. Enable **Require status checks to pass** 4. Click **Add checks** 5. Select **TestDino** 6. Set target branches (for example, main) 7. Save the rule When you open the **Require status checks to pass** section, you're choosing the exact checks GitHub should enforce. GitHub will now stop merges unless the TestDino CI Check is green. ## Common Scenarios ### 1. High Pass Rate, but the Check Failed **Situation:** 95% of tests passed, but the check is still red. **Reason:** A mandatory tag test failed. **Solution:** * Fix the mandatory tag test first. * Mandatory tags override the pass rate completely. ### 2. Flaky Tests Causing Failures **Situation:** Check fails because flaky tests are counted as failures. **Solution:** * Switch the environment to **Neutral** flaky handling, or * Fix the flaky tests, or * Use **Strict** only on stable branches like Production ### 3. Different Rule Requirements by Branch **Situation:** You want strict rules for Production but lighter rules for Development. **Solution:** Use Environment Overrides: * Development: 70% pass rate, Neutral flaky handling * Production: 95% pass rate, Strict flaky handling ## Best Practices ### 1. Start with Reasonable Defaults * Begin with an 80-90% pass rate * Use Neutral flaky handling initially * Add mandatory tags only for truly critical features ### 2. Use Mandatory Tags Wisely Apply mandatory tags to tests that cover: * Critical user flows (login, checkout, payment) * Security features (authentication, authorization) * Data integrity operations **Don't overuse** - if everything is mandatory, nothing is. ### 3. Configure Environment-Specific Rules * **Production**: Strict rules (95%+, strict flaky handling) * **Staging**: Moderate rules (85-90%) * **Development**: Relaxed rules (70-80%, neutral flaky handling) ### 4. Review Failed Checks Promptly * Failed checks block your PR for a reason * Review the failed tests in the check details * Click through to the TestDino dashboard for full error details ### 5. Keep Tests Stable * Fix flaky tests instead of just ignoring them * Flaky tests indicate underlying stability issues * A reliable test suite gives you confidence in your checks ## Troubleshooting ### 1. Check Not Appearing on PR **Possible causes:** * GitHub Checks are not enabled in settings * No commit SHA available in test run metadata * Repository mismatch between the TestDino project and the GitHub connection **Solution:** Verify GitHub connection and ensure CI Checks are enabled. ### 2. 
Check Always Failing **Possible causes:** * The pass rate is set too high * Mandatory tags not properly configured * Flaky handling is too strict **Solution:** Review your quality gate settings and adjust thresholds. ### 3. Mandatory Tags Not Working **Possible causes:** * Tag names don't match (case-sensitive) * Missing @ prefix in test tags * Tests don't actually have the tags **Solution:** * Check tag spelling and case * TestDino automatically adds @ if missing, but verify in your tests * Review the "Tags Not Found" section in the check details ### 4. Environment Override Not Applied **Possible causes:** * The branch pattern doesn't match * Environment not configured in project settings **Solution:** Verify branch environment mapping in project settings. --- ## Playwright Test Health Status Badges > Source: https://docs.testdino.com/guides/test-health-badges > Description: Embed live SVG badges in GitHub or GitLab READMEs showing real-time Playwright test health, pass rate, flakiness, and total test counts from TestDino. Status Badges are live SVG images that display test health, flakiness, and test counts from the latest completed test run. Embed them in GitHub or GitLab READMEs, or add them as GitLab project badges. ## Badge Types | Badge | What it shows | Data source | | :--- | :--- | :--- | | **Test Health** | Pass rate percentage | Passed / total from the latest completed run | | **Flaky** | Flaky test count or "None" | Flaky count from the latest completed run | | **Tests** | Passed and failed counts | Passed + failed from the latest completed run | ## Color Scale ### Test Health | Pass rate | Color | | :--- | :--- | | 90% or above | Bright green | | 75 - 89% | Light green | | 60 - 74% | Yellow | | 40 - 59% | Orange | | Below 40% | Red | ### Flaky | Flaky count | Color | | :--- | :--- | | 0 (shows "None") | Green | | 1 - 3 | Yellow | | 4 - 10 | Orange | | Above 10 | Red | ## Prerequisites - A TestDino project with at least one completed test run - A GitHub or GitLab repository where you can edit the README - Admin or Editor role on the TestDino project ## Get Badge URLs ### Open Status Badges Go to **Project Settings → Integrations → TestDino Add-ons → Status Badges**. The Preview section displays all three badges with live values from the latest run. ### Select your platform Switch between **GitLab** and **GitHub** tabs to get the correct snippet format. **GitHub:** ![GitHub status badges configuration](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/github-status-badges.webp) **GitLab:** ![GitLab status badges configuration](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/gitlab-status-badges.webp) ### Choose snippet type Select the format you need: | Format | Use case | | :--- | :--- | | **Link** | Project URL for the badge link target | | **Badge URL** | Raw SVG URL for the badge image | | **Markdown** | Ready-to-paste markdown for READMEs | ### Copy and paste Click the copy icon on any row. The icon changes to a checkmark for 2 seconds to confirm. ## Add to GitHub Copy the markdown snippets from the **GitHub** tab and paste them into your repository `README.md`. The GitHub tab provides individual markdown rows for each badge and an **All** row that combines all three badges into a single line. 
```markdown [![Test Health](https://app.testdino.com/api/badge/your-project/health)](https://app.testdino.com/project/your-project) [![Flaky](https://app.testdino.com/api/badge/your-project/flaky)](https://app.testdino.com/project/your-project) [![Tests](https://app.testdino.com/api/badge/your-project/tests)](https://app.testdino.com/project/your-project) ``` > **Note:** Replace the example URLs above with the actual URLs from your Status Badges panel. ## Add to GitLab GitLab supports badges in two locations: the README and the project badge settings. ### README Copy the markdown snippets from the **GitLab** tab and paste them into your `README.md`. Each row provides the Link URL and Badge image URL separately. ### Project Badge ### Copy URLs From the **GitLab** tab, copy the **Link** and **Badge URL** for each badge. ### Open GitLab badge settings Go to your GitLab repository **Settings → General → Badges → Add badge**. ### Add the badge Paste the **Link** value into "Link URL" and the **Badge URL** value into "Badge image URL". Save. Project badges appear in the sidebar on the GitLab repository page. ## Badge Updates Badges reflect the latest completed test run. After a new run completes: | Location | Update time | | :--- | :--- | | Status Badges preview | Immediate on page reload | | GitHub / GitLab | Within 20 seconds | ## Related - [Project Settings](https://docs.testdino.com/platform/project-settings): Configure project integrations and add-ons - [GitHub Status Checks](https://docs.testdino.com/guides/github-status-checks): Enforce quality gates on pull requests --- ## TestDino Integrations Overview > Source: https://docs.testdino.com/integrations/overview > Description: Connect TestDino to your CI provider, issue tracker, and communication tools. Integrations include GitHub, GitLab, Jira, Slack, and more. Integrations connect TestDino to GitHub, GitLab, Azure DevOps, TeamCity, Jira, Linear, Asana, and Slack. Automate test reporting, create tickets from failures, post summaries to Slack, and enforce quality gates on pull requests. * **GitHub** posts test run summaries to commits and pull requests, and automatically records test runs. * **GitLab** posts test run summaries to merge requests and commits, and syncs MR state with TestDino. * **GitHub CI Checks** evaluate runs against quality gates and publish pass/fail status. * **Azure DevOps** displays test runs, failure trends, and flaky tests directly inside Azure DevOps. * **TeamCity** triggers test uploads and links results back to builds. * **Slack** sends branch-mapped run summaries to configured channels. * **Jira, Linear, Asana,** and **monday** create issues from failed or flaky tests with prefilled context. > **Note:** Integrations are project-scoped and configurable by environment or branch. This enables precise routing, controlled access per project, and consistent links across tools. 
## How It Helps | Benefit | Description | | :---- | :---- | | Single source of truth | Each ticket links to the exact run and commit | | Prefilled issues | Test details, failure context, history, and links included | | File to the right place | Select the Jira project or Linear team once | | Automated comments | GitHub app connects test runs to commits and PRs with test run summaries | | Merge safety | CI Checks enforce test rules and block merges when key tests fail | | Team updates | Slack posts run summaries to channels mapped to each environment | | Project-scoped access | Connect or disconnect each project independently | ## What Integrations Provide **GitHub Comments:** Test run summaries posted to commits and pull requests include: * Summary table with passed, failed, flaky, and skipped counts, pass rate, and duration * A detailed test failure analysis grouped by file with specific error messages **Slack Messages:** Run summaries sent to configured channels include: * Overall status (passed, failed, flaky, or skipped) * Success rate and counts per status * Duration, environment, and branch * Author and commit message * Link to the full report in TestDino **CI Checks:** [GitHub CI Checks](/guides/github-status-checks) add a pass or fail signal to each commit and pull request. Each check includes: * Final status based on quality gate settings * Pass rate and total test counts * Failed mandatory tag tests * Link to the TestDino run * Failure breakdown in the Check Details panel **Bug Reports:** When creating issues in Jira, Linear, Asana, or monday, TestDino includes: * Test name and file, branch, environment, run ID, duration, and attempts * Failure cluster and key error line with the failing step or locator * Short failure history for the selected window * Console excerpts that aid reproduction * Links to the TestDino run, Git commit, and CI job ## Available Integrations - [GitHub](https://docs.testdino.com/integrations/ci-cd/github) - [GitLab](https://docs.testdino.com/integrations/playwright-gitlab-ci) - [Azure DevOps](https://docs.testdino.com/integrations/playwright-azure-devops) - [TeamCity](https://docs.testdino.com/integrations/ci-cd/teamcity) - [Jira](https://docs.testdino.com/integrations/jira-playwright-test-failures) - [Linear](https://docs.testdino.com/integrations/issue-tracking/linear) - [Asana](https://docs.testdino.com/integrations/issue-tracking/asana) - [monday](https://docs.testdino.com/integrations/issue-tracking/mon) - [Slack App](https://docs.testdino.com/integrations/slack-playwright-test-alerts) - [Slack Webhook](https://docs.testdino.com/integrations/slack/webhook) --- ## GitHub Integration > Source: https://docs.testdino.com/integrations/ci-cd/github > Description: Connect GitHub repositories to TestDino for automated Playwright test reporting on every pull request and commit. Get PR-level test summaries and status checks. The [GitHub integration](https://github.com/apps/testdino-playwright-reporter) connects your repositories to TestDino for automated test reporting. ## How does it work? It provides visibility into test outcomes by posting test run summaries directly to commits and pull requests. This allows teams to review test health and identify failures without leaving the GitHub UI. Once configured, the integration works automatically: [Video: GitHub video](https://www.youtube.com/embed/7DIwD68lqB4) - Detects Playwright runs from GitHub Actions and attaches results to the commit/PR. - Each comment includes a link back to the TestDino run.
- Branch Mapping controls where comments appear. - Posts GitHub CI Checks (pass/fail) based on your quality gate settings. - Lets you block merges until the required test conditions are met. With both comments and CI checks enabled, teams can see detailed summaries in the PR conversation and enforce quality rules directly through GitHub's status checks (see the [GitHub CI Checks User Guide](/guides/github-status-checks)). ## Quick Start Steps ### 1. Install the app Open the GitHub Marketplace listing for [TestDino | Playwright Reporter](https://github.com/apps/testdino-playwright-reporter), then click **Install & Authorize**. ![Install the app](https://testdinostr.blob.core.windows.net/docs/docs/integrations/github/github-authorize.webp) ### 2. Select repositories Choose your Organization, then grant access to all or specific repositories. ![Select repositories](https://testdinostr.blob.core.windows.net/docs/docs/integrations/github/connect-repository.webp) ### 3. Configure Comments In **Settings → Integrations → GitHub**, customize the connection. Click the ⚙️ icon on the GitHub card to open the **GitHub Comment Settings**. ![Configure Comments](https://testdinostr.blob.core.windows.net/docs/docs/integrations/github/github-comment-settings.webp) > **Warning:** > > - Branch Mapping (in settings) must be configured before you configure comment settings. > - In **GitHub Settings** (⚙️ on the GitHub card), map **branch patterns** per **environment** and toggle comments for **PR** and **Commits**. > - Environment overrides take precedence over global defaults. ### 4. Configure CI Checks Similarly, open the **CI Checks** tab. Here you can: - Toggle **Enable GitHub Checks** - Enter **Mandatory Tags** (optional) - Set a default **Pass %** - Choose **Flaky Handling** - Configure **Environment Overrides** for your own environments like PROD, STAGE, or DEV (see [Environment Overrides](/guides/github-status-checks#environment-overrides)). Once saved, TestDino will start sending pass/fail status checks to GitHub for matching commits and pull requests (see [Quality Gate Settings](/guides/github-status-checks#quality-gate-settings)). [Video: Configure CI Checks](https://testdinostr.blob.core.windows.net/docs/docs/integrations/github/ci-checks.mp4) ### 5. Run CI Push code or open a PR; your Playwright workflow triggers as usual. TestDino receives the test results and posts both: - A PR comment (if enabled) - A GitHub CI Check (if enabled) ## Why this helps - **Review faster** with immediate test feedback directly in pull requests, eliminating the need to switch contexts. - **Accelerate debugging** with detailed failure context delivered into the relevant commit or PR. ## Related Status checks, API keys, and CI optimization. - [GitHub Status Checks](https://docs.testdino.com/guides/github-status-checks): Configure PR status checks and quality gates - [Azure DevOps](https://docs.testdino.com/integrations/playwright-azure-devops): View test runs inside Azure DevOps - [TeamCity Integration](https://docs.testdino.com/integrations/ci-cd/teamcity): Upload Playwright reports from TeamCity - [Getting Started](https://docs.testdino.com/getting-started): Initial TestDino setup --- ## Playwright GitLab CI Integration > Source: https://docs.testdino.com/integrations/playwright-gitlab-ci > Description: Connect GitLab to TestDino for automated Playwright reporting on merge requests and commits. View failures and flakiness.
The GitLab integration connects your repositories to TestDino for automated test reporting. Test summaries post directly to merge requests and commits. ## How It Works TestDino detects Playwright runs from GitLab CI and attaches results to the corresponding merge request or commit. - Posts test run summaries as comments on merge requests and commits - Each comment includes a link back to the full TestDino run - Branch mapping controls where comments appear - Merge request sync keeps TestDino in sync with MR state (open, merged, closed) > **Note:** Only one Git provider (GitHub or GitLab) can be active per project at a time. ## Quick Start Steps ### Connect GitLab Go to **Project Settings → Integrations → CI/CD** and select **GitLab**. Authorize TestDino to access your GitLab account and select the repository. ### Configure comments Click the settings icon on the GitLab card to open **Comment Settings**. Map branch patterns per environment and toggle comments for merge requests and commits. > **Warning:** Branch mapping must be configured before enabling comments. Environment overrides take precedence over global defaults. ### Run CI Push code or open a merge request. Your Playwright pipeline triggers as usual. TestDino receives test results and posts a summary comment to the merge request or commit. ## Merge Requests in TestDino The [Pull Requests](/platform/pull-requests/summary) page displays GitLab merge requests alongside test results. Each row shows the MR title, author, state, latest test run, and pass/fail/flaky/skipped counts. Click any merge request to open the detail view with [Overview](/platform/pull-requests/overview), [Timeline](/platform/pull-requests/timeline), and [Files Changed](/platform/pull-requests/files-changed) tabs. ## CLI Compatibility The GitLab integration works with the current and previous versions of the TestDino CLI. No changes to `npx tdpw upload` or `playwright.config.ts` reporter configuration are needed. ## Related - [GitHub Integration](https://docs.testdino.com/integrations/ci-cd/github): Connect GitHub repositories to TestDino - [Pull Requests](https://docs.testdino.com/platform/pull-requests/summary): View merge requests with test results - [Environment Mapping](https://docs.testdino.com/guides/environment-mapping): Map branches to environments - [Node.js CLI](https://docs.testdino.com/cli/testdino-playwright-nodejs): Install and configure the TestDino CLI --- ## Playwright Azure DevOps Integration > Source: https://docs.testdino.com/integrations/playwright-azure-devops > Description: Integrate Playwright test results with Azure DevOps. View failures, flaky trends, and test health inside your DevOps pipeline. The TestDino Azure DevOps extension displays TestDino project test runs directly inside Azure DevOps. QA engineers, developers, and managers can track test execution results without switching tools. The extension connects your Azure DevOps project to TestDino using a Project Access Token and fetches data over HTTPS.
**Extension link:** [TestDino on Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=testdino.testdino) ## Quick Reference | Task | Action | Link | | :--- | :--- | :--- | | Install extension | Visual Studio Marketplace | [Install](#install-the-extension) | | Generate token | Project → Settings → Integrations | [Generate token](#generate-a-testdino-api-token) | | Connect extension | Paste token in Azure DevOps | [Connect](#connect-using-api-token) | | View test runs | TestDino tab in Azure DevOps | [View runs](#viewing-test-runs) | | Filter runs | Time range and run type filters | [Filtering](#filtering-and-refreshing-data) | ## Key Features | Feature | Description | | :--- | :--- | | Test run visibility | View recent test runs with pass, fail, skipped, and flaky counts | | Execution metadata | Track duration, commit, branch, and environment per run | | Time filtering | Filter test runs by date range | | Secure authentication | Read-only Project token-based access | ## Prerequisites * An active Azure DevOps organization and project * A TestDino account with at least one project * Permission to install extensions in Azure DevOps * A valid TestDino Project Access Token ## Install the Extension ### Open the Marketplace Go to the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=testdino.testdino) and search for **TestDino**. ![Open the Visual Studio Marketplace](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/open-the-marketplace.webp) ### Install the extension Click **Install** and select the Azure DevOps organization. ### Confirm installation After installation, **TestDino** appears in the left navigation of your Azure DevOps project. ![Confirm installation in Azure DevOps](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/confirm-installation.webp) ## Connect Azure DevOps to TestDino After installation, connect the extension to TestDino. ### Open TestDino in Azure DevOps * Navigate to your Azure DevOps project * From the left sidebar, click **TestDino** You will see a **Connect to TestDino** screen prompting for a Project token. ![Connect to TestDino screen](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/connect-to-testdino.webp) ### Generate a TestDino API Token 1. Log in to **TestDino** and select your **Organization** 2. Go to **Project → Settings → Integrations → Project Access Token** ![Generate TestDino API token](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/generate-testdino-api-token.webp) 3. Click **Create Token** and copy the generated token ![Create personal access token](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/create-personal-access-token.webp) > **Warning:** Keep this token secure. It provides read access to your test run data. ### Connect Using API Token Paste the Project token into the input field in Azure DevOps and click **Connect**. ![Connect using API token](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/connect-using-api-token.webp) Once connected, TestDino will start loading your test runs. ## Viewing Test Runs After a successful connection, you can view TestDino test runs directly in Azure DevOps.
Each test run displays: | Field | Description | | :--- | :--- | | Test Run ID | Unique identifier and execution duration | | Commit | Associated commit hash | | Triggered by | The user who initiated the run | | Branch & environment | Source branch and test environment | | Results summary | Passed, failed, skipped, flaky, and total test counts | This allows teams to quickly assess test health without leaving Azure DevOps. ![Viewing test runs in Azure DevOps](https://testdinostr.blob.core.windows.net/docs/docs/integrations/azure-devops/viewing-test-runs.webp) ## Filtering and Refreshing Data Use the built-in controls to refine displayed test runs: | Control | Function | | :--- | :--- | | Time range filter | Show runs from a specific period (e.g., Last 30 days) | | Test run filter | Filter by run type | | Refresh button | Fetch the latest data from TestDino | ## Removing or Updating the API Token If you need to change or revoke access: * Click **Remove Token** in the TestDino Azure DevOps view * Reconnect using a new Project Access Token This is useful when rotating credentials or switching TestDino projects. ## Permissions and Security The extension uses a read-only API token to fetch test run data. * No test execution or data modification is performed * Data is fetched securely over HTTPS * Tokens can be revoked anytime from the TestDino settings ## Troubleshooting **No test runs visible** * Ensure the API token belongs to the correct TestDino project * Confirm that test runs exist in TestDino **Authentication error** * Verify the API token is valid and not revoked * Remove and reconnect the token **Extension not visible** * Confirm the extension is installed for the correct Azure DevOps organization and project ## Related Integrations overview and other CI/CD options. - [GitHub Integration](https://docs.testdino.com/integrations/ci-cd/github): Connect TestDino with GitHub CI/CD - [TeamCity Integration](https://docs.testdino.com/integrations/ci-cd/teamcity): Upload Playwright reports from TeamCity - [Getting Started](https://docs.testdino.com/getting-started): Initial TestDino setup - [Generate API Keys](https://docs.testdino.com/guides/generate-api-keys): Create API tokens for integrations --- ## TeamCity Integration > Source: https://docs.testdino.com/integrations/ci-cd/teamcity > Description: Upload Playwright test reports from TeamCity builds directly to TestDino using the TestDino TeamCity Recipe. Track failures and trends across every build. The TestDino TeamCity Recipe uploads Playwright test reports directly from your TeamCity builds to the TestDino platform. ## How does it work? Once installed, the recipe runs as a build step after your Playwright tests complete. It collects reports, screenshots, videos, and traces, then uploads everything to TestDino automatically. - Detects Playwright reports in your build workspace - Bundles JSON, HTML, and all artifacts - Uploads to your TestDino project using your API key - Posts a direct link to the test run in your build log With each build, your test results flow into TestDino for flaky test detection and trend tracking. ## Quick Start Steps ### 1. Install the recipe In TeamCity, go to **Administration → Plugins → Browse Plugins**. Search for **"TestDino"** and click **Install**. Or install directly when adding a build step: click **Browse Marketplace** under Runner Type, search for **"TestDino – Upload Playwright Report"**, and click **Download & Install**. ### 2. 
Add the build step Open your **Build Configuration → Build Steps → Add build step**. Select **TestDino - Upload Playwright Report** from the Runner Type dropdown. ![Adding a build step in TeamCity](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/adding-build-step.webp) ![Select TestDino runner in TeamCity](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/select-runner.webp) ### 3. Configure the upload Enter your settings: - **Report Directory**: Path to your Playwright reports (default: `./playwright-report`) - **TestDino API Token**: Your project API key from **Settings → API Keys** - **Upload options**: Check the boxes for HTML reports, images, videos, traces, or use **Full JSON Bundle** for everything ![TeamCity configuration reference for TestDino](https://testdinostr.blob.core.windows.net/docs/docs/integrations/teamcity/configuration-reference.webp) ### 4. Run your build Trigger a build. After tests complete, the recipe uploads results and shows a link to view them in TestDino. ## What gets uploaded | Option | What it includes | | ----- | ----- | | **JSON Report** | Test results, pass/fail data, timing (always uploaded) | | **HTML Reports** | Interactive Playwright report UI | | **Image Attachments** | Screenshots from test runs | | **Video Attachments** | Video recordings of test executions | | **Trace Files** | Playwright trace archives for debugging | | **File Attachments** | Extra files like `.md`, `.pdf`, `.log`, `.txt` | | **Full JSON Bundle** | All of the above in one upload | ## Why this helps - **Zero manual work** - Reports upload automatically after every build - **Full context in TestDino** - Screenshots, videos, and traces travel with your results - **Historical tracking** - Compare runs across builds and branches ## Related Integrations overview and TeamCity guide. - [TeamCity Setup Guide](https://docs.testdino.com/guides/playwright-teamcity): Detailed setup, configuration options, and troubleshooting - [Azure DevOps](https://docs.testdino.com/integrations/playwright-azure-devops): View test runs inside Azure DevOps - [GitHub Integration](https://docs.testdino.com/integrations/ci-cd/github): Connect TestDino with GitHub CI/CD - [Getting Started](https://docs.testdino.com/getting-started): Initial TestDino setup --- ## Jira Integration for Playwright Failures > Source: https://docs.testdino.com/integrations/jira-playwright-test-failures > Description: Create Jira issues from Playwright test failures in TestDino. Link failing tests to tickets and track fixes to resolution. ## How Jira works with TestDino > **Warning:** The Jira integration is available on the TestDino **Pro**, **Team**, and **Enterprise** plans. [Video: Jira video](https://www.youtube.com/embed/ihDbH7p6h00?si=4qs_emDx1_XWUzxb) - Connect a Jira account and set a default app and project. - From a failed or flaky test, select **Raise bug** to open a prefilled issue. - Use **Sync** after Jira projects or fields change. Disconnect any time. ## Create a Jira bug report in TestDino ![Bug report](https://testdinostr.blob.core.windows.net/docs/docs/integrations/jira/bug-report.webp) - Open a failed or flaky test and choose **Raise bug**. - Review the prefilled form, edit fields if needed, and create the issue. 
## What TestDino pre-fills | Section | Field | Pre-filled content | | :--- | :--- | :--- | | **Jira fields** | Project | Jira project for the ticket | | | Issue type | Bug, Task, or any type your Jira allows | | | Priority | Impact level for triage | | | Labels | Team or component tags | | | Assignee | Routing field for the responsible owner | | | Reporter | Routing field for the reporting user or system | | | Sprint | Planning field for the active sprint | | | Dates and points | Optional start date, due date, and estimate points | | | Summary | `[TestCase] - ` | | **Description** | Test details | Test name, file, branch, commit author/message, environment, run ID, execution date, duration, attempts | | | Failure information | Error type and key error message | | | Focused steps | Failing attempt with a code frame | | | Links | TestDino run, Git commit, CI job | | | Screenshots | Listed thumbnails; attach more if required | | **System note** | Origin | The issue was generated from an automated test failure | ## After you create the issue ![created jira issue](https://testdinostr.blob.core.windows.net/docs/docs/integrations/jira/after-you-create-issue.webp) - Confirmation shows the Jira key and ID, plus a copyable URL. - Use **Sync** on the Integrations page if pickers look out of date. ## Why this helps - Complete, consistent bugs in seconds. - Developers land on proof and can reproduce faster. ## Quick Links - [Linear](https://docs.testdino.com/integrations/issue-tracking/linear) - [Asana](https://docs.testdino.com/integrations/issue-tracking/asana) - [Slack](https://docs.testdino.com/integrations/slack-playwright-test-alerts) --- ## Linear Integration > Source: https://docs.testdino.com/integrations/issue-tracking/linear > Description: Create Linear issues from Playwright test failures in TestDino. Triage, assign, and track test failure fixes directly in your Linear project workflow. ## How Linear works with TestDino > **Warning:** The Linear integration is available on the TestDino **Pro**, **Team**, and **Enterprise** plans. [Video: Linear video](https://www.youtube.com/embed/M7Hg4TpjOM8?si=N6YhjUHNPIrxCv_Q) - Connect your Linear workspace from **Project Settings** > **Integrations** and choose a default team. - From a failed or flaky test, select **Create Linear bug report**. - Select **Sync** to keep your teams, labels, and templates up to date. - Select **Disconnect** to remove the integration at any time. ## Create a Linear bug report in TestDino ![Bug report](https://testdinostr.blob.core.windows.net/docs/docs/integrations/linear/bug-report.webp) - Open a failed or flaky test and choose **Create Linear bug report**. - Review the prefilled composer, adjust fields, then create.
## What TestDino pre-fills | Section | Field | Pre-filled content | | :---------------: | -------------------- | ---------------------------------------------------------------------------- | | **Linear fields** | Workspace and team | Default from Settings is preselected | | | Issue type | Linear issue types | | | Priority | Linear priority values | | | Labels | Optional routing labels | | | Assignee | Optional routing assignee | | | Summary | Title based on test name with run context | | **Description** | Test details | File, branch, commit author/message, environment, run ID, duration, attempts | | | Why it failed | Failure cluster and the exact step or locator | | | Last attempt snippet | Short code context around the failing line | | | Recent history | Frequency in the selected period | | | Console tail | Recent relevant lines | | | Links | TestDino run, Git commit, CI job | | **Review** | Write/Preview | Verify formatting before submitting | | | Screenshots | Listed thumbnails; attach more if required | ## After you create the issue ![result](https://testdinostr.blob.core.windows.net/docs/docs/integrations/linear/after-you-create-the-issue.webp) - Confirmation shows the Linear key and internal ID with a copyable URL. - Use **View in Linear** to continue triage. ## Why this helps - Clean, uniform bug reports without retyping. - Faster triage with the same structure on every issue. ## Quick Links - [Jira](https://docs.testdino.com/integrations/jira-playwright-test-failures) - [Asana](https://docs.testdino.com/integrations/issue-tracking/asana) - [Slack](https://docs.testdino.com/integrations/slack-playwright-test-alerts) --- ## Asana Integration > Source: https://docs.testdino.com/integrations/issue-tracking/asana > Description: Create Asana tasks directly from Playwright test failures in TestDino. Link failing tests to Asana projects and track resolution without leaving your workflow. ## How Asana works with TestDino [Video: Asana video](https://www.youtube.com/embed/6a6FN6jFq5A?si=k-r3QNBA1JKA_g0B) > **Warning:** The Asana integration is available on the TestDino **Pro**, **Team**, and **Enterprise** plans. - Connect an Asana account from **Project Settings** > **Integrations**. - Authorize the TestDino app and set a default Workspace. - From a failed or flaky test, select **Create Asana Task** to open a prefilled task modal. - Select **Sync** to keep your workspace settings up to date. - Select **Disconnect** to remove the integration at any time. ## Create an Asana task in TestDino ![Asana task](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/asana/create-an-asana-task-in-testdino.webp) - Open a failed or flaky test and select **Create Asana Task**. - In the modal, select the Asana Workspace and Project. - Optionally, add Labels and an Assignee. - Review the prefilled form and select **Create**. 
## What TestDino pre-fills | Section | Field | Pre-filled content | | :--- | :--- | :--- | | **Asana Fields** | Workspace | Default from Settings is preselected | | | Project | User-selectable Asana project | | | Labels | Optional user-added labels | | | Assignee | Optional user-selected assignee | | | Title | [TestCase] \ - \ | | **Description** | Test Details | Test Name, File, Branch, Commit Author/Message, Environment, Run ID, Execution Date, Total Runtime, Attempts | | | Failure Information | Error Type, Error Message, Test History (failure pattern) | | | Test Steps | Failing attempt and step with duration and error | | | Context | Console Output, Links (TestDino Run, GitHub Commit) | | | Evidence | Test Case Screenshots, Attachments | ## After you create the issue ![After you create the issue](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/asana/after-you-create-the-issue.webp) - A confirmation modal shows the Asana Issue Key, Issue ID, and Issue URL. - Use **View in Asana** to open the newly created task. ## Why this helps - Creates complete, consistent tasks with no manual data entry. - Provides full context so developers can begin fixing issues immediately. ## Quick Links - [Jira](https://docs.testdino.com/integrations/jira-playwright-test-failures) - [Linear](https://docs.testdino.com/integrations/issue-tracking/linear) - [Slack](https://docs.testdino.com/integrations/slack-playwright-test-alerts) --- ## monday.com Integration > Source: https://docs.testdino.com/integrations/issue-tracking/mon > Description: Create monday.com items from Playwright test failures in TestDino. View test run data in monday dashboards and track resolution across your team. monday is a work management platform for planning, tracking, and collaboration. The TestDino integration connects test execution data with your monday workflows. [Video: monday Integration](https://www.youtube.com/embed/uaiN6xyCK9c?si=0YR63lY5ogLLlfif) ## Quick Reference | Task | Action | Link | | :--- | :--- | :--- | | Install app | monday Marketplace | [Prerequisites](#prerequisites) | | Connect project | TestDino Widget or Integrations tab | [Setup](#integration-setup) | | Create issue | Raise Issue → monday | [Create issue](#create-a-monday-bug-report-in-testdino) | | Add widget | monday dashboard → Add Widget | [Widget setup](#testdino-widget-for-monday) | | View issue data | Check pre-filled content | [What TestDino pre-fills](#what-testdino-pre-fills) | ## Prerequisites > **Warning:** The monday integration is available on TestDino **Pro** and **Enterprise** plans. **1. Install the TestDino app** from the [monday Marketplace](https://monday.com/marketplace). * Search for "TestDino" or use the direct link (available after publish). **2. Connect your TestDino project** to the monday app. * Connect from the TestDino Widget in monday or the Integrations tab in [TestDino Project Settings](/platform/project-settings). **3. A monday account** with workspace access and board creation/edit permissions ## What you can do | Feature | Description | | :--- | :--- | | Raise Issues | Create issues from failed/flaky tests with full context on a monday board | | Dashboard Widget | View the latest test run status inside a monday dashboard | ## Create a monday bug report in TestDino Create monday items directly from failed or flaky test cases.
Each item includes test metadata, so your team has full context without opening separate reports. > **Note:** Learn more about viewing test failures in [Test Runs Overview](/platform/playwright-test-runs) and [Test Cases Overview](/platform/playwright-test-cases). ### How to Create an Issue ### Open a Test Run Navigate to a **Test Run** in TestDino. ### Select a test case Choose a failed or flaky **test case** from the run. ### Raise Issue Click **Raise Issue** and select **monday**. ![Create monday.com issue from TestDino](https://testdinostr.blob.core.windows.net/docs/docs/integrations/monday/create-monday.com-issue.webp) ### Configure the issue Choose the **Workspace** and **Board**. ![Authorize monday.com integration](https://testdinostr.blob.core.windows.net/docs/docs/integrations/monday/authorize.webp) ### Confirm creation Review the pre-filled content and confirm issue creation. A confirmation modal shows the monday Item ID and direct URL. ![Issue created successfully](https://testdinostr.blob.core.windows.net/docs/docs/integrations/monday/issue-created-successfully.webp) Click **View in monday** to open the item. ### Supported Scenarios * Create issues for failed tests * Create issues for flaky tests * Create multiple issues from a single test run * Link issues back to TestDino test runs and test cases ### Issue Data Mapping When an issue is created, TestDino automatically maps test information to monday fields: | TestDino Data | monday Field | | :--- | :--- | | Test Name | Item Name | | Test Status (Failed / Flaky) | Status | | Test Run ID | Text / Link | | Failure Reason / Error | Long Text | | Environment | Label / Text | | Branch | Label / Text | | Execution Timestamp | Date | | Screenshots and Custom uploads | Images | > **Note:** Field mapping depends on board configuration and permissions. ## What TestDino pre-fills TestDino automatically adds rich test context to each monday item. | Section | Field | Pre-filled content | | :--- | :--- | :--- | | **monday fields** | Workspace | Default from Settings is preselected | | | Board | User-selectable monday board | | | Group | Optional board group for the item | | | Status | Item status column (e.g., Bug, To Fix) | | | Priority | Priority column value | | | Assignee | Optional person column assignment | | | Item Name | \[TestCase\] \ \- \ | | **Description** | Test Details | File, Branch, Commit Author/Message, Environment, Run ID, Execution Date, Total Runtime, Attempts | | | Failure Information | Error Type, Error Message, Test History (failure pattern) | | | Test Steps | Failing attempt and step with duration and error | | | Context | Console Output, Links (TestDino Run, GitHub Commit) | | | Evidence | Test Case Screenshots, Attachments | | **Review** | Write | Verify formatting before submitting | | | Screenshots | Thumbnails with the option to attach more | ## TestDino Widget for monday Display test run data directly within monday dashboards. Track test health alongside delivery work. 
![TestDino widget for monday](https://testdinostr.blob.core.windows.net/docs/docs/integrations/monday/testdino-widget-for-monday.webp) ### What the Widget Shows Each test run card displays: | Field | Description | | :--- | :--- | | Test Run ID | Run identifier (e.g., \#14, \#13) | | Status Summary | Passed / Failed / Flaky counts | | Duration | Total execution time | | Environment | Environment where tests ran | | Branch | Git branch name | | Completion Timestamp | When the run is completed | ### Setup ### Install the app Install the TestDino app from the [monday Marketplace](https://monday.com/marketplace) if not already installed. ### Add widget to dashboard Add the app to the preferred monday dashboard. * You can add this widget via the **Add Widget** button at the top-left corner of the monday dashboard. * Go to Apps **→** search for TestDino **→** Add widget (**TestDino TestRuns**) ### Connect to TestDino Connect to the TestDino project either via the **Connect to TestDino** button or from the **TestDino Integration** tab in [Project Settings](/platform/project-settings). ### View test runs View the latest test run stats. The widget automatically fetches the latest test runs. Use the Sync button to refresh manually. ## Integration Setup **Prerequisites:** 1. An active TestDino account with a [Pro or Enterprise plan](/pricing) 2. A monday account with: - Workspace access - Board creation/edit permissions > **Tip:** Need help getting started? See [Getting Started](/getting-started) for initial TestDino setup. ## Why use this integration * **Faster defect triage:** Create items with full context in seconds. * **No context switching:** See test health where your team already works. * **Consistent reports:** Every item includes structured details for debugging. * **Real-time visibility:** Widget updates automatically when new runs complete. ## Related Integrations overview and other issue trackers. - [Jira](https://docs.testdino.com/integrations/jira-playwright-test-failures): Create Jira issues from failed tests - [Linear](https://docs.testdino.com/integrations/issue-tracking/linear): Create Linear issues from failed tests - [Asana](https://docs.testdino.com/integrations/issue-tracking/asana): Create Asana tasks from failed tests - [Slack](https://docs.testdino.com/integrations/slack-playwright-test-alerts): Receive test notifications in Slack --- ## Slack Integration > Source: https://docs.testdino.com/integrations/slack-playwright-test-alerts > Description: Get Playwright test failure alerts in Slack. Send run summaries, flaky test notifications, and custom annotation triggers. [Video: Slack video](https://www.youtube.com/embed/1OGY1AuIAPs?si=gC9axtc4l7H9jmyn) ## How does it work? > **Warning:** The Slack App integration is available on the TestDino **Pro**, **Team**, and **Enterprise** plans. It is not available on the Free plan. - Sends **run summaries** (status, counts, duration, environment/branch, author, commit). - Routes notifications **by environment** to specific channels; unmatched events fall back to a default channel. - Sends **annotation-based alerts** when tests with `testdino:notify-slack` fail, routed to specific channels or users. - Supports test posts for verification and quick reconfiguration. ## Quick Start Steps ### 1. Connect Slack In **Project → Integrations → Slack App**, click [**Connect to Slack**](https://app.testdino.com/connect/slack) and complete the OAuth flow. > **Tip:** Alternatively, find and install TestDino from the Slack App Marketplace. ### 2.
Map channels In the **Slack Channel Configuration** settings, set a default channel for all alerts. ### 3. Add Environment mappings Under **Environment Alert Channel Mapping**, assign specific Slack channels to your project environments (e.g., PROD, STAGE). Notifications for runs in these environments will be routed accordingly. ### 4. Save configuration **Save the configuration** and use **Test** to send a sample message. ## Configuration Scenarios The integration supports both default and environment-based channel configurations, giving you control over how alerts are delivered. ![Slack](https://testdinostr.blob.core.windows.net/docs/docs/integrations/slack/slack-channel-configuration.webp) ### 1. Default Channel Only All test run alerts, regardless of the branch or environment, are sent to a single default channel. This is useful for centralizing all notifications. ### 2. Default + Environment-Specific Channels Alerts for mapped environments (e.g., PROD Alerts) are sent to their designated channel (e.g., #prod-alerts). All alerts from unmapped branches or environments automatically fall back to the default channel (e.g., #daily-updates). ## Why this helps - **Reduce notification noise** by routing alerts to environment-specific channels so teams see only relevant updates. - **Improve incident response** by sending critical failure alerts directly to the responsible team's channel. - **Enable faster triage** with real-time summaries that link directly to detailed test evidence. ## Annotation-Based Alerts Beyond run-level alerts, the Slack App can notify specific channels or users when individual tests fail. This is driven by the `testdino:notify-slack` annotation in your Playwright test code. For example, if a test has `testdino:notify-slack` set to `@ashish`, and that test fails, TestDino sends a Slack message directly to Ashish. This is different from test run alerts, which fire on every run completion regardless of which tests failed. ### How to set it up 1. Add a `testdino:notify-slack` annotation to your test with a channel (`#e2e-alerts`) or user (`@ashish`) as the target, as shown in the sketch below. 2. In the **Slack Notification Configuration** dialog, switch to the **Annotation Alerts** tab. 3. Map each annotation target to a Slack channel or user from your workspace. 4. Save the configuration. See the [Annotations guide](/guides/playwright-test-annotations) for full setup instructions, code examples, and all supported annotation types. > **Note:** Annotation-Slack mappings are stored at the integration level. If you disconnect the Slack App, all mappings are deleted and need to be set up again after reconnecting.
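As a minimal sketch, the annotation can be attached with Playwright's test-details syntax. The test title, URL, and locator below are illustrative, and the `#e2e-alerts` target must match a mapping in the Annotation Alerts tab; the [Annotations guide](/guides/playwright-test-annotations) is the authoritative reference for the supported shapes.

```typescript
import { test, expect } from '@playwright/test';

// If this test fails, TestDino reads the annotation from the uploaded
// report and alerts the Slack channel mapped to '#e2e-alerts'.
test('checkout completes successfully', {
  annotation: { type: 'testdino:notify-slack', description: '#e2e-alerts' },
}, async ({ page }) => {
  await page.goto('https://example.com/checkout'); // illustrative URL
  await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();
});
```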
## How it's Different from Slack Webhook The Slack App provides two types of alerts: - **Test Run Alerts** send environment-aware run summaries when any run completes. - **Annotation-Based Alerts** notify specific channels or users when individual annotated tests fail. The [Slack Webhook](/integrations/slack/webhook) sends all notifications to a single channel. It does not support environment routing or annotation-based alerts. ## Related Test run alerts, annotations, and notification setup. - [Annotations Guide](https://docs.testdino.com/guides/playwright-test-annotations): Add metadata and Slack notification targets to tests - [Slack Webhook](https://docs.testdino.com/integrations/slack/webhook): Single-channel webhook notifications --- ## Slack Webhook Integration for Playwright > Source: https://docs.testdino.com/integrations/slack/webhook > Description: Send Playwright test results and failure alerts to Slack using webhooks. Configure custom payloads and triggers from TestDino test run events. [Video: Slack video](https://www.youtube.com/embed/1OGY1AuIAPs?si=gC9axtc4l7H9jmyn) ## What a Slack message contains > **Warning:** The Slack Webhook integration is available on the TestDino **Pro**, **Team**, and **Enterprise** plans. - Overall status - passed, failed, flaky, or skipped. - Counts and success rate for passed, failed, flaky, and skipped tests. - Duration, environment, and branch. - Author and commit message. - **View test run** button linking to evidence. ## Set up Slack ![Slack message](https://testdinostr.blob.core.windows.net/docs/docs/integrations/slack/slack-message.webp) In Slack, enable **Incoming Webhooks**, add a webhook to the workspace and channel, then copy the URL. 1. In **Settings → Integrations → Slack**, paste the webhook URL and **Connect**. 2. Use **Sync** to validate. Toggle **Active** to pause or resume notifications. Disconnect at any time. --- ## Playwright Test Case Management > Source: https://docs.testdino.com/test-management/playwright-test-case-management > Description: Create, organize, and maintain manual + automated test cases in TestDino. Pair with Playwright automation for full coverage. Test Case Management is a standalone workspace for creating, organizing, and maintaining manual and automated test cases within a project. Group test cases under suites and subsuites, manage classification and automation status, and track metadata. The layout includes a sidebar for suite hierarchy, a top bar for key actions, and a main panel for test cases in grid or list form. ## Quick Reference | Topic | Details | | :--- | :--- | | [Key concepts](#key-concepts) | Suites, views, custom fields, attachments, version history | | [Workspace overview](#workspace-overview) | KPI tiles, list/grid views, search, filters | | [Quick start](#quick-start-steps) | Create suites and add test cases | | [Permissions](#permissions) | Admin, Editor, Viewer roles | | [Limits](#limits) | Suite depth, attachments, field limits | ## Key Concepts | Concept | Description | | :--- | :--- | | **Suite Hierarchy** | Create nested suites (up to 6 levels) for structured test case grouping by module, feature, or team. | | **Viewing Modes** | Toggle between **List View** (table) and **Grid View** (cards), each supporting quick edits, inline actions, and filtering. | | **Custom Fields** | Add project-specific fields (text, textarea, number, dropdown, checkbox) beyond the built-in set. | | **Attachments** | Upload screenshots, documents, or test data files to test cases (up to 5 per case). | | **Version History** | Track changes, compare versions, and restore previous states of any test case. | | **Import & Export** | Bulk import from CSV or TestRail. Export filtered or selected test cases as CSV. | | **Bulk Operations** | Multi-select actions: update, move, tag, classify, or change statuses for many test cases at once. | | **Filter & Search** | Find test cases by title, ID, status, priority, type, automation status, or tags.
| | **Automation Fields** | Track automation readiness by marking test cases as Manual, Automated, or To Be Automated. Flag tests as Flaky or Muted. | ## Workspace Overview ### KPI Tiles ![KPI Tiles](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/overview/kpi-tiles.webp) At the top of the Test Case Management tab, four KPI tiles summarize the state of all test cases in the workspace: * **Total:** The total number of test cases in the project. * **Active:** Test cases marked as 'Active' and ready for use. * **Draft:** Test cases in a draft state, not yet finalized. * **Deprecated:** Retired or outdated test cases kept for reference. These metrics update dynamically as test cases are added, deleted, or reclassified. ### Views The **Test Cases** page shows all test suites listed in a collapsible hierarchy. Clicking a suite expands it to show its test cases, and if that suite contains subsuites, those appear as nested sections as well. You can switch between two ways to view all test cases: List View and Grid View. #### 1. List View (Table Layout) ![List View](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/overview/views/list-view.webp) This is a high-density table format, ideal for bulk operations and scanning. It includes columns for Key, Title & Priority, Type, Tags, Status, Automation, and Severity. > **Note:** Use checkboxes for bulk selection or the action menu on each test case row for individual actions. #### 2. Grid View (Card Layout) ![Grid View](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/overview/views/grid-view.webp) This is a visual, card-based layout. Each card shows the Key, Title, Priority, Severity, Status, Automation, and Tags. ### Search Functionality You can search by **test case name (Title)** or by its **Key (ID)** (e.g., TC-6297). The list filters in real time as you type. ### Available Filters To the right of the search bar, there are five filter dropdowns to refine the test cases shown: 1. **Status**: Filter by Active, Draft, or Deprecated. 2. **Automation**: Filter by Manual, Automated, or To Be Automated. 3. **Priority**: Filter by Critical, High, Medium, Low, or Not Set. 4. **Type**: Filter by any test type (Functional, Smoke, Regression, E2E, API, and more). 5. **Tags**: Filter by one or more user-defined labels. > **Tip:** Combine multiple filters for precise queries. For example, filter for: > \[Status: Active\] + \[Priority: High\] + \[Tags: smoke\] = See all high-priority, active smoke tests. ## Quick Start Steps ### 1. Open the Test Cases tab Go to your project and select **Test Cases** from the navigation bar. This opens the Test Case Management workspace. ### 2. Create your first suite In the Suite Sidebar, click the **New Suite** button to create a new suite or subsuite. Use suites to group related test cases under feature, module, or component names. ### 3. Add test cases Use the **New Test Case** button in the toolbar to open the full creation form. You can also add test cases directly within a suite or from the suite's context menu. > **Note:** Fill in Title, Priority, Status, and Tags. Add test steps or link to automated spec files. Save and organize into suites. For bulk operations, see [Import and Export](/test-management/import-export).
## Permissions | Action | Admin | Editor | Viewer | | :--- | :--- | :--- | :--- | | View test cases | Yes | Yes | Yes | | Create / edit test cases | Yes | Yes | No | | Delete test cases | Yes | Yes | No | | Import test cases | Yes | Yes | No | | Export test cases | Yes | Yes | Yes | | Manage settings (custom fields, attachments, version history) | Yes | No | No | ## Limits | Item | Limit | | :--- | :--- | | Suite nesting depth | 6 levels | | Attachments per test case | 5 files | | Custom field dropdown options | 50 per field | | Tags per test case | Unlimited | | Title length | 500 characters | | Description length | 5,000 characters | ## Related - [Test Suites](https://docs.testdino.com/test-management/suites): Create and manage suite hierarchy - [Test Case Structure](https://docs.testdino.com/test-management/test-case/structure): Fields, steps, custom fields, and attachments - [Creating & Editing](https://docs.testdino.com/test-management/test-case/creating-editing): Three creation methods and inline editing - [Import & Export](https://docs.testdino.com/test-management/import-export): Bulk import from CSV or TestRail --- ## TestDino Test Suite Management > Source: https://docs.testdino.com/test-management/suites > Description: Organize test cases into logical suites in TestDino. Group by feature, module, or team to make large test repositories easier to manage and navigate. Test Suites organize test cases into a nested hierarchy within a project. Each suite represents a module, feature, component, or test category. Suites hold both test cases and sub-suites, forming a tree structure that mirrors your application's architecture. ## Hierarchy Model TestDino allows you to organize test cases into suites, which are like folders. You can create **multiple nested hierarchies** by placing suites within other suites. This structure helps teams mirror their application's feature layout, component structure, or test types (e.g., Core Features > AuthModule > Password Reset). ## Create Suites and Subsuites You can create suites in two ways: ### 1. Root-Level Suite * Click the **New Suite** button in the top navigation. * The "Create Test Suite" dialog will open. * Enter a **Name** (required) and **Description** (optional). * Leave the **Parent Suite** dropdown set to "None (Root Level)". ![Suites](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/suites/root-level-suite.webp) ### 2. Subsuite (Nested Suite) * Click the action menu (...) on an existing suite (e.g., "Test"). * Select **Add Subsuite**. * The same dialog opens, but the **Parent Suite** is now pre-filled with the suite you selected ("Test"). ![Subsuite](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/suites/subsuite.webp) ## Edit, reorder, expand, or collapse Each suite includes an action menu (**...**) beside its name. From this menu, you can perform the following operations: | Operation | Description | | ----- | ----- | | **Edit** | Rename the suite or change its description. | | **Delete** | Permanently remove a suite. All test cases within it will be moved to **Unassigned**. | | **Expand/Collapse** | Toggle the visibility of nested subsuites. | | **Add Subsuite** | Create a new suite inside the selected one. | | **Reorder Test Cases** | Drag test cases between suites to reorganize. | | **Reorder Suites** | Drag suites to reorder them within the same hierarchy level. | > **Warning:** Deleted suites are not recoverable. Review test case placement before deletion.
## Default "Unassigned" Suite The **Unassigned** suite is automatically created when importing or creating test cases without suite information. It serves as a temporary holding area for uncategorized test cases until they're moved to a defined suite. > **Note:** During CSV import, any test cases without mapped suite data are placed in the Unassigned suite. If all test cases have valid suite information, the Unassigned suite is not created. ## Related - [Test Case Structure](https://docs.testdino.com/test-management/test-case/structure): Fields, steps, custom fields, and attachments - [Organizing at Scale](https://docs.testdino.com/test-management/test-case/organizing-at-scale): Suites, tags, and bulk operations - [Import & Export](https://docs.testdino.com/test-management/import-export): Bulk import from CSV or TestRail - [Overview](https://docs.testdino.com/test-management/playwright-test-case-management): Workspace layout, KPIs, and quick start --- ## Test Case Structure in TestDino > Source: https://docs.testdino.com/test-management/test-case/structure > Description: Define test case structure in TestDino: add steps, custom fields, attachments, and view version history. Standardize test case format across your team. A test case represents one functional check or validation in your system. It defines what to test, how to test it, and the expected result. Each test case includes metadata for classification, test steps for execution, and automation flags for tracking readiness. ![Test Case Structure](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/test-case-structure.webp) ## Quick Reference | Section | Details | | :--- | :--- | | [Core fields](#core-fields) | Title, description, key | | [Classification](#classification) | Status, priority, severity, type, behavior, layer | | [Automation fields](#automation-fields) | Manual/Automated, flaky, muted | | [Test steps](#test-steps) | Classic (Action/Data/Expected) or Gherkin (Given/When/Then) | | [Custom fields](#custom-fields) | Text, textarea, number, dropdown, checkbox | | [Attachments](#attachments) | Screenshots, documents, test data files | | [Version history](#version-history) | Change tracking, comparison, restore | ## Core Fields | Field | Type | Description | | :--- | :--- | :--- | | **Title** | Text (required) | The name of the test case | | **Description** | Text | A detailed explanation of what the test case validates | | **Key (ID)** | Auto-generated | A unique identifier (e.g., TC-6297) | ## Classification | Field | Options | | :--- | :--- | | **Status** | Active, Draft, Deprecated | | **Priority** | Critical, High, Medium, Low, Not Set | | **Severity** | Blocker, Critical, Major, Normal, Minor, Trivial, Not Set | | **Type** | Functional, Smoke, Regression, Integration, E2E, API, Unit, Performance, Security, Accessibility, Other | | **Behavior** | Positive, Negative, Destructive, Not Set | | **Layer** | E2E, API, Unit, Not Set | ## Automation Fields | Field | Type | Description | | :--- | :--- | :--- | | **Automation Status** | Dropdown | Manual, Automated, To Be Automated | | **Is Flaky** | Checkbox | Mark an unreliable or unstable test | | **Is Muted** | Checkbox | Silence or skip this test | ## Pre/Post-conditions | Field | Description | | :--- | :--- | | **Preconditions** | What must be true before the test runs | | **Postconditions** | The expected system state after the test finishes | ## Test Steps Test cases support one or more steps. 
Switch between **Classic** and **Gherkin** (BDD) formats using tabs in the step editor. **Classic:** Classic steps use three fields per step: | Field | Description | | :--- | :--- | | **Action** | What action to perform | | **Test Data** | Input data (optional) | | **Expected Result** | What should happen | Click **Add Step** to append additional steps. **Gherkin (BDD):** Gherkin steps use structured keywords for behavior-driven scenarios: | Keyword | Purpose | | :--- | :--- | | **Given** | Describe the initial context or precondition | | **When** | Describe the action or event | | **Then** | Describe the expected outcome | | **And** | Add additional conditions to Given, When, or Then | | **But** | Add a negative condition or exception | Each step maps to one keyword and its description. ## Tags Add keyword tags (e.g., `smoke`, `regression`, `login`) for cross-suite categorization. Tags are comma-separated and can be applied during creation, editing, or bulk operations. ## Custom Fields Create project-specific fields beyond the built-in set. Custom fields appear alongside standard fields in the creation form and editing sheet. | Type | Use for | Example | | :--- | :--- | :--- | | **Text** | Short information | Environment: "Staging", Build: "v2.1.5" | | **Textarea** | Long notes | Special instructions, test data details | | **Number** | Numeric data | Execution time: `5` (minutes) | | **Dropdown** | Pick one option | Browser: Chrome, Firefox, Safari, Edge | | **Checkbox** | Yes/No toggle | Requires VPN: Yes / No | ### Custom Field Limits | Limit | Value | | :--- | :--- | | Dropdown options | 50 per field | | Text field length | 500 characters | | Textarea field length | 5,000 characters | > **Note:** Manage custom fields in [Project Settings](/platform/project-settings). Unmapped columns during CSV import are automatically created as custom fields. ## Attachments Attach screenshots, documents, or test data files to any test case. Attachments provide additional context for manual test execution or review. | Detail | Value | | :--- | :--- | | Max attachments per test case | 5 files | | Supported actions | Upload, view, download | > **Note:** Attachments can be enabled or disabled at the project level in [Project Settings](/platform/project-settings). ## Version History Track all changes made to a test case over time. Version history records who changed what and when, providing a full audit trail. | Feature | Description | | :--- | :--- | | **Change log** | View a chronological list of all edits | | **Diff view** | Compare any two versions side by side | | **Restore** | Revert a test case to a previous version | > **Note:** Version history can be enabled or disabled at the project level in [Project Settings](/platform/project-settings). 
## Metadata | Field | Description | | :--- | :--- | | **Created by** | Author name and timestamp | ## Related - [Creating & Editing](https://docs.testdino.com/test-management/test-case/creating-editing): Create test cases using forms, quick entry, or suite menus - [Organizing at Scale](https://docs.testdino.com/test-management/test-case/organizing-at-scale): Suites, tags, and bulk operations - [Import & Export](https://docs.testdino.com/test-management/import-export): Bulk import from CSV or TestRail - [Project Settings](https://docs.testdino.com/platform/project-settings): Configure custom fields, attachments, and version history --- ## Creating and Editing Test Cases > Source: https://docs.testdino.com/test-management/test-case/creating-editing > Description: Create test cases in TestDino using forms, quick entry mode, or suite context menus. Edit inline for quick changes or open full-screen for detailed updates. The Test Case Management tab provides multiple ways to create test cases. Each method supports a different workflow, from full detailed entry to quick creation within a suite. [Video: AI-Powered Manual Test Case Creation with TestDino MCP](https://www.youtube.com/embed/uxCpfPdgZPw?si=cowVqSsuUM49j_qN) ## Three Creation Methods You can create a new test case in three different ways, depending on the level of detail you want to add upfront. ### 1. New Test Case * Click the **New Test Case** button in the top navigation bar. * This opens the complete "Create Test Case" form with all properties: title, description, steps, and classifications. [Video: new-test-case.mp4](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/creating-editing/new-test-case.mp4) ### 2. Quick Test Creation Within Suite * At the bottom of the test case list within any suite, find the row labeled "**Create quick test**". * Type a title into the input field and press Enter. * This instantly creates a new test case in that suite with only a title. You can then double-click it to edit and add more details later. [Video: quick-test-creation-within-suite.mp4](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/creating-editing/quick-test-creation-within-suite.mp4) ### 3. From Suite Menu * Click the action menu (...) on any suite. * Select **Add Test Case**. * This opens the full creation form, but the **Test Suite** field is pre-selected with the suite you chose. [Video: from-suite-menu.mp4](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/creating-editing/from-suite-menu.mp4) > **Warning:** A test case must be assigned to a suite. This is a **required field**. If you import test cases without specifying a suite, they are automatically placed in the **"Unassigned"** suite. ## Inline Editing [Video: inline-editing.mp4](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/creating-editing/inline-editing.mp4) An individual test case can be viewed in two modes: **Sheet View** and **Full-Screen View.** ### 1. Sheet View (Default) Clicking on a test case once opens it in a **side sheet** on the right side of the screen. This sheet displays all test case information in a scrollable panel. Two quick action buttons appear at the top: * **Full Screen:** Expands the test case to occupy the full screen. * **Print:** Exports the test case details to a printable format (PDF), including all core information, steps, and metadata. 
#### Editing in Sheet View You can edit any field directly in the sheet: * Single click to open the sheet. * Double-click on any editable field to modify it or use the pencil icon next to editable fields. * After editing, confirm changes with the tick (checkmark) icon or discard with the cross (x) icon. This applies to text fields, dropdowns, and tags. All updates are saved instantly upon confirmation. ### 2. Full-Screen View The full-screen mode provides a detailed layout suitable for reviews, documentation exports, or test audits. Click the full-screen icon in the top-right corner of the sheet to open the test case in a wider layout. Editing options remain the same as in sheet view. ## Adding Test Steps When creating or editing a test case, switch between **Classic** and **Gherkin** step formats using the tabs in the step editor. **Classic:** Each step has three fields: 1. **Action** - What action to perform 2. **Test Data** - Input data (optional) 3. **Expected Result** - What should happen Click **Add Step** to append additional steps. **Gherkin (BDD):** Write behavior-driven steps using structured keywords: 1. **Given** - Describe the initial context 2. **When** - Describe the action or event 3. **Then** - Describe the expected outcome 4. **And / But** - Add conditions or exceptions Each step maps to one keyword and its description. See [Test Case Structure](/test-management/test-case/structure#test-steps) for the full field reference. ## Related - [Test Case Structure](https://docs.testdino.com/test-management/test-case/structure): Fields, steps, custom fields, and attachments - [Organizing at Scale](https://docs.testdino.com/test-management/test-case/organizing-at-scale): Suites, tags, and bulk operations - [Import & Export](https://docs.testdino.com/test-management/import-export): Bulk import from CSV or TestRail - [MCP Integration](https://docs.testdino.com/mcp/overview): Create and manage test cases with AI --- ## Organizing Test Cases at Scale > Source: https://docs.testdino.com/test-management/test-case/organizing-at-scale > Description: Scale your test case repository in TestDino using suites, tags, and bulk operations. Maintain structure as your test suite grows to hundreds of cases. Organize test cases using **suites** for hierarchical grouping and **tags** for cross-sectional categorization. ## Suite Assignment and Hierarchy The primary method of organization is the suite hierarchy. Suites represent logical areas such as features, modules, or workflows, and can contain nested **subsuites** for deeper structure. Create suites and subsuites that mirror your application's logical structure. Place new test cases into the most relevant suite. ### Using Tags for Cross-Suite Organization Tags provide a flexible, secondary organization system that works across suites. Tags solve problems that hierarchies cannot. * **For example**, you may have "Smoke" tests in many different feature suites (Login, Payments, Search). * A hierarchy cannot group them, but a smoke tag can. * Add tags like smoke, regression, p1, or v2-feature to test cases. * You can then use the **Tags filter** to find all test cases with that tag, regardless of their suite. > **Tip:** > * Each test case can have multiple tags. > * Tags can be created on the fly while editing a test case. > * You can apply tags in bulk or during import. ### Suite vs Tags: When to Use What Using both correctly keeps your test repository searchable, scalable, and intuitive. 
| Use Case | Choose | Why | | ----- | ----- | ----- | | Grouping tests by feature or component | **Suite** | Suites represent structure and ownership. | | Grouping across releases or builds | **Tag** | Tags cross suites and help track test coverage per version. | | Managing different test types | **Tag** | Add "Smoke", "Regression", or "End-to-End" tags to mix across suites. | | Permanent categorization | **Suite** | Suites persist as long-term organizational containers. | | Temporary or cross-cutting tracking | **Tag** | Ideal for temporary campaigns or sprints. | ## Bulk Operations Bulk operations apply actions to multiple test cases at once. Use the **checkboxes** beside test case rows in list/grid view to select more than one test case. Alternatively, select all test cases in a suite from the table header. ### Suites When suites are selected from the sidebar, action buttons appear, such as Delete suites and Clear. The following operations can be performed: 1. **Reorder Suites:** Change the sequence of suites within the same hierarchy level. Suites cannot be moved across different parent suites. > **Note:** > * You can reorder root-level suites with each other. > * You can reorder sub-suites only within the same parent suite. > * You cannot drag a subsuite to a different parent suite. > * Similarly, if a subsuite contains nested suites, you cannot reorder it outside its immediate hierarchy. > * Each level (root, subsuite, nested subsuite) maintains its own independent reorder scope. 2. **Delete Suites:** Permanently delete the selected suites. Deleted suites cannot be recovered. 3. **Expand/Collapse:** Toggle the visibility of nested suites when managing multiple levels simultaneously. [Video: Bulk Operation Suites](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/organizing-at-scale/bulk-operations-suites.mp4) ### Test Cases When test cases are selected in List or Grid view, action buttons appear, such as Edit, Delete, and Clear. The following actions are available: 1. **Move to Suite**: Reassign selected test cases to another suite or subsuite. 2. **Change Description**: Update the description for all selected cases. 3. **Change Precondition / Postcondition**: Update these fields for all selected cases. 4. **Change Classifications**: Batch update Status, Severity, Type, Layer, or Behavior. 5. **Change Automation Status**: Batch update Manual/Automated status and flags (To Be Automated). > **Note:** Blank fields are ignored, and existing data remains unchanged. 6. **Add/Remove Tags**: Add new tags, remove all tags, or keep existing tags for the selected cases. 7. **Delete**: Permanently delete all selected test cases. Deleted test cases cannot be recovered. [Video: Bulk Operation Tests](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/test-case/organizing-at-scale/bulk-operation-tests.mp4) > **Warning:** In bulk operations, a maximum of 200 items can be edited or deleted at a time.
## Related - [Test Suites](https://docs.testdino.com/test-management/suites): Create and manage suite hierarchy - [Test Case Structure](https://docs.testdino.com/test-management/test-case/structure): Fields, steps, custom fields, and attachments - [Creating & Editing](https://docs.testdino.com/test-management/test-case/creating-editing): Three creation methods and inline editing - [Import & Export](https://docs.testdino.com/test-management/import-export): Bulk import from CSV or TestRail --- ## Import and Export Test Cases in TestDino > Source: https://docs.testdino.com/test-management/import-export > Description: Import test cases into TestDino from CSV files or TestRail. Export filtered test case sets for external use, reporting, or migration. Import test cases in bulk from a CSV file or migrate directly from TestRail. Export filtered or selected test cases as CSV for backups, sharing, or re-import. ## Import CSV The Import option uploads test cases in bulk using a CSV file. It follows a step-by-step guided process and accepts the same column structure as exported files, so you can export, modify, and re-import. ### 1. Upload CSV File * Click **Import** from the toolbar and select a CSV file. * A **Download Sample CSV** button is available to get a template and ensure correct formatting. ![Upload CSV File](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/import-export/upload-csv-file.webp) ### 2. Map Columns * TestDino attempts to match columns from your CSV to its internal fields. * You must map the Title field, which is required. * You can map other fields, such as Description, Priority, Severity, Status, Tags, and Suite Hierarchy/Path. * Any unmapped columns from your CSV will be automatically created as custom fields in TestDino. [Video: Import CSV File](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/import-export/map-columns.mp4) ### 3. Map Enum Values * Align CSV values for dropdowns such as Priority, Severity, Type, Behavior, Layer, Status, and Automation Status to TestDino's predefined options. * You can set defaults for any unmatched values. [Video: Map Enum Values](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/import-export/map-enum-values.mp4) ### 4. Configure Duplicate Handling Decide how TestDino handles identical titles already present in the project: * **Skip Duplicate**: Ignores the new test case from the CSV. * **Update Existing**: Overwrites the existing test case's data with the CSV data. * **Create Duplicate**: Creates a new test case, resulting in two tests with the same title. ![Configure Duplicate Handling](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/import-export/configure-duplicate-handling.webp) ### 5. Preview & Import * Review a sample preview of the first row to confirm your data is mapped correctly. * Click **Start Import** to begin the process. [Video: Preview & Import](https://testdinostr.blob.core.windows.net/docs/docs/testcase-management/import-export/preview-import.mp4) > **Note:** If any test case lacks suite information, TestDino automatically creates an Unassigned suite and adds the test case to it. ## Import from TestRail Migrate test cases from TestRail with automatic field mapping and suite structure preservation. ### Export from TestRail Export your test cases from TestRail as a CSV file. ### Upload to TestDino Click **Import** from the toolbar and select the TestRail CSV file. 
### Review auto-mapping TestDino automatically maps TestRail fields to their TestDino equivalents. Review the mapping and adjust if needed. | TestRail field | TestDino field | | :--- | :--- | | Title | Title | | Section | Suite | | Priority | Priority | | Type | Type | | Custom fields | Auto-created as custom fields | ### Import Click **Import** to complete the migration. Suite hierarchy from TestRail is preserved. > **Tip:** Columns from TestRail that do not match a built-in TestDino field are automatically created as custom fields in your project. ## Download CSV Template Click **Download Sample CSV** in the import dialog to get a pre-formatted template. The template includes all supported columns with example data. Fill in your test cases and upload to import.
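As a rough sketch, a filled-in row can look like the following. The column set is assumed from the mappable fields named above (Title, Description, Priority, Severity, Status, Tags, Suite); the downloaded sample CSV is the authoritative reference for the exact headers.

```csv
Title,Description,Priority,Severity,Status,Tags,Suite
"Login with valid credentials","User signs in with a correct email and password",High,Major,Active,"smoke,login","Core Features/Auth"
```

Keep in mind that Title is the only required mapping; any unmapped columns become custom fields during import.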
## Export CSV Export test cases as a CSV file directly from the Test Cases page. The export adapts to what is selected or filtered in your current view. Exports always contain the same data columns and structure, but the scope of data depends on what is filtered or selected before exporting. ### How to Export? 1. Open the Test Cases tab. 2. Apply filters or select what you want to export. 3. Click the Import/Export icon on the top-right of the toolbar. 4. Choose Export Selected (#) if test cases are selected, or Export All if exporting all visible results. 5. The CSV file downloads automatically and includes all visible columns and metadata. A confirmation banner appears on successful export, e.g., "4 test cases exported successfully." > **Warning:** **Best Practices:** > > * Apply filters beforehand to target only the relevant test cases in your export. > * Select just the cases you need to keep the CSV file concise. > * Export entire suites to back up all test cases for a module and its subsuites. > * Reuse exported CSVs as templates for streamlined importing or bulk updates. ### Export Scenarios | Scenario | Export Result | | :--- | :--- | | No filters, no selections applied | Exports all test cases in the project. The number matches the Total shown in the KPI tile at the top. | | Filters applied, no selections made | Exports only the filtered test cases visible on the page, across all suites. The export respects your active filters regardless of the suite hierarchy. | | One or more test cases selected (no filters) | Exports only the selected test cases. The export button label updates to show the count, for example: Export Selected (4). | | Filters applied and individual test cases selected | Exports only the test cases you selected from the filtered list. The filtered set limits the selection. | | One entire suite selected | Exports all test cases and sub-suites under that suite. The export includes all nested content within the selected suite. | | Multiple suites selected | Exports all test cases contained within the selected suites and their nested subsuites. | ## Related - [Test Case Structure](https://docs.testdino.com/test-management/test-case/structure): Fields, steps, custom fields, and attachments - [Test Suites](https://docs.testdino.com/test-management/suites): Create and manage suite hierarchy - [Organizing at Scale](https://docs.testdino.com/test-management/test-case/organizing-at-scale): Suites, tags, and bulk operations - [MCP Integration](https://docs.testdino.com/mcp/overview): Create and manage test cases with AI --- ## TestDino Organizations Overview > Source: https://docs.testdino.com/platform/organizations/overview > Description: Manage your TestDino organization: configure account settings, create projects, manage users, and control access for your entire team. The organizations view is where you create new organizations, switch between existing ones, and enter each organization's workspace. Work, billing, users, and projects live inside an organization. Selecting the correct organization ensures you see the right projects, members, and settings. [Video: Organization overview](https://testdinostr.blob.core.windows.net/docs/docs/organization/organization.mp4) ## Quick Start Steps 1. **Create an organization** * Click **Add Organization** and enter the name, description, and website * Select **Create**. 2. **Open or switch organization.** Click an organization name in the list to enter its workspace. 3. **Create your first project.** In the organization workspace, go to **Projects** and follow the given steps. --- ## TestDino Projects for Playwright Testing > Source: https://docs.testdino.com/platform/organizations/projects > Description: Create and manage TestDino projects to isolate Playwright test data, API keys, and analytics by team, repo, or environment. Projects isolate test data, keys, integrations, and analytics. Actions in one project do not affect others. [Video: Projects overview](https://testdinostr.blob.core.windows.net/docs/docs/organization/projects.mp4) ## Quick Start Steps 1. **Create a project** - Select **New Project**, provide a **Name** (3-30 characters) and an optional **Description** (10-350 characters). Paid plans allocate test executions per project via [Test Limits](/platform/billing-and-usage/test-limits). 2. **Get started** - After creation, run sample tests for a pre-populated dashboard, or skip to set up your own tests. 3. **Open a project** - Click a project card to enter the workspace. - [Dashboard](https://docs.testdino.com/getting-started): For high-level project health and key metrics - [Test Runs](https://docs.testdino.com/cli/overview): Review Test Runs for execution details and evidence - [Pull Requests](https://docs.testdino.com/integrations/ci-cd/github): Track Pull Requests linked to your test results - [Test Case Management](https://docs.testdino.com/platform/playwright-test-cases): Browse Test Cases for management and organization - [Test Explorer](https://docs.testdino.com/platform/playwright-test-explorer): Analyze test health and history across spec files and test cases - [Analytics](https://docs.testdino.com/mcp/overview): Analyze trends and errors for your project - [Settings](https://docs.testdino.com/test-management/playwright-test-case-management): Manage Settings for API keys, integrations, and environment mapping --- ## TestDino Users, Roles, and Permissions > Source: https://docs.testdino.com/platform/organizations/users-roles > Description: Manage team members, assign roles, and control access permissions in TestDino. Invite users and define what each role can view or modify.
In **Users & Roles**, membership is organization-level, so roles apply across all projects in the organization. Invites support two paths: - **Org users** join the organization and can be granted any role. - **Guest users** are time-bounded collaborators with limited access (when enabled by your plan). Assign roles to enforce least privilege and keep ownership, administration, and billing separate. ![User & role](https://testdinostr.blob.core.windows.net/docs/docs/organization/user-and-roles.webp) ## Role Capabilities | Organization Role | Can Invite | Can Update | Can Remove | | :---------------- | :-------------------- | :------------- | :------------- | | **Owner** | All | All | All | | **Admin** | Admin, Member, Viewer | Member, Viewer | Member, Viewer | | **Member** | Member, Viewer | Viewer | Viewer | | **Viewer** | Viewer | None | None | ## Quick Start Steps 1. **Invite a member** Click **Invite member**, enter an email, choose a role, optionally mark **External user** (Guest User), then send. 2. **Change a role** Use the role dropdown in the table. 3. **Filter members** Use **All Roles** to show **Admin**, **Member**, or **Viewer** only. - **Owner** holds ultimate control. Keep this role small and monitored. - **Admin** manages people and settings. - **Member** contributes to projects. - **Viewer** has read-only access for audits and stakeholders. --- ## TestDino Organization Settings > Source: https://docs.testdino.com/platform/organizations/settings > Description: Configure your TestDino organization profile, name, identifiers, and account-level preferences from the organization settings page. Keep this information accurate for branding, audit, and support. ![Settings](https://testdinostr.blob.core.windows.net/docs/docs/organization/settings.webp) - **Organization name** and **Organization website** can be edited. Edits enable **Save Changes**. - **Organization ID** and **Created on** are read-only for reference and support tickets. - **Organization logo** supports PNG or JPG, up to 5 MB. Use it to brand project dashboards and reports. ## Quick Start Steps 1. **Edit profile:** Update the name or website, then **Save Changes**. 2. **Upload a logo:** Click **Change Logo**, select an image, and save. 3. **Secure ownership:** - Keep one Admin to allow ownership transfer when needed. - Contact [support@testdino.com](mailto:support@testdino.com) for ownership disputes or deletion requests. > **Note:** Only owners and admins can update organization settings, manage members, or delete the organization. For account-wide issues, contact [support@testdino.com](mailto:support@testdino.com). --- ## TestDino Billing and Usage Overview > Source: https://docs.testdino.com/platform/billing-and-usage/overview > Description: Monitor your TestDino subscription status, plan limits, and test execution usage in real time. Understand usage before hitting plan thresholds. The Billing & Usage page displays your current subscription, usage metrics, and plan features. Access it from the organization sidebar. The page contains three tabs: **Usage**, **[Test Limits](/platform/billing-and-usage/test-limits)**, and **[Invoices](/platform/billing-and-usage/invoices)**. ## Usage ![Usage tab showing subscription plan, test case usage, projects, users, and plan features](https://testdinostr.blob.core.windows.net/docs/docs/billing-usage/usage-billing.webp) The Usage tab shows your active plan and current consumption at a glance.
### Subscription Card | Field | Description | | :--- | :--- | | Plan | Current plan name (Community, Pro, Team, Enterprise) and status | | Test Case Usage | Executions consumed vs. monthly limit | | Projects | Active projects vs. plan limit | | Users | Organization members vs. plan limit | | Data Retention | How long test run data is stored | | Billing Period Ends | Next billing cycle date | ### Plan Features A feature list shows capabilities included in your current plan. Feature availability varies by plan tier. | Feature Category | Examples | | :--- | :--- | | CI/CD Features | GitHub Actions, GitLab CI integration | | PR Features | Pull request summaries and status checks | | Debugging Features | Traces, screenshots, video playback | | Integrations | Jira, Linear, Asana, Slack | | Quality Metrics | Flaky test detection, error grouping | | Test Case Management | Manual test case creation and suites | ## Related Test limits, invoices, and manage billing. - [Test Limits](https://docs.testdino.com/platform/billing-and-usage/test-limits): Allocate and redistribute test execution limits across projects - [Invoices](https://docs.testdino.com/platform/billing-and-usage/invoices): View, download, and filter billing invoices - [Organization Settings](https://docs.testdino.com/platform/organizations/settings): Configure organization-level preferences --- ## Test Execution Limits > Source: https://docs.testdino.com/platform/billing-and-usage/test-limits > Description: View and redistribute monthly Playwright test execution limits across projects in your TestDino organization. Manage quotas by plan tier. The Test Limits tab displays your monthly test execution quota, how it is distributed across projects, and tools to redistribute limits. ![Test Limits tab showing monthly quota, available executions, project allocations, and auto-borrow controls](https://testdinostr.blob.core.windows.net/docs/docs/billing-usage/test-limits.webp) ## Overview Cards Three cards summarize your current allocation status. | Card | Description | | :--- | :--- | | Monthly Test Limit | Total executions included in your plan, with a usage bar showing consumed count | | Available Now | Unallocated executions available for assignment to projects | | Current Period | Days remaining until the quota resets, with the reset date | ## Project Allocations Each project displays its allocation with three values. | Field | Description | | :--- | :--- | | Used / Allocated | Executions consumed vs. total assigned to the project | | Remaining | Executions still available within the project's allocation | | Usage % | Percentage of allocated limit consumed | Projects are sorted by usage. ### Move Between Projects Redistribute executions between projects. 1. Select **Move Between Projects**. 2. Choose the source and target projects. 3. Enter the number of executions to transfer. 4. Confirm the transfer. Transfers take effect immediately. ### Auto-Borrow When a project exhausts its allocation, it borrows from the unallocated pool. Test runs continue without interruption even when a project exceeds its limit. - **Auto-Borrow: ON** allows projects to borrow unallocated executions automatically - **Auto-Borrow: OFF** blocks test streaming when a project reaches its limit Configure Auto-Borrow using the **Auto-Borrow Settings** button. Borrowed executions reduce the unallocated pool and affect availability for other projects. ## Related Billing overview, invoices, and retention. 
- [Usage Overview](https://docs.testdino.com/platform/billing-and-usage/overview): View subscription status and plan features - [Invoices](https://docs.testdino.com/platform/billing-and-usage/invoices): View, download, and filter billing invoices --- ## Manage TestDino Billing and Subscription > Source: https://docs.testdino.com/platform/billing-and-usage/manage-billing > Description: Upgrade, downgrade, or cancel your TestDino subscription. Manage payment methods, billing contacts, and plan changes from the billing settings. The Manage Billing tab controls plan changes, upgrades, downgrades, and cancellations for your organization. ![Manage Billing tab showing current plan and subscription options](https://testdinostr.blob.core.windows.net/docs/docs/billing-usage/manage-billing.webp) ## Change Plan To switch plans: 1. Open **Billing & Usage** from the organization sidebar. 2. Select the **Manage Billing** tab. 3. Click **View All Plans** to compare available options. 4. Select the target plan and confirm the change. Upgrades take effect immediately. Downgrades apply at the end of the current billing period. > **Warning:** Downgrading reduces plan limits, including data retention. Historical data beyond the new retention window is permanently removed after the downgrade takes effect. ## Subscription Types ### Monthly Billed each month on the renewal date. Usage resets on the organization's usage cycle. Overage is billed in the next invoice. ### Annual Prepaid for twelve months. Usage resets on the standard monthly cycle. ## Billing Cycle vs Usage Cycle - **Usage cycle** resets monthly on the organization's reset date. - **Billing cycle** is either monthly or annual, depending on the subscription. | Usage Cycle | Billing Cycle | Notes | | :--- | :--- | :--- | | Monthly | Monthly | Standard subscription | | Monthly | Annual | Annual prepay with monthly usage resets | | Annual | Annual | Enterprise contracts only | | Annual | Monthly | Not supported | ## Cancel Subscription To cancel, open the Manage Billing tab and select **Cancel Subscription**. | Behavior | Detail | | :--- | :--- | | Access | Continues until the current billing period ends | | Charges | No future charges after cancellation | | Plan | Organization moves to the Community plan automatically | | Data Retention | Reduced to Community plan limits | - [Billing Overview](https://docs.testdino.com/platform/billing-and-usage/overview): View subscription status and test limit allocation. - [Invoices](https://docs.testdino.com/platform/billing-and-usage/invoices): View, download, and filter billing invoices. --- ## TestDino Billing Invoices and History > Source: https://docs.testdino.com/platform/billing-and-usage/invoices > Description: Access, download, and filter past billing invoices for your TestDino organization. Keep records of all subscription charges and usage fees. The Invoices tab lists all billing records for your organization. View and download payment invoices for accounting and audits. ![Invoices tab showing invoice list with status, amount, and download actions](https://testdinostr.blob.core.windows.net/docs/docs/billing-usage/invoices.webp) ## Invoice List Each invoice row displays the following columns. 
| Column | Description | | :--- | :--- | | ID | Unique invoice identifier | | Customer | Organization or account name | | Payment ID | Payment processor reference | | Amount | Total billed amount | | Status | Paid, Pending, or Failed | | Date | Invoice issue date | ## Filters and Search Narrow the invoice list using the available controls. | Filter | Options | | :--- | :--- | | Status | Filter by Paid, Pending, or Failed | | Time Range | Restrict results to a specific date range | | Search | Find by invoice ID, customer name, or email | ## Actions Each invoice row provides quick actions. | Action | Description | | :--- | :--- | | View | Open the hosted invoice in a new tab | | Download PDF | Save a PDF copy for records | | Copy Link | Copy a secure link to the invoice | Click **Sync** at the top of the invoice list to pull the latest records from the billing provider. - [Billing Overview](https://docs.testdino.com/platform/billing-and-usage/overview): View subscription status and test limit allocation. - [Test Limits](https://docs.testdino.com/platform/billing-and-usage/test-limits): Allocate and redistribute test execution limits across projects --- ## Project Settings > Source: https://docs.testdino.com/platform/project-settings > Description: Configure all project settings in TestDino: API key management, usage limits, third-party integrations, and branch-to-environment mapping in one place. Update project metadata, track monthly execution quota, issue and rotate API keys, connect [Integrations](/integrations/overview), and map branches to environments. ## General ![Basic settings](https://testdinostr.blob.core.windows.net/docs/docs/setting/settings/general-setting.webp) Define the project identity and context shown across the product. * **Project ID.** Read-only identifier used by our platform or support. * **Project Name.** Display name in headers and menus. * **Description.** Short note to describe scope or ownership. * **Danger Zone.** Permanently delete this project and all of its data. This action cannot be undone. Click **Delete Project** to proceed. > **Warning:** Before deleting, revoke API keys, disconnect webhooks, and export any required reports. ## API Keys ![api keys](https://testdinostr.blob.core.windows.net/docs/docs/setting/api-keys/api-key-table.webp) Create and manage credentials for local or CI pipelines and tools that send data to TestDino. ### Create a Key 1. Select **Generate Key**. 2. Enter a **Key Name** and **Expiration (days)**, 1 to 365. 3. Create the key and store the secret in your secret manager. ### Manage Keys * **Copy or view details.** Use row actions to retrieve metadata as allowed. * **Rotate.** Create a replacement key, update CI, then revoke the old key. * **Revoke or delete.** Immediately invalidates the key. > **Tip:** Prefer short expirations for CI. Rotate if exposure is suspected. ## Automated Reports Automated Reports generate PDF summaries of test execution data and send them to specified recipients on a recurring schedule. ### Create a Report 1. Click **Create Automated Report**. 2. Enter a **Report Name**. 3. Add recipients, configure the schedule, and apply optional filters. 4. Select **Create**. Use the **Enable/Disable** toggle to control whether the report generates on schedule. ### Recipients Add one or more email addresses. Each recipient has a type selector for **To**, **CC**, or **BCC**. 
### Schedule | Setting | Options | | :--- | :--- | | Frequency | Daily, Weekly, or Monthly | | Time (UTC) | 00:00 to 23:00 (local timezone shown in parentheses) | | Day of Week | Sunday to Saturday (Weekly frequency only) | | Report Time Period | Lookback window of 1 to 30 days | ### Filters Filters are optional. Narrow report data by tags or environment. | Filter | Description | | :--- | :--- | | Tags | Add one or more tags to scope the report | | Environment | Select from branch environment mappings | ### Report Actions | Action | Description | | :--- | :--- | | Preview | Download a sample PDF before sending | | Edit | Update recipients, schedule, filters, and time period | | Pause/Resume | Temporarily disable or re-enable a report | | Delete | Remove a report configuration | For step-by-step setup and report content details, see the [Automated Reports Guide](/guides/automated-playwright-reports). ## TestDino Add-ons ### Status Badges Embed live SVG badges in GitHub or GitLab READMEs that display test health, flakiness, and test counts from the latest completed run. Configure badges from **Integrations → TestDino Add-ons → Status Badges**. **GitHub:** ![GitHub status badges configuration](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/github-status-badges.webp) **GitLab:** ![GitLab status badges configuration](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/gitlab-status-badges.webp) For badge types, color scales, and setup steps, see the [Status Badges guide](/guides/test-health-badges). ## Integrations A central place to connect CI/CD, communication, and issue tracking. For installation, permissions, and workflows, see [**Integrations**](/integrations/overview). ### 1. CI/CD ![CI/CD](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/ci-cd.webp) **GitHub** - Test-run summaries on commits and PRs. **GitLab** - Test-run summaries on merge requests and commits. Only one Git provider (GitHub or GitLab) can be active per project. **TeamCity** - Upload Playwright test reports from your TeamCity builds directly to TestDino. ### 2. Issue Tracking ![Issue Tracking](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/issue-tracking.webp) To create issues from failed or flaky tests, use: * [Jira](/integrations/jira-playwright-test-failures) * [Linear](/integrations/issue-tracking/linear) * [Asana](/integrations/issue-tracking/asana) * [monday](/integrations/issue-tracking/mon) ### 3. Communication ![Communication](https://testdinostr.blob.core.windows.net/docs/docs/setting/integration/communication.webp) * **Slack Webhook** - posts a test run summary to the Slack channel. * **Slack App** - posts a test run summary to the Slack channel mapped to the run's branch environment. > **Note:** If no environment mapping matches, TestDino posts the summary to the default Slack channel. > > **Remember**: Map each branch pattern to an environment and select a Slack channel for that environment; environment mapping takes precedence over the default channel. This applies to GitHub as well. ### Typical Actions * Connect the integration using OAuth, then grant the required scopes. * Configure targets, such as the default project, team, or channel, and map by environment or branch where supported. * For **GitHub:** Enable bot comments on pull requests and commits, and choose the branches or environments that should receive summaries. 
* For **Slack App:** Set a default channel, map environments or branch patterns to channels, refresh the channel list, and send a test message. ## Branch Mapping Branch mapping assigns repository branches to specific environments (production, staging, etc.) using exact names or patterns, ensuring your results display in the correct environment throughout the platform. ![branch mapping](https://testdinostr.blob.core.windows.net/docs/docs/setting/branch-mapping/branch-mapping-table.webp) ### Why does it matter? End-to-end tests often run on pull requests and short-lived branches. Without mapping, those runs fragment across dozens of branch names. Mapping rolls them up to the right environment, so pass rates, volumes, and alerts reflect reality. ### Add or edit an environment 1. Enter **Name** and a short **Label** used in chips and filters. 2. Optionally set a **Description** and color. 3. Define **Branch patterns**: * **Exact match** to bind a single branch, for example, `main`. * **Pattern match** to bind many branches, for example, `feature/*`, `release/*`, `hotfix/*`. 4. Save. Changes can take up to 2 minutes to apply. > **Warning:** > * Environments and branches: A development or staging environment typically encompasses multiple branches, for example, `feature/123` or `user/td-123`. Production typically maps to one protected branch, such as `main` or `master`. Tests triggered when a PR is opened or merged execute on the PR's head branch; mapping ensures those runs are attributed to the correct environment. > * Keep labels short, for example, PROD, STAGE, DEV. > * Review patterns when you add long-lived branches. > * Limit: up to 10 environments per project. ### CLI Environment Override [Video: CLI Environment Override video](https://www.youtube.com/embed/2jUSi6EZEqw?si=Tkos9cRpbp_5p0dn) You can bypass branch mapping and assign test runs to a specific environment directly from the CLI. #### 1. How it works When enabled, the `--environment` flag in your upload command takes priority over branch mapping rules. If you specify `--environment=staging`, that run goes to staging regardless of which branch triggered it. This is useful when: * You run tests against multiple environments (prod, stage) from a single commit * Your CI pipeline targets specific environments that don't match your branch naming * You need manual control over the environment assignment for certain runs #### 2. Enable CLI Environment Override 1. Go to **Project Settings → Environment Settings** 2. Turn on **CLI Environment Override** 3. Click **Save** to apply the changes. With the toggle off (default), the CLI flag is ignored, and branch mapping rules apply. #### 3. Using the flag Add `--environment` to your upload command: ```bash npx tdpw upload ./playwright-report --token="your-token" --environment="staging" ``` > **Warning:** > * Maximum 10 environments per project. If you've hit the limit, the CLI run still succeeds but uses branch mapping instead. You'll see a warning in the CLI output. > * Environment names must be valid. Invalid characters cause the upload to fail. ### How CLI runs interact with branch mapping updates * **CLI-created runs**: When you update branch mapping rules, test runs created via the `--environment` flag keep their original environment. The CLI value is preserved. * **Branch-mapped runs**: Existing runs created through branch mapping update to reflect the new rules. 
--- ## TestDino Playwright Test Dashboard > Source: https://docs.testdino.com/platform/playwright-test-dashboard > Description: Unified Playwright test dashboard for KPIs, failures, flaky tests, and trends. The Dashboard displays Playwright test health in a single view. KPI tiles, recent runs, pull requests, execution trends, flaky tests, and slowest tests are all visible without switching between views. [Video: Dashboard video](https://www.youtube.com/embed/SoYwbdolz6g?si=4XhExWBY1TD0x5H3) ## Quick Reference | Section | What It Shows | |:---|:---| | KPI Tiles | Total executions, passed, failed, and average run duration | | Recent Test Runs | Latest runs with status badges and metadata | | Recent Pull Requests | PR activity with merge/open status | | Test Case Execution Trend | Daily pass/fail volume over time | | Most Flaky Tests | Tests ranked by flakiness with severity and count | | Slowest Tests | Tests ranked by duration with stability indicator | ## Getting Started 1. **Open the project** — Go to Organization, then select your Project. 2. **Set the period** — Choose a time window (Last 7, 14, or 30 days) from the top-right filter. 3. **Review the dashboard** — All sections load for the selected scope. ## KPI Tiles Four metric tiles at the top of the dashboard for the selected period. Each tile includes a trend badge showing percentage change compared to the previous period. Green indicates improvement; red indicates regression. - **Total Test Case Execution**: Number of test cases executed in the selected period. Gauges coverage and CI activity. - **Passed Test Case**: Count of tests that passed. Confirms stability and that recent fixes hold. - **Failed Test Case**: Count of tests that failed. Direct queue for triage and fixes. - **Avg Run Duration**: Average time per test run. Spots slow suites and tracks pipeline efficiency. ## Recent Test Runs Displays the latest test runs with status and metadata. Each row includes the run status icon, run ID and title linking to full details, color-coded result badges for passed (green), failed (red), flaky (yellow), and skipped (grey) tests, plus metadata like author, time elapsed, branch name, and environment tag. Click "View all Test Runs" to open the full Test Runs page. ## Recent Pull Requests Shows recent PR activity across the project. Each row includes the PR status (Merged or Open), PR title and number linking to the PR, and author with timestamp showing when it was last updated. ## Test Case Execution Trend This chart shows the daily breakdown of passed and failed test case executions over the selected time period. A rising green area signals growing stability; red spikes indicate regressions or environment issues. Hover over a date to see exact counts. Use it to spot failure spikes, compare day-over-day volume, and correlate changes with deployments or code updates. It shows two daily metrics: - **Passed Test Cases**: The number of test cases that passed each day. Track this to gauge suite stability and confirm improvements after fixes. - **Failed Test Cases**: The number of test cases that failed each day. Use this to estimate triage load and verify that the failure volume is trending downward. ## Most Flaky Tests Tests that pass and fail across runs, ordered by flakiness. Helps QA prioritize stabilization and developers target fragile areas. Each test is also clickable to immediately view the latest run for that test. ## Slowest Tests Tests that take the longest to execute, ordered by duration. 
Helps identify tests that slow down CI pipelines and are candidates for optimization. Each test is also clickable to immediately view the latest run for that test. --- ## Playwright Pull Request Test Summary > Source: https://docs.testdino.com/platform/pull-requests/summary > Description: View Playwright test results, commit history, and code changes for every pull request in TestDino. Spot failures and regressions before merging. The Pull Requests view lists all PRs with their latest test runs. Click any PR to open a detailed view with three tabs: [Overview](/platform/pull-requests/overview), [Timeline](/platform/pull-requests/timeline), and [Files Changed](/platform/pull-requests/files-changed). [Video: Pull Requests in TestDino](https://www.youtube.com/embed/ntZG8IM6Sa8) ## Why Use This View | Benefit | Description | | ----- | ----- | | Run context at a glance | See run ID, duration, and pass/fail/flaky/skipped counts per PR | | Open for proof | Click a row to view [failure clusters](/guides/playwright-error-grouping), specs, logs, screenshots, and traces | | Verifiable history | Expand a PR to see all runs and confirm whether retries stabilized tests | | Fast handoff | Jump from PR to a failing run or test case to file an issue | | Integrated code review | Review diffs in Files Changed alongside test results | | Filterable timeline | Trace which commits led to specific failures or fixes | ## Layout ![Pull Requests list view showing PR metadata, test run results, and status columns](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/layout.webp) Each row represents a PR and includes: - **Pull request metadata:** title, number, author, and state - **Latest test run:** test run ID, start time, and duration - **Test results summary:** counts for passed, failed, flaky, and skipped - **Row expander:** full run history for that pull request ### Pull Request State Shows PR title, number, author, and state badge. - **Open:** Active PR under review. New commits trigger runs. - **Merged:** Changes integrated into the base branch. History retained. - **Closed:** PR closed without merging. History remains, but no new runs trigger. > **Tip:** Click the **PR number** to open the PR in GitHub or the merge request in GitLab. ### Filters and Controls Use the controls to narrow the list and refresh data: - Search by pull request title or number - Filter by status and author - Sort by newest (or other available sort order) - Sync to refresh the list ## Pull Request Detail View Select a pull request to open its detail view. ### Overview Use this tab to review the current test health for the pull request and open the latest run for deeper debugging. ### Timeline Use this tab to correlate activity over time. It can include: - Commits - Test runs - Pull request and code comments Use the timeline to identify when failures began and whether subsequent runs improved. ### Files Changed Use this tab to review code diffs associated with the pull request while keeping test results in context. ## Quick Start Steps ### Set Scope Filter the PR list by Status and Author. ### Scan Rows Scan the Latest Test Run and Test Results columns to identify risky pull requests. ### Open Detail View Click any PR row to open the pull request detail view. ### Analyze Context Use Overview for current status, Timeline for cause and sequence, and Files Changed for the related code diff. > **Tip:** If needed, open a failing run to inspect logs, screenshots, traces, and console output. 
## What You Get Beyond GitHub | Feature | Description | | ----- | ----- | | [Test run summary per PR](/platform/pull-requests/overview) | View all test runs per PR with pass/fail/flaky counts without opening CI pages | | [One click triage](/platform/playwright-test-runs) | Jump from PR to full test run and create Jira, Linear, Asana, or monday issues | | [Flakiness visibility](/guides/playwright-flaky-test-detection#quick-reference) | View run-to-run patterns to confirm fixes or spot recurring instability | | [Environment context](/guides/environment-mapping) | Results mapped to environments (dev, stage, prod) for quick impact assessment | | [Code diffs in context](/platform/pull-requests/files-changed) | Files Changed shows additions and deletions alongside test results | | [Timeline filtering](/platform/pull-requests/timeline) | Filter by event type to show only test runs, commits, or comments | ## Related Timeline, files changed, and PR overview. - [Overview](https://docs.testdino.com/platform/pull-requests/overview): Check PR health and test status at a glance - [Timeline](https://docs.testdino.com/platform/pull-requests/timeline): Chronological log of commits, test runs, and review activity - [Files Changed](https://docs.testdino.com/platform/pull-requests/files-changed): Review code diffs and modifications inside TestDino --- ## Playwright Pull Request Test Overview > Source: https://docs.testdino.com/platform/pull-requests/overview > Description: Get an instant overview of pull request health in TestDino. See Playwright test status, failure counts, and flakiness for every open and merged PR. The Overview tab shows the current status of a pull request. Assess PR health and decide whether to review, rerun, or fix. ## PR Header The header displays the primary context for the PR: - **PR Title and Number:** Name and unique identifier - **Status Badge:** Current state (Open, Draft, Merged, Closed) - **View PR Link:** Direct link to the PR in GitHub or merge request in GitLab - **Branch Information:** Source and target branches - **Timestamp:** Time of last update or creation ## Sidebar ![PR header showing title, status badge, branch info, and sidebar with author and file changes](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/header.webp) A panel displaying PR metadata and activity: - Author, reviewer(s), and assignees - Total files changed - Aggregate code changes (additions and deletions) - Created and last updated timestamps ## KPI Tiles ![KPI tiles showing test runs count, pass rate percentage, files changed, and average duration](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/kpi-tiles.webp) Four metrics summarizing PR activity: | Tile | Description | | ----- | ----- | | Test Runs | Total number of test runs executed for this PR | | Pass Rate | Aggregate pass rate across all test runs | | Files Changed | Total lines added and deleted | | Average Duration | Average execution time per test run | ## Latest test run ![Latest test run card showing passed, failed, flaky, skipped counts](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/latest-test-run.webp) This card shows results from the most recent test execution: - **Status Counts:** Passed, failed, flaky, and skipped tests for this run - **Duration:** Execution time for this specific run - **Test Run Insights:** Summary with pass/fail counts and key failure details > **Tip:** Select **View full report** to open the full run detail for that execution. 
## Test results trend ![Test results trend graph plotting passed, failed, flaky, and skipped counts over time](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/test-results-trend.webp) A graph showing test results across all runs for this PR: - **Trend Graph:** Plots passed, failed, flaky, and skipped counts over time - **Interactive Tooltip:** Hover over any point to see exact counts for that run > **Tip:** Use the **filters** to change the scope by **Author** or **time period** (Last 7 runs, Last 14 runs). ## Related Summary, timeline, and files changed. - [Timeline](https://docs.testdino.com/platform/pull-requests/timeline): Chronological log of commits, test runs, and review activity - [Files Changed](https://docs.testdino.com/platform/pull-requests/files-changed): Review code diffs and modifications inside TestDino - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View detailed test run results and analysis --- ## Playwright Pull Request Timeline View > Source: https://docs.testdino.com/platform/pull-requests/timeline > Description: View a chronological timeline of commits, Playwright test runs, and review activity for each pull request directly in TestDino. The **Timeline** tab displays all events associated with a pull request in a single feed. Use it to track which commits triggered specific test outcomes and trace PR history within TestDino. [Video: Timeline view showing commits, test runs, and code review events in chronological order](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/timeline.mp4) ## Event Types The timeline shows several types of entries: | Type | Description | | ----- | ----- | | Commit with Test Run | Commit message with pass/fail/flaky/skipped counts. Clickable to open the test run. | | Commit Only | Commit details (message, author, SHA) with no linked test run. Appears when no run was executed. | | Code Reviews | Review events and comments synced from GitHub or GitLab. | > **Note:** The code review events and comments from GitHub can be filtered. ## Filtering and Sorting Controls at the top of the timeline: | Control | Options | | ----- | ----- | | Search | Filter by keywords | | Author | Show events from specific authors | | Data Type | All Data, Comments, Code Reviews, Commits, Test Runs | | Status | All, Resolved, Unresolved | | Sort | Newest, Oldest | ## Common Actions - **Open test run:** Click any entry with test stats to view full details in TestDino - **Open in GitHub or GitLab:** Click a commit message or "**View commit**" link - **Refresh the timeline:** Click Sync in the sidebar to fetch the latest events ## Related Summary, files changed, and PR overview. - [Overview](https://docs.testdino.com/platform/pull-requests/overview): Check PR health and test status at a glance - [Files Changed](https://docs.testdino.com/platform/pull-requests/files-changed): Review code diffs and modifications inside TestDino - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View detailed test run results and analysis --- ## Pull Request Files Changed in TestDino > Source: https://docs.testdino.com/platform/pull-requests/files-changed > Description: Review code diffs and file-level changes associated with a pull request inside TestDino. See which files changed alongside Playwright test results. This tab displays all file changes associated with a pull request or merge request. Review diffs, additions, and deletions directly within TestDino. 
> **Note:** Requires an active [GitHub](/integrations/ci-cd/github) or [GitLab](/integrations/playwright-gitlab-ci) integration. If not connected, a prompt to connect appears. [Video: Files Changed tab showing code diffs with added and removed lines highlighted](https://testdinostr.blob.core.windows.net/docs/docs/pull-request/file-changed.mp4) ## Layout | Element | Description | | ----- | ----- | | PR Header | Pull request title, number, status, source, and target branches | | File List | All files added, modified, or deleted in the PR | | Diff Viewer | Expandable area showing line-level changes per file | ## Common Actions - **Review diffs:** Expand any file to see added (green), removed (red), and unchanged (white) lines - **View comments:** See code-level comments and their Resolved or Unresolved status within the diff - **Search files:** Use the search bar to find files by name or path - **Filter files:** Use the dropdown to show all files, or only added or modified files - **Expand/Collapse:** Use controls to manage the file list view ## Related Summary, timeline, and PR overview. - [Overview](https://docs.testdino.com/platform/pull-requests/overview): Check PR health and test status at a glance - [Timeline](https://docs.testdino.com/platform/pull-requests/timeline): Chronological log of commits, test runs, and review activity - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View detailed test run results and analysis --- ## Playwright Test Runs > Source: https://docs.testdino.com/platform/playwright-test-runs > Description: View, filter, and debug every Playwright test run. Access traces, screenshots, and error groups from one dashboard. The Test Runs page lists every Playwright test execution in your project. Identify failing or flaky runs, confirm where they happened (branch and environment), and open detailed evidence for debugging. [Video: TestRun video](https://www.youtube.com/embed/5QMQms3wl6s?si=1fpIczlzVarYClzX) ## Search and Filters | Controls | Purpose | Options | | ----- | ----- | ----- | | **Search** | Find runs by text or ID | Commit message, run number (for example, \#1493) | | **Time Period** | Limit runs to a date range | Last 24 hours, 3 days, 7 days, 14 days, 30 days, Custom | | **Test Status** | Filter by outcome | Passed, Failed, Skipped, Flaky | | **Duration** | Sort by runtime | Low to High, High to Low | | **Author** | Show runs by author | Select one or more authors | | **Environment** | Focus on a mapped environment | production, development, hotfix | | **Branch** | Scope by branches | Select one or more branches | | **Tags** | Filter by run-level or test-case-level tags | Switch between **Run Tags** and **Case Tags** tabs, search, then select one or more tags | ### Tags Filter The Tags dropdown contains two tabs: | Tab | What it filters | | :--- | :--- | | **Run Tags** | Tags attached to the entire test run via the `--tag` CLI flag | | **Case Tags** | Tags set on individual test cases via Playwright's `tag` metadata | Type in the search box to find a specific tag. Select one or more tags to filter the list. Multiple tags use **OR** logic: a run matches if it contains any of the selected tags. ### Active Test Runs Runs currently executing appear in a collapsible **Active Test Runs** section at the top of the list. Results update in real time as tests complete. Each active run displays a progress bar, live pass/fail/skip counts, commit, branch, and CI source. 
![Active test run with sharded execution showing shard tabs, worker status, and live progress bar](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/sharded-runs.webp) For sharded runs, the run is labeled **SHARDED** with tabs for each shard. Select a shard tab to view its workers and currently executing tests. Non-sharded runs show a single progress bar with per-worker detail. ### Test Run Key Columns ![Test runs list showing run ID, commit info, branch, environment, test results counts](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/test-runs-attempts.webp) | Column | Description | | ----- | ----- | | Test Run | Run ID, start time, and executor (CI or Local). Click the CI label to open the job. | | Commit | Commit message, short SHA, and author. Links to the commit in your Git host. | | Branch & Environment | Branch name, mapped environment label, and run-level tag chips. When more tags exist than the row can display, a `+N` badge shows the remaining count. | | Test Results | Counts for Passed, Failed, Flaky, Skipped, Interrupted, and total. | ### Test Run Grouping Runs that share the same commit hash and commit message are grouped as attempts by TestDino. This usually happens when you rerun a CI workflow or trigger multiple executions for the same commit. Expand the group to see each attempt (for example, Attempt #1, Attempt #2). **This grouping helps you:** - Track reruns for a single commit without scanning separate rows - Compare results across attempts to confirm whether a rerun fixed flaky failures - See how many times a workflow was triggered for the same code change ### Run-Level Tags Attach labels to an entire test run using the `--tag` CLI flag. Tags appear as chips on each run row in the list and are available as filter values. ```bash npx tdpw upload ./playwright-report --tag="regression,smoke" ``` Use run-level tags to label runs by build number, sprint, release, or test type. These are separate from test-case-level tags set via [annotations](/guides/playwright-test-annotations). | Tag type | Set via | Scope | Example | | :--- | :--- | :--- | :--- | | Run-level | `--tag` CLI flag | Entire test run | `regression`, `sprint-42`, `nightly` | | Test-case-level | Test annotations | Individual test cases | `smoke`, `critical-path`, `login` |
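For reference, test-case-level tags are declared in your Playwright code rather than on the CLI. A minimal sketch using Playwright's `tag` property; the spec file and test names here are hypothetical:

```typescript
// checkout.spec.ts (hypothetical) — declaring test-case-level tags.
import { test, expect } from '@playwright/test';

// A tag on a describe block applies to every test inside it.
test.describe('checkout', { tag: '@regression' }, () => {
  // A test can carry a single tag or an array of tags.
  test('guest checkout succeeds', { tag: ['@smoke', '@critical-path'] }, async ({ page }) => {
    await page.goto('/checkout');
    await expect(page.getByRole('heading', { name: 'Checkout' })).toBeVisible();
  });
});
```

These tags surface in TestDino as Case Tags; see the [annotations guide](/guides/playwright-test-annotations) for the full options.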
## Run Details Header Opening a test run displays a header bar above the detail tabs. The header contains: | Element | Description | | :--- | :--- | | Commit message | The commit message and run number | | Environment | Mapped environment badge (for example, `STAGE`) | | Branch | Branch name | | Commit SHA | Short SHA linking to the commit in your Git host | | Author | Committer name | | Timestamp | When the run started | | Duration | Total run time | | Tags | Run-level tags displayed as chips (for example, `@regression`, `@smoke`, `@v1.2.3`) | Tags in the header are the same run-level tags set via the `--tag` CLI flag. They are visible across all detail tabs (Summary, Specs, Errors, History, Configuration, Coverage). ## Quick Start Steps 1. **Set scope** - Filter by Time Period, Environment, Branch, Committer, Status, or Tags, and sort by Duration to focus the list. 2. **Scan and open** - Review result counts, then open a run that needs action. 3. **Review details** - The [run details page](/platform/test-runs/playwright-failure-summary) provides six tabs: - **Summary:** Totals for Failed, Flaky, and Skipped with sub-causes and test case analysis - **Specs:** File-centric and tag-centric views. Switch between Spec File and Tag sub-views to group by file or by tag. - **Errors:** Groups failed and flaky tests by error message. Jump to stack traces. - **History:** Outcome and runtime charts across recent runs. Spot spikes and regressions. - **Configuration:** Source, CI, system, and test settings. Detect config drift. - **Coverage:** Statement, branch, function, and line coverage with per-file breakdown. ## Related Explore runs, CI optimization, and analytics. - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures and flakiness by cause - [Detailed Analysis](https://docs.testdino.com/platform/test-runs/playwright-failure-summary#detailed-analysis): Drill down to specific tests - [Specs & Tags](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file or by tag - [Errors View](https://docs.testdino.com/platform/playwright-test-runs/errors): Group failures by error message - [Historical Trends](https://docs.testdino.com/platform/playwright-test-run-history): Spot regressions and drift - [Configuration Context](https://docs.testdino.com/platform/playwright-test-runs/configuration): Debug environment differences - [Coverage](https://docs.testdino.com/platform/playwright-test-runs/coverage): Per-run code coverage breakdown - [Test Case Details](https://docs.testdino.com/platform/playwright-test-cases): Individual test analysis --- ## Playwright Test Run Failure Summary > Source: https://docs.testdino.com/platform/test-runs/playwright-failure-summary > Description: Structured breakdown of Playwright test run failures grouped by root cause. Drill into error clusters, flaky patterns, and regressions. The Summary tab groups failed, flaky, and skipped tests by cause. Use filters and sorting to narrow down to the tests that need action. The run details header displays branch, commit SHA, committer, timestamp, duration, and run-level tags. Tags appear as chips (for example, `@regression`, `@smoke`, `@v1.2.3`) and are visible across all detail tabs. Test-case-level tags and annotations are visible in the Detailed Analysis table and the [Tag view within the Specs tab](/platform/playwright-test-runs/specs#tag-view). ## KPI Tiles ![KPI tiles showing failed, flaky, and skipped test counts with sub-category breakdowns](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/summary/kpi-tiles.webp) ### 1. Failed A failed test runs and ends with an error or unmet assertion. Use the cause buckets to prioritize fixes and group similar issues. - **Assertion Failure:** The expected value did not match the actual value. - **Element Not Found:** The locator did not resolve to an element. - **Timeout Issues:** An action or wait exceeded the set time. - **Network Issues:** A request failed or returned an unexpected status. - **Other Failures:** Errors that do not fit the above, for example, script errors or setup issues. ### 2. Flaky A test is categorized as Flaky when the outcome is inconsistent across attempts or recent runs without a code change. It often passes on retry. - **Timing Related:** Order, race, or wait sensitivity. Often passes on retry. - **Environment Dependent:** Fails only in a specific environment or runner. - **Network Dependent:** Intermittent remote call or service instability. - **Assertion Intermittent:** Non-deterministic data or state causes occasional mismatches. - **Other Flaky:** Unstable for reasons outside the above buckets.
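Within a single run, flaky classification depends on Playwright retries being enabled: a test that fails and then passes on a retry is reported as flaky rather than failed. A minimal sketch of enabling retries in `playwright.config.ts` (the retry counts are illustrative):

```typescript
// playwright.config.ts — without retries, an intermittent test simply fails;
// with retries, a fail-then-pass sequence is reported as flaky.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: process.env.CI ? 2 : 0, // retry up to twice in CI, never locally
});
```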
### 3. Skipped A skipped test does not run due to a skip directive, configuration, or runtime condition. No assertions are executed. - **Manually Skipped:** Explicitly skipped in code or via tag. - **Configuration Skipped:** Disabled by config, project, or reporter settings. - **Conditional Skipped:** Skipped due to an evaluated condition at runtime. ## Detailed Analysis ![Detailed analysis table showing test cases with status, spec file, duration, retries, and history preview](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/summary/detailed-analysis.webp) This table lists every test in the run, allowing you to move from the summary signal to a specific test. It includes: - Status, spec file, duration, retries, and failure cluster in one place. - **Annotations** badge on tests that have `testdino:` annotations. Click to expand and see chips for priority, feature, owner, and Slack targets inline. See the [Annotations guide](/guides/playwright-test-annotations) for how to add them. - **History preview** shows the current run and up to 10 past executions. - **#Trace** link (when available) to open full Playwright [trace viewer](/guides/playwright-trace-viewer) with actions, console output, and network calls. - Token search and filter chips. - Sort by duration or status to surface slow or failing tests first. - One-click access to the test case. ### Search Tokens Use tokens to filter and combine conditions: `s:` status (`passed`, `failed`, `flaky`, `skipped`) `c:` cluster (`assertion-failure`, `timeout`, `network-error`, ...) `@` tag (`smoke`, `regression`, `e2e`) `b:` browser (`chrome`, `firefox`, `safari`, `edge`) For example, `s:failed c:timeout @smoke b:chrome` narrows the table to failed smoke tests in the timeout cluster that ran on Chrome. ### Sorting Switch between Default, High to Low, and Low to High to spot slow tests and quick wins. ### Context Carry-Over Selections in **Summary KPI Tiles** apply to the table. When you select Failed, Flaky, Skipped, or a cause bucket, the table shows only matching tests. ## Related Drill into specs, errors, and test cases. - [Specs](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file - [Errors](https://docs.testdino.com/platform/playwright-test-runs/errors): Group failures by error message - [History](https://docs.testdino.com/platform/playwright-test-run-history): Spot regressions and drift - [Configuration](https://docs.testdino.com/platform/playwright-test-runs/configuration): Debug environment differences - [Coverage](https://docs.testdino.com/platform/playwright-test-runs/coverage): Per-run code coverage breakdown --- ## Playwright Test Run Spec File View > Source: https://docs.testdino.com/platform/playwright-test-runs/specs > Description: Review Playwright test run results by spec file or tag. Use the spec view to see pass, fail, and flaky counts per file within a run. The Specs tab shows results for a single test run grouped by spec file or by tag. Use it to find files that are failing or slow, then open the test cases within a spec or tag. The tab has two sub-views, toggled with the **Spec File** and **Tag** buttons at the top of the left panel: | View | Groups by | Use it to | | :--- | :--- | :--- | | **Spec File** (default) | Spec file path | Find failing or slow files | | **Tag** | Test-case tags | Assess health of tag subsets like `@smoke` or `@regression` | ![Specs view showing spec file list on left with status bars and test details panel on right](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/specs.webp) ## Spec File View ### Spec List The left panel lists every spec file in the run. 
Each item shows: - **Spec card:** File name, total tests, a status bar (pass/fail/flaky/skipped), and total duration. - **Sort:** by Name, Duration, or Status to surface slow or failing files. - **Filter:** show only Passed, Failed, Flaky, or Skipped specs. - **Search:** type a file name to jump to a spec. ### Spec Details The right panel shows tests for the selected spec. It includes: - Suite totals at the top, including total time and test count - A test list with status, duration, and retry or attempt badges From a test row, open the test detail view to review available evidence, including steps, errors, screenshots, and the trace viewer. > **Note:** Sort by **duration** to find long-running specs first. ## Tag View Switch to the **Tag** view to group tests by their tags instead of by spec file. This view surfaces how each tag category performed in the run. ### Tag List The left panel lists every tag found in the run. Each tag card shows: | Element | Description | | :--- | :--- | | Tag name | The tag label | | Test count | Total tests with this tag in the run | | Status bar | Color-coded bar showing pass/fail/flaky/skipped distribution | | Status breakdown | Counts for failed, passed, and flaky tests | An **all** card at the top aggregates all tests across every tag. ### Search, Sort, and Filter | Control | Description | | :--- | :--- | | Search tags | Type to filter tags by name | | Sort & Filter | Order by name, test count, or status. Filter to show only tags with specific outcomes. | ### Tag Details Click a tag card to show all tests with that tag in the right panel. Tests are grouped by spec file. Each test row displays the test name, status, and duration. Click any test to open the full detail view. > **Note:** Tags come from Playwright's `tag` metadata on tests or describe blocks. See the [Annotations guide](/guides/playwright-test-annotations) for how to add tags to your tests. ## Related Summary, errors, and tag analytics. - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures by cause - [Errors](https://docs.testdino.com/platform/playwright-test-runs/errors): Group failures by error message - [Tags Analytics](https://docs.testdino.com/platform/analytics/test-run#tags): Track tag health trends across runs - [Annotations Guide](https://docs.testdino.com/guides/playwright-test-annotations): Add tags and annotations to your tests - [History](https://docs.testdino.com/platform/playwright-test-run-history): Spot regressions and drift --- ## Playwright Test Run Error Grouping > Source: https://docs.testdino.com/platform/playwright-test-runs/errors > Description: Group Playwright test failures by error message within a run. Prioritize fixes by understanding which errors are most common and most impactful. The Errors tab groups failed and flaky tests by error message. Use it to see which errors affected the run and how many tests each error impacts. With this view: - **QA Engineers** can quickly spot whether a single error is responsible for most failures or if you're dealing with multiple unrelated problems. - **Developers** can jump directly to the error message and stack trace without having to click through individual test cases one by one. ## Layout The page consists of three main sections: a search bar, status filters, and an error groups table. 
![Error groups table showing unique error messages with affected test counts](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/error/error-all.webp) ### Search Search by test name or error text. Results update as you type. ### Tag Filter Filter error groups by test-case tags. Select one or more tags to show only error groups that contain tests with the selected tags. This narrows the error list to failures within a specific tag category (for example, `@smoke` or `@checkout`). ### Status Filters Filter the grouped list by outcome: **All:** Shows all test cases in the run, grouped by error **Failed:** Only failed test cases ![Error groups filtered to show only failed test cases](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/error/error-failed.webp) **Flaky:** Test cases that passed on retry but had at least one failure (See: [common reasons](/guides/playwright-flaky-test-detection)). ![Error groups filtered to show only flaky test cases that passed on retry](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/error/error-flaky.webp) Each filter displays a count of the test cases in that category. **Expand / Collapse:** Use the toggle on the right to expand all error groups at once or collapse them back to just the error headers. ### Error Groups Each row in the table represents a unique error message. The row shows: - Error text - Number of affected tests Expand a row to list the tests that hit that error in the run. #### Test Case Rows Each test case row shows: - **Status icon:** A red X for failed, a warning icon for flaky. - **Test name:** The full test case title. - **Browser:** Which browser ran the test (Chromium, Firefox, WebKit, iOS, etc.). - **Duration:** How long the test took. - **Retries:** Number of retry attempts, if any. Click any test case row to open the side panel. ## Side Panel The side panel displays details for one test case without leaving the Errors tab. ![Side panel showing test case details with status, duration, retries, error message, and stack trace](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/error/test-runs-error-side-panel.webp) ### 1. Header Shows the test name and status. ### 2. Details | Field | What It Shows | | ----- | ----- | | **Status** | Failed or Flaky. | | **Duration** | Total time the test took to run. | | **Retries** | Number of retry attempts. | | **Browser** | The browser or device used. | | **Started** | Exact date and time the test began. | ### 3. Error Message Shows the Playwright error text for the failing attempt. ### 4. Stack Trace The call stack at the point of failure. Use this to trace back to the exact line in your test or application code. ### 5. View Details Opens the full test case details page. From there, you can access the Overview and History tabs for deeper analysis. ## Related Summary, specs, and error grouping guide. - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures by cause - [Specs](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file - [History](https://docs.testdino.com/platform/playwright-test-run-history): Spot regressions and drift --- ## Playwright Test Run History in TestDino > Source: https://docs.testdino.com/platform/playwright-test-run-history > Description: Compare Playwright test runs side by side in TestDino. Spot regressions, execution drift, and emerging failure patterns across environments over time. 
The History tab shows outcome and duration trends for recent runs on the same branch and CI environment as the selected run. Use it to detect instability, regressions, and environment changes. ## Run History ![Run history chart showing passed, failed, flaky, and skipped test counts over time](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/history/run-history.webp) Shows counts of passed, failed, flaky, and skipped tests over the selected period. Uses the branch from the selected run. Highlights large changes in flaky share compared to the recent baseline. ## Test Execution Time ![Test execution time chart showing total runtime per run with trend line](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/history/test-execution-time-chart.webp) Shows total runtime per run. Highlights large changes compared to the recent average. ## Related Summary, specs, and configuration. - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures by cause - [Specs](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file - [Errors](https://docs.testdino.com/platform/playwright-test-runs/errors): Group failures by error message - [Configuration](https://docs.testdino.com/platform/playwright-test-runs/configuration): Debug environment differences --- ## Playwright Test Run Configuration View > Source: https://docs.testdino.com/platform/playwright-test-runs/configuration > Description: View the exact Playwright configuration used in each test run including environment variables, reporter settings, and CLI flags. Reproduce and debug drift. The Configuration tab shows the execution context for a test run. Use it to compare runs and find differences in code, CI, runner environment, or Playwright settings. ## 1. Source Control ![Source control section showing branch, commit hash, author, message, and repository links](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/configuration/source-control.webp) - Branch, commit hash, author, message, timestamp. - Links to repository, commit, and PR. Use this section to trace the exact code used for the run and reproduce from the same commit. ## 2. CI Pipeline ![CI pipeline section showing provider, workflow, build number, trigger, and environment](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/configuration/ci-pipeline.webp) - CI provider, workflow or job, build number, trigger, run URL. - Target environment and any sharding info. Use this section to confirm the test run is from the expected pipeline and compare it with other CI runs. ## 3. System Info ![System info section showing OS, container image, CPU, memory, Node.js and Playwright versions](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/configuration/system-info.webp) - OS and version, container image, CPU, and memory. - Node.js and Playwright versions, timezone, and locale. Use this section to spot runner differences between local and CI, or between CI environments. ## 4. Test Configuration ![Test configuration section showing browsers, workers, retries, timeouts, and reporter settings](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/configuration/test-configuration.webp) - Projects, browsers, workers, retries, parallel mode. - Timeouts, baseURL, headless, device, or viewport. - Reporters and artifacts, selection filters or tags. - Safe environment variables and flags are noted. Use this section to reproduce the run and identify configuration drift across runs. 
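Most of what this tab reports maps directly onto `playwright.config.ts`. A minimal sketch of the settings involved (all values illustrative, and `BASE_URL` is a hypothetical variable), useful as a checklist when reproducing a run locally:

```typescript
// playwright.config.ts — the kinds of settings the Configuration tab surfaces.
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  workers: 4,                      // parallelism; a common source of CI vs local drift
  retries: 2,                      // retry budget recorded with the run
  timeout: 30_000,                 // per-test timeout in milliseconds
  use: {
    baseURL: process.env.BASE_URL, // hypothetical env var; the target under test
    headless: true,
    trace: 'on-first-retry',       // artifact setting listed under reporters and artifacts
  },
  projects: [{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }],
});
```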
## Related Summary, specs, and history. - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures by cause - [Specs](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file - [Errors](https://docs.testdino.com/platform/playwright-test-runs/errors): Group failures by error message - [History](https://docs.testdino.com/platform/playwright-test-run-history): Spot regressions and drift --- ## Playwright Test Run Code Coverage View > Source: https://docs.testdino.com/platform/playwright-test-runs/coverage > Description: Collect and visualize Playwright code coverage. Track trends across branches, environments, and runs with per-file breakdowns. The Coverage tab shows how much of your application code ran during a test run. It displays four overall metrics and a file-by-file breakdown so you can find exactly which parts of your code are untested. > **Note:** This tab only appears when the `@testdino/playwright` streaming reporter (Experimental) has `coverage.enabled: true` and your application is instrumented with Istanbul. See the [Code Coverage guide](/guides/playwright-code-coverage) for full setup instructions.
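As a rough sketch, that reporter setting could look like the following in `playwright.config.ts`. The nested option shape is an assumption read off `coverage.enabled: true`; treat the [Code Coverage guide](/guides/playwright-code-coverage) as authoritative:

```typescript
// playwright.config.ts — enabling coverage on the TestDino streaming reporter.
// The { coverage: { enabled: true } } shape is an assumption; confirm it
// against the Code Coverage guide before relying on it.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],
    ['@testdino/playwright', { coverage: { enabled: true } }],
  ],
});
```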
## Coverage Badge Runs with coverage data display a coverage badge on the [Test Runs](/platform/playwright-test-runs) list. The badge shows the overall statement coverage percentage, giving an at-a-glance indicator without opening the run. | Badge color | Statement coverage | | :--- | :--- | | Green | 80% and above | | Yellow | 50% to 79% | | Red | Below 50% | ## Coverage Summary ![Coverage summary showing statement, branch, function, and line coverage metrics](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/code-coverage/summary.webp) Four tiles at the top of the tab show the overall coverage for the run: | Metric | What it means | | :--- | :--- | | **Statements** | How many individual code statements ran during tests | | **Branches** | How many `if`/`else` paths were taken (both the true and false sides) | | **Functions** | How many functions were called at least once | | **Lines** | How many source lines ran during tests | ## Coverage by File A table showing coverage metrics for every source file in the report. Each row displays statements, branches, functions, and lines percentages. Sort by any column to find the files with the lowest coverage. Two view modes are available using the toggle in the top-right corner of the table: ### List View ![Coverage list view showing per-file coverage percentages](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/code-coverage/list-view.webp) A flat list of all source files with their coverage percentages. Each file shows its full path relative to the project root. ### Tree View ![Coverage tree view showing files grouped by directory with aggregate coverage](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/code-coverage/hierarchy-view.webp) Files grouped by directory structure. Expand or collapse folders to drill into specific areas of your codebase. Each folder row shows the aggregate coverage for all files inside it. Use **Expand All** and **Collapse All** to control the tree. ## Coverage Diff The diff view compares coverage between the current run and a baseline. It highlights files where coverage increased or decreased, making it easy to spot regressions introduced by a specific change. | Column | Description | | :--- | :--- | | **File** | Source file path | | **Current** | Coverage percentage in this run | | **Baseline** | Coverage percentage from the comparison run | | **Change** | Difference between current and baseline (positive or negative) | The baseline defaults to the previous run on the same branch. For pull request runs, the baseline is the latest run on the target branch. > **Tip:** Use the diff view during code review to confirm that new code includes adequate test coverage before merging. ## Related Links to coverage setup and analytics. - [Code Coverage Guide](https://docs.testdino.com/guides/playwright-code-coverage): Set up instrumentation and coverage collection - [Coverage Analytics](https://docs.testdino.com/platform/analytics/playwright-code-coverage): Track coverage trends across runs - [Summary](https://docs.testdino.com/platform/test-runs/playwright-failure-summary): Group failures and flakiness by cause - [Specs](https://docs.testdino.com/platform/playwright-test-runs/specs): Review results by spec file --- ## Playwright Test Cases > Source: https://docs.testdino.com/platform/playwright-test-cases > Description: Inspect Playwright test case status, history, and evidence. Track pass rates, flakiness, and duration trends per test. The Test Case view shows one test result within a test run. Review status, failure cause, runtime, retry attempts, and evidence (screenshots, videos, traces) for each attempt. This differs from [Test Case Management](/test-management/playwright-test-case-management), which organizes manual test cases. ## KPI Tiles ![Test case KPI tiles showing status, total runtime, and retry attempts](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/test-case/overview-kpi-tiles.webp) ### 1. Status Outcome for this run: Passed, Failed, Skipped, or Flaky. For failed or flaky tests, the primary technical cause is shown. ### 2. Total Runtime Total execution time for this test in the current run. Useful for spotting slowdowns after code or configuration changes. ### 3. Attempts Number of retries executed by your retry settings. A pass after a retry often signals instability that needs cleanup. ## Annotations If your Playwright test includes `testdino:` annotations, they appear in the **Annotations** panel just below the KPI tiles. This panel displays all the metadata attached to the test: priority, feature area, ticket link, owner, Slack notification targets, context notes, and flaky reason. These annotations come from your test code and are read-only in the UI. To add or change them, update the `annotation` array in your test file. See the [Annotations guide](/guides/playwright-test-annotations) for setup instructions and all supported types.
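A minimal sketch of what that `annotation` array can look like. The specific `testdino:` type names below are illustrative; the Annotations guide lists the types TestDino actually reads:

```typescript
// login.spec.ts (hypothetical) — attaching metadata via Playwright's
// annotation array. The type names are illustrative; see the Annotations
// guide for the supported set.
import { test, expect } from '@playwright/test';

test('user can log in', {
  annotation: [
    { type: 'testdino:priority', description: 'high' },
    { type: 'testdino:owner', description: 'qa-team' },
  ],
}, async ({ page }) => {
  await page.goto('/login');
  await expect(page.getByLabel('Email')).toBeVisible();
});
```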
## Evidence ![Evidence panel showing tabs for each attempt with error details, steps, screenshots, console, and video](https://testdinostr.blob.core.windows.net/docs/docs/test-runs/test-case/evidence-panel.webp) [Evidence](/guides/debug-playwright-failures/visual-evidence) is grouped by attempt, for example, `run`, `retry 1`, `retry 2`. ### 1. Error Details Exact error text and key line. Copy this into a ticket or use it to reproduce locally. ### 2. Test Steps Shows the step list with per-step timing. Use it to locate where the failure occurred. ### 3. Screenshots Screenshots captured during the attempt. Use them to confirm UI state at the time of failure. For a detailed guide, see [Visual Evidence](/guides/debug-playwright-failures/visual-evidence#enable-screenshots). ### 4. Console Shows browser console output. Use it to correlate script errors or warnings with the failure. ### 5. Video A recording of the attempt. Use it to confirm the sequence of actions and timing across retries. Learn more about videos at [Visual Evidence](/guides/debug-playwright-failures/visual-evidence#enable-video-recording). ### 6. Trace Shows the Playwright trace for the attempt when available. Use it to inspect actions, network calls, console output, and DOM snapshots. See more details at [Trace Viewer](/guides/playwright-trace-viewer). [Video: Playwright trace viewer showing execution timeline with actions, network calls, and DOM snapshots](https://testdinostr.blob.core.windows.net/docs/trace-viewer.mp4) > **Note:** Visible only when Playwright tracing is enabled, for example, `trace: 'on'` or `trace: 'on-first-retry'`. ### 7. Visual Comparison Snapshot comparison for tests that use Playwright visual assertions, for example, `toHaveScreenshot`. See more details at [Playwright Visual Testing](/guides/playwright-visual-testing). [Video: Visual comparison viewer showing diff, actual, expected, side-by-side, and slider modes](https://testdinostr.blob.core.windows.net/docs/visual-comparison.mp4) Use the viewer to compare screenshots in these modes: | Mode | What it shows | How it helps | | ----- | ----- | ----- | | **Diff** | Highlighted changed regions | Find small layout or visual shifts | | **Actual** | Screenshot from the failing attempt | See what renders during the test | | **Expected** | Stored baseline image | Decide whether the baseline must change | | **Side by side** | Expected and actual in two panes | Compare quickly across elements | | **Slider** | Interactive sweep between images | Inspect subtle differences | > **Note:** > - Visible only when the test suite generated visual comparisons. > - If no snapshot artifacts exist, the **Image Mismatch** panel is hidden. ## Related Test runs, history, and Test Explorer. - [History](https://docs.testdino.com/platform/playwright-test-case-history): Track stability and failures across runs - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View all test executions - [Visual Evidence](https://docs.testdino.com/guides/debug-playwright-failures/visual-evidence): Screenshots, videos, and traces - [Annotations](https://docs.testdino.com/guides/playwright-test-annotations): Add metadata and Slack alerts to tests --- ## Playwright Test Case History Tracking > Source: https://docs.testdino.com/platform/playwright-test-case-history > Description: Track individual Playwright test case stability across runs. See retry rates, failure frequency, and pass/fail trends over time. The History tab shows every execution of a single test case on the active branch. Use it to measure stability over time and confirm whether failures are new, recurring, or flaky. > **Note:** Only runs from the active branch appear. Test runs from other branches are excluded. ## What You See ![Test case history KPI tiles showing stability percentage, total runs, and outcome counts](https://testdinostr.blob.core.windows.net/docs/docs/test-cases/history/kpi-tiles.webp) ### 1. Test Metrics Key metrics for this test on the active branch: - **Stability:** Percentage of runs that pass. A 100% stability score means the test passed in all tracked executions. Stability is calculated as `(Passed Runs / Total Runs) x 100` - **Total Runs:** The total number of executions tracked on this branch. Provides context for all other metrics. 
- **Passed / Failed / Flaky / Skipped:** Counts for each outcome. ### 2. Last Status Tiles Links to the most recent run for each outcome: **Last Passed**, **Last Failed**, **Last Flaky**. Each tile shows the **Run #** and timestamp. The **current** label appears when the tile matches the run you are viewing. ### 3. Execution History Table [Video: Execution history table showing test runs with status, duration, retries, and expandable error details](https://testdinostr.blob.core.windows.net/docs/docs/test-cases/history/execution-histroy-table.mp4) Lists every execution on this branch in time order. | Column | Description | Why it matters | | ----- | ----- | ----- | | **Executed at** | Timestamp when the test ran | Correlates failures with commits or deployments | | **Run** | Unique identifier for the test run | Navigate to the exact run | | **Status** | Outcome: Passed, Failed, Flaky, Skipped | Spot trends and recurring issues | | **Duration** | Total runtime of the execution | Identify performance regressions | | **Retries** | Number of retry attempts | Surfaces flaky or unstable tests | | **Run location** | Link to the CI job | Access to the original build and logs | | **Actions** | Link to execution details | Inspect evidence and artifacts | > **Note:** Rows expand to show **Error Details** for failures or **Console Logs** if they were captured during execution. ## How to Read Stability Stability measures a test's reliability on the current branch. The percentage reflects the entire history, not just the most recent run. - **100% Stability:** Test passes in every tracked run on this branch. - **< 100% Stability:** At least one run failed or was flaky, even if the latest run passed. ## Why It Matters - Confirm whether a failure is a regression or a recurring issue. - Track retry frequency as a stability signal. - Spot duration changes that indicate performance drift. - Use run links to inspect evidence and CI context. ## Related Test case overview and test runs. - [Overview](https://docs.testdino.com/platform/playwright-test-cases): Test case status and evidence - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View all test executions --- ## Playwright Test Explorer in TestDino > Source: https://docs.testdino.com/platform/playwright-test-explorer > Description: Explore Playwright test health across your project. Find failing, flaky, or slow tests using hierarchical and flat views. Test Explorer provides a centralized view of all test cases within a project. Analyze test health, track failure patterns, and identify flaky or slow tests across your entire test suite. [Video: Test Explorer video](https://www.youtube.com/embed/ed6jB-hXBCQ?si=X99uZDUXamHhiyT-) ## View Modes Test Explorer supports two viewing modes. Toggle between them using the view switcher in the top-right corner. ### Hierarchical View (Default) Groups test cases by their spec files. Each spec row displays aggregated metrics across all tests in that file. Expand a spec row to reveal individual test cases within it. ![Test Explorer hierarchical view grouping test cases by spec file](https://testdinostr.blob.core.windows.net/docs/docs/test-explorer/hierarchical.webp) Use this view to identify problematic spec files at a glance, then drill into specific tests. ### Flat View Lists all individual test cases in a single table with per-test metrics. Each row shows the test title, its parent spec file, and execution data. 
![Test Explorer flat view listing all test cases with individual metrics](https://testdinostr.blob.core.windows.net/docs/docs/test-explorer/flat.webp) Use this view when searching for a specific test case or comparing metrics across tests from different spec files. ## Table Columns All columns are sortable in both view modes. | Column | Description | | :--- | :--- | | Spec / Test Case | File name (hierarchical) or test title with parent spec (flat) | | Executions | Total runs within the selected time period and filters | | Failure Rate | Percent of executions with at least one failure | | Flaky Rate | Percent of executions with at least one flaky result | | Avg Duration | Average execution time across all runs | | Platform | Browser or platform used (e.g., chromium, firefox, ios) | | Tags | Tags associated with the test case | | Recent Status | Status of the most recent execution (Passed, Failed, Flaky) | | Last Run | Timestamp of the last execution and the branch it ran on | ## Filtering and Search ### Search Filter by spec file name or test case title using the search bar. Search supports regex for advanced pattern matching. ### Filters | Filter | Description | | :--- | :--- | | Time Period | Scope data to Last 7, 14, 30, 60, or 90 days | | Tags | Filter by test case tags. See [Test Run Analytics](/platform/analytics/test-run#tags) for tag health trends | | Platforms | Filter by browser or platform (e.g., chromium, firefox) | | Environment | Filter by target environment (e.g., staging, production) | Activating filters updates the results count and table data immediately. ### Sync Click the sync button (top-right) to refresh data without reloading the page. ## Test Case Details Click any test case row to open the details drawer. The drawer contains two tabs. ![Test Explorer side panel showing test case details with run history and platform analytics](https://testdinostr.blob.core.windows.net/docs/docs/test-explorer/test-explorer-side-panel.webp) ### Overview Tab - **Test Run History** displays the most recent executions with status, timestamp, and duration. Click a run to open the full [test run details](/platform/playwright-test-runs) page. - **Platform Analytics** breaks down executions, failures, flaky count, and average duration by browser or platform. - **Environment Analytics** shows the same breakdown by environment (e.g., production, staging). ### Errors Tab Displays unique error messages aggregated across all executions of the test case. Each error entry shows: | Field | Description | | :--- | :--- | | First seen | Timestamp of the earliest occurrence | | Last seen | Timestamp of the most recent occurrence | | Count | Total number of times the error occurred | | Error message | Full error text with stack trace context | Use the Errors tab to identify recurring failures and track whether fixes have resolved them. ## Quick Start 1. **Set scope.** Use Time Period and Environment filters to narrow the dataset. 2. **Identify targets.** Sort by Failure Rate or Flaky Rate to surface unstable tests. Sort by Avg Duration to find slow tests. 3. **Switch views.** Use Hierarchical view to find problematic spec files, then Flat view to compare individual tests across files. 4. **Inspect details.** Click a test case to view execution history, platform breakdowns, and error patterns. ## Pagination Large datasets load in pages. Click **Load More Specs** (hierarchical) or **Load More Test Cases** (flat) at the bottom of the table to fetch the next batch. 
## Related Analytics, test cases, and flaky test guides. - [Test Runs](https://docs.testdino.com/platform/playwright-test-runs): View all test executions - [Test Cases](https://docs.testdino.com/platform/playwright-test-cases): Manage and organize test cases - [Analytics](https://docs.testdino.com/platform/playwright-test-analytics): Project-wide test analytics - [Flaky Tests](https://docs.testdino.com/guides/playwright-flaky-test-detection): Detect and fix flaky tests --- ## Playwright Test Analytics in TestDino > Source: https://docs.testdino.com/platform/playwright-test-analytics > Description: Analyze Playwright test health over time. Track failure rates, flakiness, execution speed, and environment-specific trends. Analytics turns test activity into clear trends. Shows what's failing, what's flaky, where time goes, and which environments slow you down. [Video: Analytics video](https://www.youtube.com/embed/OtxjPyRtCpQ?si=Shw-cPof9R0XZvZs) ## What Analytics shows * **Spot real problems fast** - See where failures concentrate and if they're new or repeating * **Cut noise** - Find and reduce flakiness so reviews aren't blocked by random failures * **Speed up feedback** - Identify which files, tests, or environments are slow * **Prove progress** - Trends show when stability or speed improves ## Analytics Capabilities | View | What it shows | Use it to | | :--- | :--- | :--- | | [**Summary**](/platform/analytics/playwright-test-health-summary) | Total test runs, average runs per day, pass/fail counts, flakiness and failure rates | Assess overall test suite health and spot unstable tests | | [**Test Run**](/platform/analytics/test-run) | Average and fastest run times, run-level tag health table, performance by branch and day | Compare run times, review tag pass rates, and optimize test execution | | [**Test Case**](/platform/analytics/test-case) | Average, fastest, and slowest test durations, pass/fail trends | Identify slow tests and track reliability over time | | [**Errors**](/platform/analytics/errors) | Error messages grouped by type, occurrence frequency, affected tests | Find recurring problems and prioritize fixes by impact | | [**Coverage**](/platform/analytics/playwright-code-coverage) | Statement, branch, function, and line coverage trends over time | Track coverage changes and detect regressions | | [**Environment**](/platform/analytics/environment) | Test failures and successes by environment and branch, pass rates over time | Isolate environment-specific issues and focus debugging | ## Filters All analytics views share a global filter bar at the top. Filters persist as you switch between views. | Filter | Description | Default | | :--- | :--- | :--- | | Time Period | Scope data to last 7, 14, 30, 60, or 90 days | 30 days | | Environment | Filter by mapped environment (staging, production, etc.) | All environments | | Branches | Select one or more branches to include | All branches | ## Quick Start Steps 1. **Set scope** - Select Time range and Environment. Add Branches if needed. Keep these fixed during review. 2. **Review Summary** - Look for spikes in failures, flakiness, or retries. Open the day or metric that stands out. 3. **Select a view** - Use **Test Case** to address slow or flaky tests, **Test Run** to improve run time, stability, and tag health, **Errors** to find recurring problems and prioritize fixes by impact, **Environment** to isolate setup-specific issues. 4. 
**Apply and confirm** - Implement the fix, then verify the improvement by checking the same charts in the next run or period. ## Jump to What You Need - [Trends Overview](https://docs.testdino.com/platform/analytics/playwright-test-health-summary) - [Test Run](https://docs.testdino.com/platform/analytics/test-run) - [Test Case](https://docs.testdino.com/platform/analytics/test-case) - [Coverage](https://docs.testdino.com/platform/analytics/playwright-code-coverage) - [Errors](https://docs.testdino.com/platform/analytics/errors) - [Environment Analysis](https://docs.testdino.com/platform/analytics/environment#environment-analysis) --- ## Playwright Test Health Analytics Summary > Source: https://docs.testdino.com/platform/analytics/playwright-test-health-summary > Description: High-level view of Playwright test health including pass rates, flakiness scores, and execution trends across branches and environments in TestDino. It surfaces volume, stability, and trend signals in one place so you can spot spikes, regressions, or noise before delving into details. ## Test Run Volume This chart shows daily runs, split by Passed tests (green) and Failed tests (red). Hover over a date to see exact counts. Use it to spot spikes, compare days, and correlate changes with deployments or data updates. ![test run volume](https://testdinostr.blob.core.windows.net/docs/docs/analytics/summary/test-run-volume.webp) ### 1. Total Runs Counts all test runs in the selected time range and environment. Indicates test throughput for the period. ### 2. Average Runs per Day Mean number of test runs per calendar day. Helps check CI cadence and scheduling consistency. ### 3. Total Passed Test Runs Test Runs with zero failing tests. Track this to gauge build stability and confirm improvements after fixes. ### 4. Total Failed Test Runs Test Runs with one or more failing tests. Use this to estimate the triage load and verify that the failure volume is trending downward. ## Flakiness & Test Issues ![average flakiness](https://testdinostr.blob.core.windows.net/docs/docs/analytics/summary/flakiness.webp) Measures the percentage of executions with inconsistent results for the same code (pass in one run, fail in another) and tracks problematic tests. This is a noise indicator. A list of Flaky Tests on the right shows the name, spec file, and execution date. * High flakiness means wasted triage and unreliable signals. * Track the curve after de-flaking work to confirm the fixes are effective. ## New Failures ![new failure rate](https://testdinostr.blob.core.windows.net/docs/docs/analytics/summary/new-failures.webp) Measures the percentage of test executions that are failing **for the first time** compared to previous runs. A list of the **New Failures** on the right shows the name, spec file, and execution date for every newly failed test. Use it to detect regressions early: * Spikes indicate recent changes that may have introduced defects or test failures. * A flat or declining line indicates improved stability for newly added or recently touched areas. ## Test Retry Trends ![test retry trends](https://testdinostr.blob.core.windows.net/docs/docs/analytics/summary/test-retry-trends.webp) This chart analyzes your test-retry behavior over the selected time period, environment, and branch. A rising trend suggests your tests are becoming unstable or "flaky." It shows three daily metrics: * **Total Retries**: The number of times tests were re-run. * **Total Runs**: The total number of test suites executed.
* **Retried Test Cases**: The number of unique tests that needed a retry. --- ## Test Run Analytics > Source: https://docs.testdino.com/platform/analytics/test-run > Description: Analyze Playwright test run performance with execution speed KPIs, tag health trends, and efficiency metrics. Identify slow and inefficient runs across CI. ## Metrics ![Metrics](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-run/metrics.webp) ### 1. Average Run Time Shows the mean duration of all test runs in scope. Indicates the typical time your pipeline requires to complete. Use it as a baseline for tracking daily performance. ### 2. Fastest Run Displays the shortest single run duration in the selected window. The "Best yet" badge marks a new record relative to previous data in your project. ### 3. Speed Improvement The percentage decrease in average run time compared to the previous period. A higher positive value means runs are faster than before. For example, if the previous period averaged 10 minutes per run and the current period averages 8 minutes, Speed Improvement is 20%. ## Tags A table of all run-level tags used in the selected time period and environment. Use it to compare stability across tag categories such as `regression`, `smoke`, or `release-candidate`. The table header shows the total number of tags found in the selected period. Use the search box to find a specific tag. Pagination controls appear when the tag list exceeds one page. | Column | Description | | :--- | :--- | | Tag | Tag name with a color indicator and link icon | | Runs | Number of test runs containing this tag | | Passed | Total passed test count (green) | | Failed | Total failed test count (red) | | Flaky | Total flaky test count (yellow) | | Pass Rate | Overall pass percentage | Sort by any column to surface the least stable or most-used tags. Click a tag row to open the Test Runs list filtered to that tag. ## Speed by Branch Performance ![Branch performance speed comparison](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-run/speed-branch-performance.webp) A column chart that compares average test run time across branches. A baseline helps you see which branches are above or below the target. Hover to view each branch's average duration and test run count. ## Test Execution Efficiency Trends ![Test execution efficiency trends over time](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-run/test-execution-efficiency-trends.webp) This is an area chart of average run duration per day. Highlights gradual drifts or sudden regressions in runtime. Hover to see the day's average and test run count. ## Test Run Speed Distribution ![Test run speed distribution categories](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-run/test-run-speed-distribution.webp) Stacked bars that bucket daily runs into Fast, Normal, and Slow groups based on duration thresholds. Reveals whether slow runs are isolated or common on a given day. --- ## Test Case Analytics > Source: https://docs.testdino.com/platform/analytics/test-case > Description: Analyze individual Playwright test performance including pass rate, duration trends, and flakiness. Identify which specific tests need the most attention. ## Key Metrics ![Overview](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-case/metrics.webp) ### 1. Average Test Cases Shows the average number of test cases executed per run for the selected period. Use it to spot scope creep or missing coverage when the number shifts. ### 2. Fastest Test
Displays the shortest test duration with the test name, establishing a baseline for lightweight checks and smoke tests. ### 3. Slowest Test Displays the longest single test duration, pointing to prime candidates for optimization or splitting to reduce cycle time. ### 4. Average Test Duration Reports the mean time for one test case across the period. Track this metric to estimate total run time and verify performance improvements. ## Slowest Test Cases ![Slowest test cases list and metrics](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-case/slowest-test-cases.webp) A table of optimization targets with columns for Test Name, Avg Duration, Frequency, Max Duration, and Performance Trend. You can identify the tests that consume the most time, how often they run, the worst-case time, and whether they are getting slower, getting faster, or holding steady. Click a test case to open its **Test Case Details** page in TestDino. ## Test Execution Performance ![Test execution performance metrics](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-case/test-execution-performance.webp) This chart categorizes test cases into performance bands to help you identify performance bottlenecks. The bands (**Excellent**, **Good**, **Average**, **Poor**, and **Critical**) are defined dynamically based on the execution times of your project's test cases. The view also lists all test cases categorized as **Poor** and **Critical** next to the chart. For each test case, you can view the following information: Test Case name, its corresponding Spec file, Average duration, and Total runs. This provides a direct and actionable list of the most impactful tests to optimize first. ## New Test Cases ![New Test](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-case/new-test-cases.webp) Line chart of new test cases added per day with the current-period average to provide a clear baseline for your team's test creation velocity. This chart tracks the growth of your test suite, helping you monitor coverage expansion and correlate it with development cycles. A list on the right shows the name, spec file, and execution date for every new test. > **Note:** Hover over any point on the chart to see the exact count of new tests for that day. ## Test Cases Pass/Fail History [Video: Test cases pass/fail history chart](https://testdinostr.blob.core.windows.net/docs/docs/analytics/test-case/test-case-pass-fail-history.mp4) Compare up to 10 tests side by side and see their changes over time. The view helps spot unstable tests, confirm fixes, and identify the day a regression started. ### Selected Test Cases * This area shows the current set of tests being compared. * Each test is assigned a color that corresponds to its line on the trend chart. * You can remove a test at any time to free up a slot for a new one. ### Available Tests * This is a searchable list of all test cases in your project that you can add to the comparison. * When you select a test, it's added to the **Selected Test Cases** list until the 10-test limit is reached. ### Pass/Fail Trends * Plots a performance trend line for each selected test over the chosen time range. * The y-axis represents the pass rate (0-100%), while the x-axis represents the date. * Hovering over a point reveals a daily breakdown, including pass, fail, and flaky (passed on retry) counts; the sketch after this list shows how such a point reads as a percentage. * A color-coded legend matches each line to its test name, helping you track stability.
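To show how a single plotted point can be read, here is a minimal sketch of a daily pass-rate computation. Whether TestDino counts flaky (passed on retry) results toward the pass rate is not documented here, so treating them as passes is an assumption.

```ts
// Illustrative sketch: one day's pass-rate point on the Pass/Fail Trends chart.
// Counting flaky (passed on retry) results as passes is an assumption, not a
// documented TestDino formula.
type DailyCounts = { passed: number; failed: number; flaky: number };

function dailyPassRate({ passed, failed, flaky }: DailyCounts): number {
  const total = passed + failed + flaky;
  if (total === 0) return 0; // no executions that day
  return ((passed + flaky) / total) * 100;
}

// 8 clean passes, 1 failure, 1 pass-on-retry -> 90%
console.log(dailyPassRate({ passed: 8, failed: 1, flaky: 1 }));
```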
--- ## Environment Analytics > Source: https://docs.testdino.com/platform/analytics/environment > Description: Compare Playwright test health across environments. Analyze pass rates, execution volume, and OS-specific failures to isolate environment-only issues. ## Execution Results by Environment ![Execution results by environment metrics](https://testdinostr.blob.core.windows.net/docs/docs/analytics/environment/execution-results-environment.webp) Shows the success rate per environment for the selected period. Each tile contains: * **Success rate** - the percentage of passed executions in that environment. * **Passed** and **Failed** counts, the raw numbers behind the percentage. Identify which environment is least stable, confirm where failures concentrate, and decide where to investigate first. Use the counts to judge the sample size before concluding. ## Environment Analysis ![Environment analysis breakdown and distribution](https://testdinostr.blob.core.windows.net/docs/docs/analytics/environment/environment-analysis.webp) ### Branch Distribution Lists how many test runs were executed on each branch in the selected scope. Identify which branches drive the majority of the signal, catch branches that are not being exercised, and balance CI usage across workstreams. ### OS Distribution Lists how many test runs were executed on each operating system. Check platform coverage at a glance, detect skew toward a single OS, and plan additional runs where platform risk is higher. ## Pass Rate Trends ![Pass rate trends across environments](https://testdinostr.blob.core.windows.net/docs/docs/analytics/environment/pass-rate-trends.webp) Time-series chart of pass rate by environment. Each line represents an environment. Spot the day an environment's stability dropped, correlate with deployments or infra changes, and verify that a fix improved the trend. ## Test Run Volume ![Test run volume by environment over time](https://testdinostr.blob.core.windows.net/docs/docs/analytics/environment/test-run-volume-env.webp) Time-series chart of total runs per environment. Helps you distinguish signal from noise: a large swing in pass rate backed by only a handful of runs is a low-confidence signal. Volume also reveals where CI capacity is used and where coverage is thin. --- ## Playwright Code Coverage Analytics > Source: https://docs.testdino.com/platform/analytics/playwright-code-coverage > Description: Track Playwright code coverage trends over time across branches, environments, and test runs. Spot regressions and monitor coverage growth in TestDino. Coverage analytics shows how your code coverage changes over time. Use it to catch coverage drops, compare coverage across branches, and confirm that new tests improve overall coverage. ![Coverage analytics dashboard showing code coverage trends over time](https://testdinostr.blob.core.windows.net/docs/docs/analytics/code-coverage/analytics-code-coverage.webp) > **Note:** Coverage data requires the `@testdino/playwright` streaming reporter (Experimental) with `coverage.enabled: true`. See the [Code Coverage guide](/guides/playwright-code-coverage) for setup. ## Coverage Trends A time-series chart plots coverage metrics (statements, branches, functions, lines) across the selected period. Each data point represents one test run. For sharded runs, the data point reflects the merged coverage across all shards.
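Before any trends can appear here, runs must arrive with coverage data. The sketch below shows the reporter setup in `playwright.config.ts`; only `coverage.enabled` comes from these docs, so treat the surrounding shape as an assumption and follow the [Code Coverage guide](/guides/playwright-code-coverage) for the authoritative configuration.

```ts
// playwright.config.ts - minimal sketch. Only coverage.enabled is documented
// here; all other reporter options are left at their defaults.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'], // keep local console output
    ['@testdino/playwright', { coverage: { enabled: true } }],
  ],
});
```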
Use the trends chart to: - Spot sudden drops that match a specific commit or PR - Confirm that adding new tests raised coverage - Track progress toward a team coverage target Toggle individual metrics on or off to focus on a specific coverage type. Hover over a data point to see the exact percentages and the associated commit. ## Coverage by Branch Compare average statement coverage across branches. If a branch has lower coverage than `main`, tests are likely missing for the code changes on that branch. ## Coverage Diff The diff view compares coverage between two branches or time periods. It surfaces files where coverage changed, sorted by the largest regressions first. | Column | Description | | :--- | :--- | | **File** | Source file path | | **Base** | Coverage percentage on the base branch | | **Compare** | Coverage percentage on the compared branch | | **Change** | Difference in coverage (positive or negative) | Use this view to: - Validate that a feature branch maintains or improves coverage before merging - Identify files that lost coverage after a refactor - Compare coverage across environments or time periods ## Filters Narrow the data by time period, environment, or branch. | Filter | What it does | | :--- | :--- | | **Time Period** | Show coverage data within a date range | | **Environment** | Scope to a specific environment (production, staging, etc.) | | **Branch** | Focus on one or more branches | ## Related Links to coverage guides and analytics. - [Code Coverage Guide](https://docs.testdino.com/guides/playwright-code-coverage): Set up instrumentation and coverage collection - [Test Run Coverage](https://docs.testdino.com/platform/playwright-test-runs/coverage): Per-run coverage breakdown - [Summary Analytics](https://docs.testdino.com/platform/analytics/playwright-test-health-summary): Overall test suite health trends - [Environment Analytics](https://docs.testdino.com/platform/analytics/environment): Compare test health across environments --- ## Playwright Error Analytics in TestDino > Source: https://docs.testdino.com/platform/analytics/errors > Description: See which error messages are breaking your Playwright tests most often. Group errors by type, track frequency over time, and prioritize what to fix first. The Errors tab gives you a clear view of what's breaking your tests and why. It groups error messages by type, tracks how often they occur, and shows which tests they affect. Use this tab to spot patterns, find recurring problems, and decide what to fix first. **For QA Engineers:** Quickly identify whether failures come from flaky selectors, unstable network calls, or actual product bugs. **For Developers:** Pinpoint which components, endpoints, or test files keep failing so you can address the root cause. > **Note:** At the top of the page, three filters control what data you see: > > - **Time Period:** Last 7, 14, or 30 days. > - **Environment:** Filter by your mapped environments. > - **Error Types:** Show all error types or focus on a specific category. ## Error Type Reference | Category | What It Means | Common Causes | | :--------------------- | :----------------------------------------------------- | :---------------------------------------------------------------- | | **Assertion Failures** | Expected values didn't match actual values. | Logic bugs, changed UI text, outdated test data. | | **Timeout Issues** | An action or wait exceeded the allowed time. | Slow API responses, overloaded CI runners, missing elements. |
| **Element Not Found** | A locator didn't resolve to any element on the page. | Changed selectors, removed UI components, timing issues. | | **Network Issues** | HTTP requests failed or returned unexpected responses. | Flaky endpoints, rate limits, service outages. | | **JavaScript Errors** | Runtime errors in browser console or test code. | Uncaught exceptions, missing dependencies, broken scripts. | | **Browser Issues** | Problems with browser launch, context, or rendering. | Driver version mismatches, resource limits, CI configuration. | | **Other Failures** | Errors that don't fit the above categories. | Setup failures, file system issues, environment misconfiguration. | ## Metrics Three tiles summarize the error landscape for your selected filters. ![Error-Tiles](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-tiles.webp) ### 1. Total Errors The total count of all error occurrences across every test run in the selected period. This number includes repeated errors from the same test across multiple runs. ### 2. Unique Error Types The number of distinct error signatures detected. A lower count with a high total error count indicates a few recurring errors, making them good candidates for immediate attention. ### 3. Affected Tests The count of unique test cases that encountered at least one error. This tells you how widespread the problem is across your test suite. ## Error Message Over Time ![Error message over time](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-message-over-time.webp) A line graph showing how each error category trends day by day. The y-axis represents the number of errors, and the x-axis shows dates within your selected period. Each error category appears as a separate line with its own color. Hover over any point to see the exact counts for that day. ### What it helps you find: - Spikes caused by unstable APIs - Recurring failures tied to UI changes - Slowdowns in CI that trigger timeouts - Selectors that break after layout shifts - Gradual increases that signal growing instability - Improvements after a fix; watch the line drop ## Error Categories A table that breaks down every error message by type and occurrence count. ![Error Message](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/error-message.webp) ### How does this work? Each row represents an **error category**: Assertion Failures, Timeout Issues, Element Not Found, Network Issues, JavaScript Errors, Browser Issues, or Other Failures. > **Note:** Using the **Error Types** filter at the top of the tab, you can either show all error types or focus on a specific category. The category header shows two numbers: how many unique error messages fall under that type and their combined occurrence count. Expand any category to see the individual error messages inside it. | Column | What It Shows | | :----------------- | :--------------------------------------------------------- | | **Error Message** | The actual error text or description returned by the test. | | **Occurrences** | Total times this exact error appeared across all runs. | | **Tests Affected** | Number of unique test cases that hit this error. | | **First Detected** | The date this error first appeared in your project. | | **Last Detected** | The most recent date this error occurred. | ### Side Panel Click any error row to open a detail panel. This panel shows every test case affected by that error.
![Analytics-Error-Side panel](https://testdinostr.blob.core.windows.net/docs/docs/analytics/error/analytics-error-side-panel.webp) For each test case, you see: - **Test Name:** A clickable link that opens the Test Case Details page. - **Duration:** How long the test took to run. - **Last Run:** When this test last ran, with the run ID as a clickable link to the Test Run page. - **Occurrences:** How many times this specific test has hit this error.
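To illustrate how raw messages map onto the categories in the Error Type Reference above, here is a minimal sketch of a keyword-based classifier. The patterns are illustrative assumptions; TestDino's actual grouping logic is not documented on this page.

```ts
// Illustrative sketch: keyword-based routing into the seven error categories.
// The regex patterns are assumptions, not TestDino's actual grouping rules.
const patterns: Array<[category: string, pattern: RegExp]> = [
  ['Assertion Failures', /expect\(|assert|toBe|toEqual/i],
  ['Timeout Issues', /timeout|timed out|exceeded/i],
  ['Element Not Found', /locator|element.*not found|not visible/i],
  ['Network Issues', /net::|ECONNREFUSED|fetch failed|status 5\d\d/i],
  ['JavaScript Errors', /ReferenceError|TypeError|is not a function/],
  ['Browser Issues', /browser.*(closed|crashed)|failed to launch/i],
];

function categorize(message: string): string {
  for (const [category, pattern] of patterns) {
    if (pattern.test(message)) return category;
  }
  return 'Other Failures'; // everything that matches no known pattern
}

// e.g. -> "Timeout Issues"
console.log(categorize('Test timeout of 30000ms exceeded.'));
```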