What you’ll learn
  • How to upload Playwright results from Azure DevOps pipelines to TestDino
  • How to configure sharded test runs with merged reporting
  • How to set up the TESTDINO_TOKEN as a secret pipeline variable
Set up an Azure DevOps pipeline to upload Playwright test results to TestDino and view aggregated analytics, failure analysis, and flaky test detection on your dashboard. This guide covers a basic pipeline and a sharded pipeline that runs tests in parallel across multiple jobs and merges their reports before uploading.

Prerequisites

Before setting up, make sure your Playwright config includes the JSON reporter, which produces the report.json that TestDino reads. The HTML reporter is optional:
playwright.config.js
// ...existing config

reporter: [
  ['html', { outputFolder: './playwright-report' }],  // Optional
  ['json', { outputFile: './playwright-report/report.json' }],  // ✅ Required
]

Set Up Your API Key

Store your TestDino API key as a secret pipeline variable so it is available to your pipeline without exposing it in logs or config files.
  1. Open your Azure DevOps pipeline
  2. Click Edit on the pipeline
  3. Click Variables
  4. Click New variable
  5. Set the name to TESTDINO_TOKEN
  6. Paste your TestDino API key as the value
  7. Check Keep this value secret
  8. Save the pipeline
Warning Never commit your API key directly in pipeline files. Always use secret variables. Secret variables are not exposed in pipeline logs.
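If your team keeps secrets in a variable group (for example, one backed by Azure Key Vault), the pipeline YAML can reference the group instead of a per-pipeline variable. A sketch, where testdino-secrets is a placeholder group name that would contain TESTDINO_TOKEN:

```yaml
variables:
  - group: testdino-secrets  # placeholder name; must contain TESTDINO_TOKEN
  - name: CI
    value: "true"
```

Note that switching to the list form of variables: means every variable, including CI, must use the list syntax.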

Basic Pipeline Config

For a simple setup without sharding, add the upload step after your Playwright tests.
azure-pipelines.yml
trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

variables:
  CI: "true"

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self

  - task: UseNode@1
    inputs:
      version: "20.x"
    displayName: Install Node.js

  - script: npm ci
    displayName: Install dependencies

  - script: npx playwright install --with-deps
    displayName: Install Playwright browsers

  - script: npx playwright test
    displayName: Run Playwright tests

  - script: npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN"
    displayName: Upload results to TestDino
    condition: always()
    env:
      TESTDINO_TOKEN: $(TESTDINO_TOKEN)
Tip The condition: always() ensures the upload runs even if tests fail. The env block maps the secret variable to an environment variable accessible in the script.

Upload Options

Flag                    Description                                           Default
--environment <value>   Target environment tag (staging, production, qa)      unknown
--tag <values>          Comma-separated run tags for categorization (max 5)   None
--upload-images         Upload image attachments                              false
--upload-videos         Upload video attachments                              false
--upload-html           Upload HTML reports                                   false
--upload-traces         Upload trace files                                    false
--upload-files          Upload file attachments (.md, .pdf, .txt, .log)       false
--upload-full-json      Upload all attachments                                false
--json                  Output results as JSON to stdout (for CI/CD)          false
-v, --verbose           Enable verbose logging                                false
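As an illustration, the basic upload step could be extended with a few of these flags. The environment and tag values below are placeholders, not required names:

```yaml
- script: >
    npx tdpw upload ./playwright-report
    --token="$TESTDINO_TOKEN"
    --environment staging
    --tag smoke,regression
    --upload-traces
  displayName: Upload results to TestDino
  condition: always()
  env:
    TESTDINO_TOKEN: $(TESTDINO_TOKEN)
```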

Sharded Test Runs

For larger test suites, an Azure DevOps matrix strategy splits tests across multiple jobs. Each shard produces a blob report, and the blob reports are merged before uploading to TestDino.

How it works

  1. Azure DevOps runs Playwright across 4 shards using a matrix strategy
  2. Each shard publishes its blob report as a pipeline artifact
  3. A separate MergeAndUpload stage downloads all blob reports, merges them into a single report.json, and uploads to TestDino
  4. The merge stage runs even if some shards fail (condition: always())

Full sharded config

azure-pipelines.yml
trigger:
  branches:
    include:
      - main

pr:
  branches:
    include:
      - main

variables:
  CI: "true"

stages:
  - stage: Test
    jobs:
      - job: Playwright
        pool:
          vmImage: ubuntu-latest
        strategy:
          matrix:
            shard1:
              SHARD: 1/4
            shard2:
              SHARD: 2/4
            shard3:
              SHARD: 3/4
            shard4:
              SHARD: 4/4
        steps:
          - checkout: self

          - task: UseNode@1
            inputs:
              version: "20.x"
            displayName: Install Node.js

          - script: npm ci
            displayName: Install dependencies

          - script: npx playwright install --with-deps
            displayName: Install Playwright browsers

          - script: npx playwright test --shard=$(SHARD)
            displayName: Run Playwright shard $(SHARD)

          - task: PublishTestResults@2
            condition: always()
            inputs:
              testResultsFormat: JUnit
              testResultsFiles: test-results/junit.xml
              mergeTestResults: false
              testRunTitle: Playwright shard $(SHARD)
            displayName: Publish shard test results

          - task: PublishPipelineArtifact@1
            condition: always()
            inputs:
              targetPath: blob-report
              artifact: blob-report-$(System.JobPositionInPhase)
              publishLocation: pipeline
            displayName: Publish shard blob report

  - stage: MergeAndUpload
    dependsOn: Test
    condition: always()
    jobs:
      - job: MergeAndUpload
        pool:
          vmImage: ubuntu-latest
        steps:
          - checkout: self

          - task: UseNode@1
            inputs:
              version: "20.x"
            displayName: Install Node.js

          - script: npm ci
            displayName: Install dependencies

          - task: DownloadPipelineArtifact@2
            inputs:
              patterns: "blob-report-*/*"
              path: all-blob-reports
            displayName: Download blob reports

          - script: |
              mkdir -p playwright-report all-blob-reports-flat
              find all-blob-reports -type f -exec cp {} all-blob-reports-flat/ \;
              if find all-blob-reports-flat -type f | grep -q .; then
                npx playwright merge-reports --reporter=json ./all-blob-reports-flat > playwright-report/report.json
              else
                echo "No blob report files were found to merge."
                exit 1
              fi
            displayName: Merge Playwright reports to JSON
            condition: always()

          - script: |
              if [ ! -f playwright-report/report.json ]; then
                echo "Merged report was not created, so TestDino upload is being skipped."
                exit 0
              fi
              npx tdpw upload ./playwright-report --token="$TESTDINO_TOKEN"
            displayName: Upload results to TestDino
            condition: always()
            env:
              TESTDINO_TOKEN: $(TESTDINO_TOKEN)

          - task: PublishPipelineArtifact@1
            condition: always()
            inputs:
              targetPath: playwright-report
              artifact: playwright-report
              publishLocation: pipeline
            displayName: Publish merged Playwright report
Info The MergeAndUpload stage uses dependsOn: Test with condition: always() so it runs even when some shards fail. The upload step also checks if the merged report exists before attempting the upload, avoiding unnecessary errors.

Key details in the sharded config

Config Block                    What It Does
strategy: matrix                Defines 4 shards with SHARD: 1/4 through 4/4
--shard=$(SHARD)                Passes the shard value directly to Playwright (Azure DevOps uses 1-based values)
PublishPipelineArtifact         Saves each shard's blob report as a named artifact (blob-report-1, blob-report-2, etc.)
DownloadPipelineArtifact        Downloads all blob report artifacts matching blob-report-*/*
find ... -exec cp               Flattens all blob files into a single directory for merging
merge-reports --reporter=json   Merges blob reports into a single report.json
condition: always()             Ensures merge and upload run regardless of shard outcomes
env: TESTDINO_TOKEN             Maps the secret variable to an environment variable for the script
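To change the shard count, only the matrix entries and the --shard denominator need to change. For example, a two-shard variant of the matrix above:

```yaml
strategy:
  matrix:
    shard1:
      SHARD: 1/2
    shard2:
      SHARD: 2/2
```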

Pipeline execution

After the pipeline runs, Azure DevOps shows all shard jobs and the merge stage in the pipeline view.

[Screenshot: Azure DevOps pipeline execution view showing 4 Playwright shard jobs and the MergeAndUpload stage]

Results in TestDino

Once uploaded, the test run appears in your TestDino dashboard with full failure details, flaky detection, and trend data.

[Screenshot: TestDino test run screen showing results uploaded from the Azure DevOps pipeline, with pass/fail counts and failure details]

Rerun Failed Tests

Cache test metadata to enable selective reruns:
- script: npx tdpw cache --token="$TESTDINO_TOKEN"
  displayName: Cache rerun metadata
  condition: always()
  env:
    TESTDINO_TOKEN: $(TESTDINO_TOKEN)
Rerun only failed tests on the next run:
- script: |
    FAILED=$(npx tdpw last-failed --token="$TESTDINO_TOKEN")
    if [ -n "$FAILED" ]; then
      npx playwright test $FAILED
    else
      echo "No failed tests found."
    fi
  displayName: Rerun failed tests
  env:
    TESTDINO_TOKEN: $(TESTDINO_TOKEN)
For advanced rerun strategies, caching patterns, and CI optimization techniques, see CI Optimization.

Troubleshooting

  • Add condition: always() to the upload step so it runs regardless of test exit code
  • For sharded runs, ensure the MergeAndUpload stage has condition: always() and dependsOn: Test
  • Verify your playwright.config.ts outputs to playwright-report/ (default location)
  • For sharded runs, ensure all shards publish blob-report as a pipeline artifact and the merge step writes to playwright-report/
  • Secret variables in Azure DevOps are not automatically available as environment variables. You must map them explicitly using the env block in the script step
  • Verify the variable name matches exactly: TESTDINO_TOKEN: $(TESTDINO_TOKEN)
  • Ensure each shard uses PublishPipelineArtifact with condition: always() to publish even on failure
  • The DownloadPipelineArtifact step uses pattern blob-report-*/*. Verify artifact names match this pattern.
  • Check that blob-report directory exists after running Playwright (configure reporter: [['blob']] in playwright.config.ts)
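When blob reports genuinely go missing, a temporary debug step right after the download can show what actually arrived. A sketch, meant to be removed once the pipeline is stable:

```yaml
- script: |
    echo "Downloaded blob report files:"
    find all-blob-reports -type f
  displayName: Debug downloaded blob reports
  condition: always()
```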

Next Steps

CI Optimization

Reduce CI time with smart reruns

Branch Mapping

Map branches to environments for organized test runs

Integrations

Connect Slack, Jira, Linear, Asana, and more

Azure DevOps Extension

View test runs inside Azure DevOps