Parameters, examples, and expected outputs for each TestDino MCP tool, so you can prompt reliably and debug faster.
1. health
Verifies the server is running and validates your API token.
The health tool performs a quick validation and returns:
- PAT validation status
- Connection status with TestDino
- Organisation access permissions
- Project access within each organisation
- Available modules for each project (Test runs, Test case management)
Parameters
No parameters required.
After running the health tool, tell the AI assistant which organisation or project you’re working on. The assistant automatically resolves and stores the corresponding projectId, eliminating the need to repeatedly specify project names or IDs in future tool calls.
Example Usage
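How you invoke a tool depends on your MCP client; in practice, you can simply ask the assistant to "run the TestDino health check". For a programmatic call, here is a minimal sketch using the official MCP TypeScript SDK, where the launch command (`npx testdino-mcp`) and the `TESTDINO_PAT` variable name are assumptions, not confirmed names:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Hypothetical launch command and env var name; substitute your actual
// TestDino MCP server invocation and personal access token (PAT).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["testdino-mcp"],
  env: { TESTDINO_PAT: process.env.TESTDINO_PAT ?? "" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// health takes no parameters; the result reports PAT validity,
// organisation/project access, and available modules.
const result = await client.callTool({ name: "health", arguments: {} });
console.log(result.content);
```

Subsequent tool calls differ only in the `name` and `arguments` passed to `callTool`, so the remaining examples show just the `arguments` payload.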
2. list_testruns
Lists runs and supports filtering by branch, environment, time window, author, and commit.
Use it to locate the exact run you want to inspect before calling get_run_details.
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| project-id/name | string | Yes | - | Project ID or name identifying which project's test runs to list. |
| by_branch | string | No | - | Git branch name, e.g., main, develop. |
| by_time_interval | string | No | - | Time filter: 1d, 3d, weekly, monthly, or a date range: YYYY-MM-DD, YYYY-MM-DD. |
| by_author | string | No | - | Commit author name; case-insensitive partial match. |
| by_commit | string | No | - | Commit hash (full or partial). |
| by_environment | string | No | - | Environment, e.g., production, staging, development. |
| limit | number | No | 20 | Results per page. Max 1000. |
| page | number | No | 1 | Page number for pagination. |
| get_all | boolean | No | false | Retrieve all results, up to 1000. |
Note:
- Filters can be combined.
- Pagination uses page and limit; get_all=true fetches up to 1000 records.
Example Usage
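A hypothetical `arguments` payload for a `callTool` request with `name: "list_testruns"`; the project name and filter values are illustrative:

```typescript
// Filters can be combined; values below are illustrative.
const listTestrunsArgs = {
  "project-id/name": "web-app",  // hypothetical project name
  by_branch: "main",
  by_environment: "staging",
  by_time_interval: "weekly",    // or a date range: "2024-01-01, 2024-01-31"
  limit: 20,
  page: 1,
};
```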
3. get_run_details
Returns a full report for one run, including suite breakdowns, test cases, failure categories, rerun metadata, and raw JSON.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
| project-id/name | string | No* | Project ID or name identifying the project that contains the test run. |
| testrun_id | string | No | Single ID or comma-separated IDs for batch lookup (max 20). project-id/name is not needed when testrun_id is provided. |
| counter + projectId/name | number | No | Sequential run counter; must be paired with the project ID or name. |
Note:
- Provide testrun_id when you already have a stable run identifier.
- Provide counter with projectId/name when your team references runs by sequence number.
Example Usage
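Two hedged examples of the `arguments` payload, one per lookup style (IDs, the counter value, and the project name are illustrative):

```typescript
// Lookup by run ID(s); no project needed. Batch lookups accept up to 20 IDs.
const byIdArgs = { testrun_id: "run_abc123, run_def456" };

// Lookup by sequential counter plus project (both required together).
// Key names follow the parameters table; the exact schema may differ.
const byCounterArgs = {
  counter: 128,
  "project-id/name": "web-app",
};
```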
4. list_testcase
Lists test cases and supports both run selection and case-level filters.
How it works:
- It identifies matching runs (by run ID, counter, or run filters like branch and time)
- It returns test cases from those runs
- It applies case-level filters (status, tag, browser, error category, runtime, artifacts)
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| by_testrun_id | string | No* | Single or multiple run IDs (comma-separated, max 20). |
| counter + projectId/name | number + string | No* | Run counter along with project ID or name. Alternative to by_testrun_id. |
| by_status | string | No | One of passed, failed, skipped, or flaky. |
| by_spec_file_name | string | No | Filter by spec file name. |
| by_error_category | string | No | Error category name. |
| by_browser_name | string | No | Browser name, e.g., chromium. |
| by_tag | string | No | Tag or comma-separated tags. |
| by_total_runtime | string | No | Runtime filter using operators, e.g., <60, >100. |
| by_artifacts | boolean | No | true to return only cases with artifacts. |
| by_error_message | string | No | Partial match on error message. |
| by_attempt_number | number | No | Attempt number to filter by. |
| by_branch | string | No | Branch name; filters runs first, then returns their cases. |
| by_time_interval | string | No | Time interval: 1d, 3d, weekly, monthly, or a date range. |
| limit | number | No | Results per page (default: 1000, max: 1000). |
| page | number | No | Page number (default: 1). |
| get_all | boolean | No | Retrieve all results, up to 1000. |
* Provide at least one of the following:
- by_testrun_id, or counter + projectId/name, or
- a run filter such as by_branch or by_time_interval.
Example Usage
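A hypothetical `arguments` payload that selects runs first (branch and time window) and then filters the cases they contain; all values are illustrative:

```typescript
const listTestcaseArgs = {
  // Run selection
  by_branch: "develop",
  by_time_interval: "3d",
  // Case-level filters
  by_status: "failed",
  by_browser_name: "chromium",
  by_total_runtime: ">100",  // operator syntax per the table above
  by_artifacts: true,
  limit: 100,
};
```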
5. get_testcase_details
Fetches full debug context for a single test case, including retries and artifacts.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| testcase_id | string | No* | Test case ID; can be used alone. |
| testcase_name | string | No* | Test case name; must be paired with testrun_id or counter + projectId/name. |
| testrun_id | string | No | Required when using testcase_name, to identify the run. |
| counter + projectId/name | number + string | No | Alternative to testrun_id when using testcase_name. |
* Provide either:
- testcase_id, or
- testcase_name plus testrun_id or counter + projectId/name.
Example Usage
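Hedged `arguments` payloads for both lookup styles (the IDs and case name are illustrative):

```typescript
// Either a direct test case ID on its own...
const byCaseId = { testcase_id: "tc_0123abcd" };

// ...or a case name scoped to a specific run.
const byName = {
  testcase_name: "checkout > applies discount code",
  testrun_id: "run_abc123",
};
```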
6. list_manual_test_cases
Searches manual test cases within a project.
Key inputs:
- projectId is required.
- search, suiteId, tags, and classification filters narrow the results.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID that contains the test cases. |
| suiteId | string | No | Filter by a specific test suite ID. Use list_manual_test_suites to find suite IDs. |
| search | string | No | Search term matched against title, description, or caseId. Example: 'login' or 'TC-123'. |
| status | string | No | Filter by test case status: actual, draft, or deprecated. |
| priority | string | No | Filter by priority level: critical, high, medium, or low. |
| severity | string | No | Filter by severity level: critical, major, minor, or trivial. |
| type | string | No | Filter by test case type: functional, smoke, regression, security, performance, or e2e. |
| layer | string | No | Filter by test layer: e2e, api, or unit. |
| behavior | string | No | Filter by test behavior: positive, negative, or destructive. |
| automationStatus | string | No | Filter by automation status: automated, manual, or not_automated. |
| tags | string | No | Filter by tags (comma-separated). Example: 'smoke,regression' or 'critical'. |
| isFlaky | boolean | No | Set to true to show only flaky tests, false for non-flaky. |
| limit | number | No | Maximum number of results to return (max: 1000). |
Example Usage
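A hypothetical `arguments` payload narrowing a manual-case search; the project ID and filter values are illustrative:

```typescript
const listManualArgs = {
  projectId: "proj_123",
  search: "login",
  status: "actual",
  priority: "high",
  tags: "smoke,regression",
  automationStatus: "not_automated",
  limit: 50,
};
```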
7. get_manual_test_case
Fetches one manual test case, including steps and custom fields.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID that contains the test case. |
| caseId | string | Yes | The test case identifier. Accepts either the internal _id or the human-readable ID (e.g., "TC-123"). |
Example Usage
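An illustrative `arguments` payload; either ID form works:

```typescript
// caseId accepts the internal _id or the human-readable ID.
const getManualArgs = { projectId: "proj_123", caseId: "TC-123" };
```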
8. create_manual_test_case
Creates a new manual test case under a specific suite with optional description, preconditions, steps, and classification fields. Steps use a classic structure with action, expected result, and optional data.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID where the test case will be created. |
| title | string | Yes | The test case title. |
| suiteId | string | Yes | The suite ID where this test case belongs. Use list_manual_test_suites to find suite IDs. |
| description | string | No | Detailed description of what the test covers. |
| preconditions | string | No | Requirements that must be in place before running this test. |
| steps | array | No | An array of step objects, each containing action (string), expectedResult (string), and optional data (string). |
| priority | string | No | Priority level: critical, high, medium, or low. |
| severity | string | No | Severity if failed: critical, major, minor, or trivial. |
| type | string | No | Test type: functional, smoke, regression, security, performance, or e2e. |
| layer | string | No | Test layer: e2e, api, or unit. |
| behavior | string | No | Behavior type: positive, negative, or destructive. |
Example Usage
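A hypothetical creation payload with two classic steps; the project and suite IDs and all field values are illustrative:

```typescript
const createCaseArgs = {
  projectId: "proj_123",
  suiteId: "suite_456",  // find via list_manual_test_suites
  title: "Login with valid credentials",
  preconditions: "A registered user account exists.",
  steps: [
    {
      action: "Navigate to the login page",
      expectedResult: "Login form is displayed",
      data: "",
    },
    {
      action: "Submit valid credentials",
      expectedResult: "User lands on the dashboard",
      data: "user@example.com",
    },
  ],
  priority: "high",
  type: "smoke",
};
```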
9. update_manual_test_case
Updates only the fields you provide in updates. All other fields remain unchanged.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID containing the test case. |
| caseId | string | Yes | The test case identifier (internal _id or human-readable ID like "TC-123"). |
| updates | object | Yes | Object containing the fields to update. Can include title, description, steps, status, priority, severity, type, layer, behavior, preconditions, and more. |
Example Usage
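An illustrative partial update; only the fields inside `updates` change:

```typescript
const updateCaseArgs = {
  projectId: "proj_123",
  caseId: "TC-123",
  updates: {
    status: "actual",
    priority: "critical",
  },
};
```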
10. list_manual_test_suites
Returns your suite hierarchy to help you locate suite IDs.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID to list suites from. |
| parentSuiteId | string | No | If provided, returns only the child suites of this parent. Leave empty to get top-level suites. |
Example Usage
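Illustrative payloads for both levels of the hierarchy:

```typescript
// Top-level suites: omit parentSuiteId.
const topLevelArgs = { projectId: "proj_123" };

// Children of one suite.
const childrenArgs = { projectId: "proj_123", parentSuiteId: "suite_456" };
```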
11. create_manual_test_suite
Creates a new suite. Use parentSuiteId to nest it under an existing suite.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID where the suite will be created. |
| name | string | Yes | The name of the new test suite. |
| parentSuiteId | string | No | If provided, creates this suite as a child of the specified parent. Leave empty to create a top-level suite. |
Example Usage
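An illustrative payload creating a nested suite; omit `parentSuiteId` for a top-level suite:

```typescript
const createSuiteArgs = {
  projectId: "proj_123",
  name: "Checkout flows",
  parentSuiteId: "suite_456",  // hypothetical parent suite ID
};
```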
12. debug_testcase
Debugs a specific test case by fetching aggregated execution and failure data from TestDino reports to identify failure patterns and root causes.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| projectId | string | Yes | The project ID containing the test case. |
| testcase_name | string | Yes | The name of the test case to debug. |
The tool aggregates historical execution data for a specific test case across multiple test runs to provide:
- Root cause analysis: Fetches historical execution data and analyzes past error messages, artifacts, stack traces, and error categories to determine the root cause of failures
- Failure patterns: Analyzes common error categories, messages, and locations to identify patterns that can be used to resolve test cases
- Fix recommendation: Recommends fixes based on historical analysis, failure patterns, and root cause determination
Note: While debugging a test case, the AI may ask for fixture or test code based on past failures and the patterns it finds. How you should treat these fixes depends on the level of access you have to the system you’re testing.

In setups where the tester works closely with the application code, these fixes are usually safe to apply. The tester can cross-check UI changes, handle breaking updates, and adjust logic where needed. In this kind of environment, the AI is speeding things up, not making decisions in isolation.

Things are different when the tester is validating a platform without access to its development code. In that case, the AI has no way of knowing whether a failure is caused by a recent UI change, an internal behavior update, or a critical platform-level issue. Applying code changes directly here can quietly break tests or make them unstable over time.

Because of that, when source code visibility is limited, treat AI-generated fixes as advice, not final changes. Use the recommendations to understand why the test is failing, then validate and adjust manually based on what you observe in the product.
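Example Usage
An illustrative `arguments` payload; both parameters are required:

```typescript
const debugArgs = {
  projectId: "proj_123",
  testcase_name: "checkout > applies discount code",  // illustrative name
};
```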