Parameters, examples, and expected outputs for each TestDino MCP tool, so you can prompt reliably and debug faster.

1. health

Verifies the server is running and validates your API token. The health tool performs a quick validation and returns:
  • PAT validation status
  • Connection status with TestDino
  • Organisation access permissions
  • Project access within each organisation
  • Available modules for each project (Test runs, Test case management)

Parameters

No parameters required.

Why use the health tool?

After running the health tool, tell the AI assistant which organisation or project you’re working on. The assistant automatically resolves and stores the corresponding projectId, eliminating the need to repeatedly specify project names or IDs in future tool calls.

Example Usage
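The sketches in these Example Usage sections are illustrative only: each shows the tool name plus a JSON arguments object, and every concrete value is a placeholder rather than part of the tool specification. For health, the arguments object is simply empty:

```json
{
  "tool": "health",
  "arguments": {}
}
```

A natural follow-up is to tell the assistant which project you work in (for example, "I'm working on the checkout-service project"), so it can resolve and store the projectId for later calls.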

2. list_testruns

Lists runs and supports filtering by branch, environment, time window, author, and commit.
Use it to locate the exact run you want to inspect before calling get_run_details.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| project-id/name | string | Yes | - | Project ID or name identifying which project's test runs to list. |
| by_branch | string | No | - | Git branch name, e.g., main, develop. |
| by_time_interval | string | No | - | Time filters: 1d, 3d, weekly, monthly, or a date range: YYYY-MM-DD, YYYY-MM-DD. |
| by_author | string | No | - | Commit author name; case-insensitive partial match. |
| by_commit | string | No | - | Commit hash (full or partial). |
| by_environment | string | No | - | Environment, e.g., production, staging, development. |
| limit | number | No | 20 | Results per page. Max 1000. |
| page | number | No | 1 | Page number for pagination. |
| get_all | boolean | No | false | Retrieve all results, up to 1000. |
Note:
  • Filters can be combined.
  • Pagination uses page and limit. get_all=true fetches up to 1000 records.

Example Usage
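An illustrative call that combines filters; the project name, branch, and interval are placeholders, and the argument keys are written exactly as the parameter table labels them:

```json
{
  "tool": "list_testruns",
  "arguments": {
    "project-id/name": "checkout-service",
    "by_branch": "main",
    "by_time_interval": "weekly",
    "limit": 20,
    "page": 1
  }
}
```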

3. get_run_details

Returns a full report for one run, including suite breakdowns, test cases, failure categories, rerun metadata, and raw JSON.

Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| project-id/name | string | No* | Project ID or name identifying the project that contains the test run. |
| testrun_id | string | No | Single ID or comma-separated IDs for batch lookup (max 20). Project ID/name is not required when testrun_id is provided. |
| counter + projectId/name | number | No | Sequential run counter number, combined with the project ID/name. |
Note:
  • Provide testrun_id when you already have a stable run identifier.
  • Provide counter with projectId/name when your team references runs by sequence number.

Example Usage
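Two illustrative ways to identify the run (the IDs and counter value below are placeholders): by run ID, or by counter plus project ID/name.

```json
{
  "tool": "get_run_details",
  "arguments": {
    "testrun_id": "<run-id-1>,<run-id-2>"
  }
}
```

```json
{
  "tool": "get_run_details",
  "arguments": {
    "counter": 128,
    "project-id/name": "checkout-service"
  }
}
```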

4. list_testcase

Lists test cases and supports both run selection and case-level filters. How it works:
  1. It identifies matching runs (by run ID, counter, or run filters like branch and time)
  2. It returns test cases from those runs
  3. It applies case-level filters (status, tag, browser, error category, runtime, artifacts)

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| by_testrun_id | string | No* | Single or multiple run IDs (comma-separated, max 20). |
| counter + projectId/name | number + string | No* | Run counter along with project ID/name. Alternative to by_testrun_id. |
| by_status | string | No | Passed, failed, skipped, flaky. |
| by_spec_file_name | string | No | Filter by spec file name. |
| by_error_category | string | No | Error category name. |
| by_browser_name | string | No | Browser name, e.g., chromium. |
| by_tag | string | No | Tag or comma-separated tags. |
| by_total_runtime | string | No | Time filter using operators, e.g., <60, >100. |
| by_artifacts | boolean | No | True to return only cases with artifacts. |
| by_error_message | string | No | Partial match on error message. |
| by_attempt_number | number | No | Attempt number to filter by. |
| by_branch | string | No | Branch name; first filters runs, then returns cases. |
| by_time_interval | string | No | Time interval: supports 1d, 3d, weekly, monthly, and date ranges. |
| limit | number | No | Results per page. Default: 1000, max: 1000. |
| page | number | No | Page number. Default: 1. |
| get_all | boolean | No | Get all results, up to 1000. |
* Provide at least one of these:
  • by_testrun_id or counter + projectId/name, or
  • a run filter like by_branch and by_time_interval

Example Usage
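A sketch that selects runs by filter (branch and time window) rather than by explicit run ID, then narrows to failed cases that have artifacts; all values are placeholders:

```json
{
  "tool": "list_testcase",
  "arguments": {
    "by_branch": "main",
    "by_time_interval": "3d",
    "by_status": "failed",
    "by_artifacts": true,
    "limit": 50
  }
}
```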

5. get_testcase_details

Fetches full debug context for a single test case, including retries and artifacts.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| testcase_id | string | No* | Test case ID; can be used alone. |
| testcase_name | string | No* | Test case name; must be used with testrun_id or counter + project ID/name. |
| testrun_id | string | No | Required when using testcase_name, to identify the run. |
| counter + projectId/name | number + string | No | Alternative to testrun_id when using testcase_name. |
* Provide either:
  • testcase_id, or
  • testcase_name plus testrun_id or counter

Example Usage
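An illustrative lookup by test case name plus run ID (both values are placeholders); passing testcase_id on its own works the same way:

```json
{
  "tool": "get_testcase_details",
  "arguments": {
    "testcase_name": "checkout > applies discount code",
    "testrun_id": "<run-id>"
  }
}
```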

6. list_manual_test_cases

Searches manual test cases within a project. Key inputs:
  • projectId is required
  • search, suiteId, tags, and classification filters narrow results

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID that contains the test cases. |
| suiteId | string | No | Filter by specific test suite ID. Use list_manual_test_suites to find suite IDs. |
| search | string | No | Search term to match against title, description, or caseId. Example: 'login' or 'TC-123'. |
| status | string | No | Filter by test case status. Options: 'actual', 'draft', 'deprecated'. |
| priority | string | No | Filter by priority level. Options: 'critical', 'high', 'medium', 'low'. |
| severity | string | No | Filter by severity level. Options: 'critical', 'major', 'minor', 'trivial'. |
| type | string | No | Filter by test case type. Options: 'functional', 'smoke', 'regression', 'security', 'performance', 'e2e'. |
| layer | string | No | Filter by test layer. Options: 'e2e', 'api', 'unit'. |
| behavior | string | No | Filter by test behavior type. Options: 'positive', 'negative', 'destructive'. |
| automationStatus | string | No | Filter by automation status. Options: 'automated', 'manual', 'not_automated'. |
| tags | string | No | Filter by tags (comma-separated list). Example: 'smoke,regression' or 'critical'. |
| isFlaky | boolean | No | Filter test cases marked as flaky. Set to true to show only flaky tests, false for non-flaky. |
| limit | number | No | Maximum number of results to return (max: 1000). |

Example Usage
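A sketch of a filtered search; the project ID, search term, and tag values are placeholders:

```json
{
  "tool": "list_manual_test_cases",
  "arguments": {
    "projectId": "<project-id>",
    "search": "login",
    "priority": "high",
    "tags": "smoke,regression",
    "limit": 100
  }
}
```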

7. get_manual_test_case

Fetches one manual test case, including steps and custom fields.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID that contains the test case. |
| caseId | string | Yes | The test case identifier. Accepts either the internal _id or the human-readable ID (e.g., "TC-123"). |

Example Usage
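An illustrative call using the human-readable case ID (both values are placeholders):

```json
{
  "tool": "get_manual_test_case",
  "arguments": {
    "projectId": "<project-id>",
    "caseId": "TC-123"
  }
}
```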

8. create_manual_test_case

Creates a new manual test case under a specific suite with optional description, preconditions, steps, and classification fields. Steps use a classic structure with action, expected result, and optional data.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID where the test case will be created. |
| title | string | Yes | The test case title. |
| suiteId | string | Yes | The suite ID where this test case belongs. Use list_manual_test_suites to find suite IDs. |
| description | string | No | Detailed description of what the test covers. |
| preconditions | string | No | Requirements that must be set up before running this test. |
| steps | array | No | An array of step objects, each containing: action (string), expectedResult (string), and data (string). |
| priority | string | No | Priority level: critical, high, medium, or low. |
| severity | string | No | Severity if failed: critical, major, minor, or trivial. |
| type | string | No | Test type: functional, smoke, regression, security, performance, or e2e. |
| layer | string | No | Test layer: e2e, api, or unit. |
| behavior | string | No | Behavior type: positive, negative, or destructive. |

Example Usage
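A sketch of a new case with two classic steps; the title, step text, and classification values are placeholders chosen for illustration:

```json
{
  "tool": "create_manual_test_case",
  "arguments": {
    "projectId": "<project-id>",
    "suiteId": "<suite-id>",
    "title": "Login with valid credentials",
    "preconditions": "A registered user account exists.",
    "steps": [
      {
        "action": "Open the login page",
        "expectedResult": "The login form is displayed",
        "data": ""
      },
      {
        "action": "Submit valid credentials",
        "expectedResult": "The user lands on the dashboard",
        "data": "user@example.com"
      }
    ],
    "priority": "high",
    "type": "functional",
    "behavior": "positive"
  }
}
```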

9. update_manual_test_case

Updates only the fields you provide in updates. All other fields remain unchanged.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID containing the test case. |
| caseId | string | Yes | The test case identifier (internal _id or human-readable ID like "TC-123"). |
| updates | object | Yes | Object containing the fields to update. Can include: title, description, steps, status, priority, severity, type, layer, behavior, preconditions, and more. |

Example Usage
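An illustrative partial update; only the fields inside updates change, and the IDs and values shown are placeholders:

```json
{
  "tool": "update_manual_test_case",
  "arguments": {
    "projectId": "<project-id>",
    "caseId": "TC-123",
    "updates": {
      "priority": "critical",
      "status": "actual",
      "description": "Covers the revised login flow."
    }
  }
}
```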

10. list_manual_test_suites

Returns your suite hierarchy to help you locate suite IDs.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID to list suites from. |
| parentSuiteId | string | No | If provided, returns only the child suites of this parent. Leave empty to get top-level suites. |

Example Usage
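A minimal sketch that lists top-level suites (the project ID is a placeholder); add parentSuiteId to drill into a suite's children:

```json
{
  "tool": "list_manual_test_suites",
  "arguments": {
    "projectId": "<project-id>"
  }
}
```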

11. create_manual_test_suite

Creates a new suite. Use parentSuiteId to nest it under an existing suite.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID where the suite will be created. |
| name | string | Yes | The name of the new test suite. |
| parentSuiteId | string | No | If provided, creates this suite as a child of the specified parent. Leave empty to create a top-level suite. |

Example Usage
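An illustrative call that nests a new suite under an existing parent (all values are placeholders); omit parentSuiteId to create it at the top level:

```json
{
  "tool": "create_manual_test_suite",
  "arguments": {
    "projectId": "<project-id>",
    "name": "Checkout",
    "parentSuiteId": "<parent-suite-id>"
  }
}
```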

12. debug_testcase

Debugs a specific test case by fetching aggregated execution and failure data from TestDino reports to identify failure patterns and root causes.

Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | The project ID containing the test case. |
| testcase_name | string | Yes | The name of the test case to debug. |
The tool aggregates historical execution data for a specific test case across multiple test runs to provide:
  • Root cause analysis: Fetches historical execution data and analyzes past error messages, artifacts, stack traces, and error categories to determine the root cause of failures
  • Failure patterns: Analyzes common error categories, messages, and locations to identify patterns that can be used to resolve test cases
  • Fix recommendation: Recommends fixes based on historical analysis, failure patterns, and root cause determination
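
Example Usage

An illustrative call (both values are placeholders):

```json
{
  "tool": "debug_testcase",
  "arguments": {
    "projectId": "<project-id>",
    "testcase_name": "checkout > applies discount code"
  }
}
```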
Note: While debugging a test case, the AI may ask for fixture or test code based on past failures and patterns it finds. How you should treat these fixes depends on the level of access you have to the system you're testing.

In setups where the tester is working closely with the application code, these fixes are usually safe to work with. The tester can cross-check UI changes, handle breaking updates, and adjust logic where needed. In this kind of environment, the AI is helping speed things up, not making decisions in isolation.

Things are different when the tester is validating a platform without access to its development code. In such cases, the AI has no way of knowing whether a failure is caused by a recent UI change, an internal behavior update, or a critical platform-level issue. Applying code changes directly here can quietly break tests or make them unstable over time.

Because of that, when source code visibility is limited, AI-generated fixes should be treated as advice, not final changes. Use the recommendations to understand why the test is failing, then validate and adjust manually based on what you observe in the product.