Overview
OneTest uses an AI-powered test execution engine through the OneTest QA Agent. Tests stored in OneTest are executed via the /qa-onetest skill in Claude Code, which drives a real browser, records results per step, and reports everything back to OneTest automatically.
Prerequisites
Install Claude Code
Install Claude Code — the CLI tool that runs the QA agent.
Configure the OneTest MCP Server
Add the test-management MCP server manually to your .mcp.json (project root). Get your API key from tms.onetest.ai -> Settings -> API Keys.
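As a sketch, a test-management entry in .mcp.json could take the shape below. The launcher command, package name, and ONETEST_API_KEY variable are illustrative assumptions, not official values; check your OneTest settings page for the actual configuration. Only the test-management server name comes from this page.

```json
{
  "mcpServers": {
    "test-management": {
      "command": "npx",
      "args": ["-y", "@onetest/mcp-server"],
      "env": {
        "ONETEST_API_KEY": "<your API key from tms.onetest.ai>"
      }
    }
  }
}
```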
Running a Test Run
Use the /qa-onetest run command in Claude Code to create, execute, and complete a full test run.
Execute Tests
For each test case in the run, the agent:
- Navigates to the target URL in a real browser
- Executes each test step via Chrome DevTools Protocol (CDP)
- Captures screenshots before and after each step
- Checks the browser console for errors after each step
- Validates actual results against expected outcomes
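The per-step loop above can be sketched in Python. Everything here is hypothetical scaffolding: FakeBrowser stands in for a real CDP session, and execute_steps and the step dictionaries are illustrative names, not the agent's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class StepResult:
    action: str
    passed: bool
    console_errors: list = field(default_factory=list)

class FakeBrowser:
    """Stand-in for a CDP-driven Chrome session (hypothetical interface)."""
    def screenshot(self):
        return b"png-bytes"  # a real session would return actual PNG data

    def perform(self, action):
        # A real agent would issue CDP commands (Input.dispatchMouseEvent, etc.)
        return "ok" if action != "click #missing" else "element not found"

    def console_errors(self):
        return []  # a real session would read console events from the browser

def execute_steps(steps, browser):
    results = []
    for step in steps:
        browser.screenshot()                      # capture evidence before the step
        actual = browser.perform(step["action"])  # drive the step in the browser
        browser.screenshot()                      # capture evidence after the step
        errors = browser.console_errors()         # check the console for errors
        passed = actual == step["expected"] and not errors
        results.append(StepResult(step["action"], passed, errors))
    return results
```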
Record Results
The agent records the result for each test execution:
- Pass / Fail / Blocked / Skipped status
- Step-level results with screenshots
- Failure reasons and classification
- Links to defects if applicable
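A minimal sketch of such a record in Python, assuming illustrative field names (the actual OneTest schema may differ):

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"
    SKIPPED = "skipped"

@dataclass
class ExecutionRecord:
    test_case_id: str
    status: Status
    failure_reason: str = ""          # what went wrong, for failed tests
    classification: str = ""          # e.g. "product defect", "environment issue"
    defect_links: list = field(default_factory=list)  # related bug tracker URLs

# Hypothetical example of a failed execution:
record = ExecutionRecord(
    test_case_id="TC-101",
    status=Status.FAIL,
    failure_reason="Login button unresponsive",
    classification="product defect",
    defect_links=["https://tracker.example.com/BUG-42"],
)
```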
All Commands
The /qa-onetest skill supports four commands:
run
Create, execute, and complete a full test run. The agent drives a real browser and records all results back to OneTest.
push findings
Convert QA audit findings into OneTest test cases. Maps priorities (p0-p3) and categories (accessibility, security, performance, etc.) automatically.
pull tests
Fetch test cases from OneTest for local browser execution.
status
Show the execution queue and any active runs.
How Execution Works
Browser-Based Testing
The agent executes tests in a real Chrome browser using the Chrome DevTools Protocol (CDP). This means tests interact with your application exactly as a user would: clicking buttons, filling forms, navigating pages, and verifying visual output.
Exploratory Findings
If the agent discovers issues during test execution that aren’t covered by existing test cases, it can record them as exploratory findings using record_exploratory_result. These appear in your test run alongside the planned test results.
Parallel Execution
Non-conflicting tests can run simultaneously using separate Chrome instances on different CDP ports. This speeds up execution when tests target different URLs or isolated user sessions.
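One way to sketch the port assignment, assuming the simplified rule that tests conflict only when they share a target URL (assign_ports and the test dictionaries are hypothetical; the real agent's conflict rules also consider user sessions):

```python
from itertools import count

BASE_PORT = 9222  # Chrome's conventional default remote-debugging port

def assign_ports(tests):
    """Group tests by target URL; each group gets its own Chrome instance/port."""
    ports = {}
    next_port = count(BASE_PORT)
    for test in tests:
        url = test["url"]
        if url not in ports:
            ports[url] = next(next_port)  # new URL -> new Chrome instance
        test["cdp_port"] = ports[url]
    return ports

# Each distinct port maps to a separately launched browser, e.g.:
#   chrome --headless --remote-debugging-port=<port>
tests = [
    {"name": "login", "url": "https://app.example.com"},
    {"name": "search", "url": "https://app.example.com"},
    {"name": "docs nav", "url": "https://docs.example.com"},
]
port_map = assign_ports(tests)
```

Tests sharing a URL run serially on one instance, while distinct URLs get parallel instances.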
Recording Results
Every test execution records detailed results back to OneTest.
Step-Level Results
Each step in a test case gets its own pass/fail status, along with:
- Screenshots captured before and after the step
- Console output from the browser
- Actual vs. expected result comparison
Failure Classification
When a test fails, the agent records:
- Failure reason: What went wrong
- Failure classification: Environment issue, product defect, test data issue, etc.
- Defect links: Links to related bug tracker issues
Run Analytics
After the run completes, OneTest provides analytics:
- Pass/fail/skip/blocked rates
- Completion percentage
- Total execution duration
- Failure breakdown by category
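The run-level numbers can be derived from the per-test statuses. A sketch, with run_analytics as a hypothetical helper rather than a OneTest API:

```python
from collections import Counter

def run_analytics(statuses, durations):
    """Summarize a completed run from per-test statuses and durations (illustrative)."""
    counts = Counter(statuses)
    total = len(statuses)
    return {
        "pass_rate": counts["pass"] / total,
        "fail_rate": counts["fail"] / total,
        # Treat skipped tests as never executed for completion purposes.
        "completion": (total - counts["skipped"]) / total,
        "total_duration_s": sum(durations),
    }

summary = run_analytics(
    ["pass", "pass", "fail", "skipped"],
    [12.0, 8.5, 20.0, 0.0],
)
```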
Typical Workflow
A common end-to-end workflow looks like this:
- Audit your site with QA specialist skills (/qa-accessibility, /qa-security, etc.)
- Push findings to OneTest as test cases with /qa-onetest push findings
- Run tests with /qa-onetest run to execute them in a real browser
- View results in the OneTest dashboard to analyze trends and track regressions
What’s Next?
Viewing Results
Analyze test results and trends in the dashboard
QA Agent on GitHub
Full documentation for the QA agent and all skills

