Accessing the Report
Navigate to Analytics from the main navigation. The Automation Coverage dashboard loads automatically for the current product.

Dashboard Overview
The report has four sections:

Summary Cards
- Total Test Cases — All non-archived test cases in the product
- Marked Automated — Percentage of test cases with execution type set to “automated”
- Linked to Results — Test cases with actual CI/CD execution evidence
- Automation Gap — Test cases marked automated but with no CI/CD evidence (highlighted in red when > 0)
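To make the relationship between the four cards concrete, here is a minimal sketch of how they could be derived from a list of test cases. The field names (`archived`, `execution_type`, `has_ci_result`) are simplified stand-ins for illustration, not OneTest's actual data model.

```python
def summarize(test_cases):
    """Compute the four summary-card values from a list of test-case dicts."""
    active = [t for t in test_cases if not t.get("archived")]
    marked = [t for t in active if t.get("execution_type") == "automated"]
    linked = [t for t in marked if t.get("has_ci_result")]
    return {
        "total": len(active),
        "marked_automated_pct": round(100 * len(marked) / len(active), 1) if active else 0.0,
        "linked_to_results": len(linked),
        "automation_gap": len(marked) - len(linked),  # highlighted red when > 0
    }

cases = [
    {"execution_type": "automated", "has_ci_result": True},
    {"execution_type": "automated", "has_ci_result": False},
    {"execution_type": "manual"},
    {"execution_type": "manual", "archived": True},  # excluded from all counts
]
print(summarize(cases))
# {'total': 3, 'marked_automated_pct': 66.7, 'linked_to_results': 1, 'automation_gap': 1}
```

Note that the Automation Gap is the difference between the cases marked automated and those with CI/CD evidence, which is why it drops to zero only when every marked-automated test is linked.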
Distribution Charts
Two charts show how your test suite is composed:

- Execution Type (pie chart) — Manual vs Automated split
- Test Category (bar chart) — Breakdown by category: functional, performance, security, accessibility, exploratory
Gap Analysis Bar
The most important visualization: a stacked horizontal bar that breaks down your “marked automated” test cases into three segments:

| Segment | Color | Meaning |
|---|---|---|
| Linked to CI/CD | Green | Test has matching execution results from your CI/CD pipeline |
| No test ref set | Orange | Test is marked automated but has no automation_test_id configured — link is missing |
| Has ref, no CI/CD match | Red | Test has an automation reference but it doesn’t match any code_ref in CI/CD results |
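The three segments follow directly from two checks per test case. The sketch below is illustrative only (the dict keys and the set of CI/CD references are assumed shapes, not OneTest internals):

```python
def classify(test_case, ci_code_refs):
    """Assign a marked-automated test case to one of the three gap-analysis segments."""
    ref = test_case.get("automation_test_id")
    if not ref:
        return "no_ref"    # orange: marked automated, no reference configured
    if ref in ci_code_refs:
        return "linked"    # green: matching execution evidence from CI/CD
    return "no_match"      # red: reference set, but nothing in CI/CD matches it

ci_refs = {"test_login.py:TestLogin.test_valid_login"}
print(classify({"automation_test_id": "test_login.py:TestLogin.test_valid_login"}, ci_refs))  # linked
print(classify({}, ci_refs))                                                                  # no_ref
print(classify({"automation_test_id": "test_login.py:TestLogin.test_old_name"}, ci_refs))     # no_match
```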
Coverage by Priority & Category
Two stacked bar charts showing automated vs manual distribution:

- By Priority — Are your P0/P1 critical tests automated?
- By Category — Which test categories have the best automation coverage?
Understanding the Gap
Green: Linked to CI/CD
These test cases have confirmed execution evidence. OneTest matched the test case’s automation_test_id to a code_ref in your CI/CD test results. No action needed — these tests are running.
Orange: No Test Ref Set
These test cases are marked as automated (execution_type = automated) but don’t have an automation_test_id set. Without this reference, OneTest can’t match them to CI/CD results.

Action: Set the automation_test_id field on these test cases. The value should match the test’s code reference in your framework:
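For a pytest-style framework, the value would take the file:Class.method form described under How Matching Works:

```
test_login.py:TestLogin.test_valid_login
```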
Red: Has Ref, No CI/CD Match
These are the real automation gaps. The test case has an automation_test_id but it doesn’t match any code_ref in your CI/CD results. Common causes:

- Test was renamed — the class or method name changed but automation_test_id wasn’t updated
- Test was deleted — the automation code was removed
- Test is disabled — skipped or excluded from CI/CD runs
- Format mismatch — automation_test_id uses the :: separator but CI/CD sends : (OneTest normalizes these, but edge cases exist)

Action: Update the automation_test_id to match the current test name, or change execution_type back to “manual” if the automation was removed.

How Matching Works
OneTest links test cases to CI/CD results by comparing two values:

| Source | Field | Format | Example |
|---|---|---|---|
| Test Case | automation_test_id | file:Class.method | test_login.py:TestLogin.test_valid_login |
| CI/CD Results | code_ref | file:Class.method | test_login.py:TestLogin.test_valid_login |
Format Normalization
OneTest automatically normalizes common format differences:

| Input Format | Normalized |
|---|---|
| test_login.py::TestLogin::test_valid_login | test_login.py:TestLogin.test_valid_login |
| test_login.py:TestLogin.test_valid_login | test_login.py:TestLogin.test_valid_login |
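A minimal sketch of the normalization shown in the table, collapsing pytest-style `::` separators to the canonical file:Class.method form. This is an illustration of the rule, not OneTest's actual implementation, which may handle additional edge cases:

```python
def normalize(ref: str) -> str:
    """Collapse 'file.py::Class::method' to 'file.py:Class.method'."""
    # The first '::' splits the file from the dotted path; any later '::'
    # separators become '.' within that path.
    file_part, sep, rest = ref.partition("::")
    if not sep:
        return ref  # already in the normalized form
    return f"{file_part}:{rest.replace('::', '.')}"

print(normalize("test_login.py::TestLogin::test_valid_login"))
# test_login.py:TestLogin.test_valid_login
print(normalize("test_login.py:TestLogin.test_valid_login"))
# test_login.py:TestLogin.test_valid_login (unchanged)
```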
The code_ref values come from your test reporting agent (e.g., pytest-reportportal). OneTest doesn’t control this format — it adapts to what your CI/CD sends.

Multiple References
A single test case can map to multiple automated tests. When automation_test_id contains multiple references (one per line), a match on any reference counts as linked:
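For example, a test case covered by two automated tests might hold both references, one per line (the second reference here is illustrative):

```
test_login.py:TestLogin.test_valid_login
test_login_api.py:TestLoginApi.test_valid_login
```

If CI/CD reports a code_ref matching either line, the test case counts as linked.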
Setting Up Automation References
From the UI
- Open a test case
- Set Execution Type to “Automated”
- Fill in the Automation Test ID field with the test’s code reference
- Save
Via API
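The same two fields set in the UI can be set programmatically. The sketch below only shows the payload shape; the endpoint path, HTTP method, and authentication are hypothetical assumptions, not OneTest's documented API:

```python
import json

# Mirror of the UI steps: mark the case automated and set its code reference.
payload = {
    "execution_type": "automated",
    "automation_test_id": "test_login.py:TestLogin.test_valid_login",
}
body = json.dumps(payload)
print(body)

# A client would then send this to the test-case update endpoint, e.g.
# (hypothetical path and auth header):
# requests.patch(f"{BASE_URL}/api/test-cases/{case_id}", data=body,
#                headers={"Authorization": f"Bearer {token}"})
```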
From CI/CD (Automatic)
When using pytest-reportportal or similar agents, test results include code_ref automatically. OneTest matches these to test cases with matching automation_test_id values.
API Reference
- no_cache=true — Bypass the 5-minute cache for fresh data
Best Practices
Set automation_test_id early
When you write automation code, immediately set the automation_test_id on the matching test case. Don’t wait for CI/CD to catch up.

Review the gap weekly
Check the Automation Coverage report regularly. A growing red segment means automation is silently breaking.
Match the format
Use the file.py:Class.method format for automation_test_id. This matches what pytest-reportportal sends as code_ref.

Prioritize P0/P1 coverage
Use the Coverage by Priority chart to ensure critical tests have the highest automation coverage and lowest gap.

