
General Questions

What is OneTest?
OneTest is an AI-powered testing platform that helps teams create, manage, and execute tests faster. It combines traditional test management with AI assistance to generate tests, search intelligently, and provide insights.
Key features:
  • AI-powered test generation
  • Powerful OQL query language
  • Test execution tracking
  • Team collaboration tools
  • Artifacts and evidence storage
What makes OneTest unique:
  • AI-First Design: Generate complete test cases by chatting with AI—no manual writing required
  • OQL Query Language: Search tests like you search issues in JIRA
  • Modern Interface: Clean, intuitive UI built for modern teams
  • Open Architecture: Integrations with your existing tools
  • All-in-One: Test management + artifacts + execution in one platform
Do I need to be technical to use OneTest?
No! OneTest is designed for both technical and non-technical users:
  • Manual testers: Create and execute tests through the UI
  • QA engineers: Use OQL for advanced queries, integrate with CI/CD
  • Developers: Write automated tests, use API
  • Product managers: Review test coverage, track quality
The AI Assistant makes it even easier—just describe what you want to test in plain English.
How much does OneTest cost?
Visit onetest.ai/pricing for current pricing information.
Free trial: 14 days, no credit card required
Plans: Starter, Professional, Enterprise
Billing: Monthly or annual
How does the coin system work?
OneTest uses a coin-based system for API usage tracking:
  • Browser usage is free — all UI actions cost zero coins
  • API key usage is metered — each API call costs 1 coin
  • Weekly budget — each product gets 1,000 coins per week (configurable)
  • Budget resets automatically each week
When coins run out, API requests are blocked until the budget resets or you top up. UI access is never affected. See Usage & Billing for details.
Use API keys for programmatic access:
  1. Go to Settings > API Keys
  2. Create a new key (shown only once — save it!)
  3. Include in requests: Authorization: Bearer ak_YOUR_KEY
API keys work with all OneTest services (test management, artifacts, receiver, etc.). Each API call costs 1 coin from your weekly budget.
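The Authorization header format can be sketched in Python. Note that the base URL and the /api/v1/products path below are assumptions for illustration only; consult the API reference for real endpoints.

```python
import urllib.request

API_KEY = "ak_YOUR_KEY"  # from Settings > API Keys -- shown only once, so save it

# Hypothetical example endpoint; the Bearer header format is as documented above.
req = urllib.request.Request(
    "https://tms.onetest.ai/api/v1/products",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

print(req.get_header("Authorization"))  # -> Bearer ak_YOUR_KEY
```

Remember that each request sent with an API key deducts 1 coin from the weekly budget, while the same actions in the browser UI are free.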

Getting Started

How do I create my first test?
Easiest way (AI):
  1. Press Cmd/Ctrl + K to open AI chat
  2. Type: “Create a test for user login with valid credentials”
  3. Review the generated test
  4. Click Save
Manual way:
  1. Go to Test Management
  2. Click “+ New Test”
  3. Fill in title, description, and steps
  4. Click Save
See Your First Test for detailed walkthrough.
How do I invite team members?
  1. Go to Settings > Members
  2. Click “+ Invite Member”
  3. Enter email address
  4. Select role (Owner or Member)
  5. Click Send Invite
They’ll receive an email invitation to join your product.
Can I import and export tests?
Import: OneTest supports importing tests from:
  • Excel spreadsheets (XLSX) — direct import with collision handling (skip, overwrite, or assign new IDs)
  • ZIP archives — full backup with folders, tags, and markdown files
  • AI-powered Pipelines — multi-phase ingestion for large imports with field mapping and decision gates
  • JUnit XML files — automated results via Integrations and API Keys
  • ReportPortal agents — drop-in compatible
Export: Export test cases to Excel (XLSX) or ZIP with flexible options:
  • Export all test cases
  • Export OQL-filtered results (e.g., priority = p1 AND status = ready)
  • Export selected tests (checkbox selection)
  • Export unassigned tests (not in any folder)
See Import & Export and Pipelines for details.
How do I integrate OneTest with CI/CD?
OneTest provides ReportPortal-compatible API keys for CI/CD integration:
  1. Go to Settings > Integrations and API Keys
  2. Click + Add API Key
  3. Configure your CI/CD pipeline:
RP_ENDPOINT=https://tms.onetest.ai/api/receiver
RP_PROJECT=<your-product-uuid>
RP_API_KEY=<your-api-key>
OneTest also supports JUnit XML import via POST /api/v1/products/{product_uuid}/import/junit. See Integrations and API Keys for detailed setup.
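As a sketch, the JUnit import endpoint above could be called like this in Python. Sending the raw XML body with an application/xml content type is an assumption — the endpoint may expect a multipart upload instead, so check Integrations and API Keys for the exact request shape:

```python
import urllib.request

API_KEY = "ak_YOUR_KEY"
PRODUCT_UUID = "your-product-uuid"  # placeholder for your product UUID

# Minimal JUnit XML report (one passing test) for illustration.
junit_xml = (
    b'<?xml version="1.0" encoding="UTF-8"?>'
    b'<testsuite name="smoke" tests="1" failures="0">'
    b'<testcase classname="auth" name="test_login"/>'
    b"</testsuite>"
)

req = urllib.request.Request(
    f"https://tms.onetest.ai/api/v1/products/{PRODUCT_UUID}/import/junit",
    data=junit_xml,
    method="POST",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/xml",  # assumed; may be multipart instead
    },
)
# urllib.request.urlopen(req)  # uncomment to send -- each API call costs 1 coin
```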

AI Assistant

What can the AI Assistant do?
The AI Assistant uses Claude (by Anthropic) to understand your testing needs and generate complete test cases. It can:
  • Generate single tests or entire test suites
  • Search existing tests using natural language
  • Answer questions about your product
  • Suggest test improvements
  • Extract test cases from requirements
It understands context:
  • Knows which product you’re working on
  • Sees your current page and selections
  • Learns from your existing tests
  • Adapts to your testing style
Is my data used to train AI models?
No. OneTest uses Claude’s API with strict data protections:
  • Your data is never used for training
  • Conversations are encrypted
  • Data stays in your region
  • You control data retention
See our Privacy Policy for details.
Can I customize the AI’s behavior?
Yes! You can customize:
  • Personality: Technical, friendly, concise, etc.
  • Detail level: Brief or comprehensive test cases
  • Naming style: Your preferred test naming conventions
  • Prompt templates: Save prompts for common scenarios
AI behavior is configured under Settings → LLM Configuration.
What if the AI generates incorrect tests?
The AI is very capable but not perfect. Always review generated tests:
  • Check steps make sense
  • Verify expected results
  • Add missing details
  • Adjust to your standards
Improve AI output:
  • Be specific in your prompts
  • Provide more context
  • Use examples
  • Give feedback on generated tests
The AI learns from your edits and improves over time.

OQL Query Language

OQL (OneTest Query Language) is a simple but powerful query language for finding tests, similar to JQL in JIRA.
Example:
status = active AND priority IN (p0, p1) AND
tags CONTAINS "smoke"
ORDER BY created_at DESC
See OQL Overview for details.
Do I need to learn OQL?
Not required, but recommended for power users:
  • Basic users: Use simple filters in the UI
  • Advanced users: Learn OQL for complex queries
  • AI users: Ask AI to write OQL for you!
Example with AI:
"Show me high priority tests that failed last week"
AI generates: priority IN (p0, p1) AND status = failed AND last_run >= -7d
Can I save queries?
Yes! Save frequently-used queries:
  1. Write your OQL query
  2. Click “Save Query”
  3. Give it a name (e.g., “Critical Failed Tests”)
  4. Access from “Saved Queries” dropdown
Saved queries are shared with your team.

Test Execution

How do I run tests?
Three ways to run tests:
  1. Individual test: Open test → Click “Run Test”
  2. Test run: Select multiple tests → Click “Create Run”
  3. Automated: Schedule tests or trigger from CI/CD
See Running Tests for details.
Does OneTest support both manual and automated testing?
Yes! OneTest supports:
  • Manual test execution (click through UI)
  • Automated test execution (API/Selenium/Playwright)
  • Hybrid approach (some manual, some automated)
For automation:
  • Mark tests as “automated”
  • Link to automation test ID
  • Execute via API
  • Track automated results
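As an illustration of the “execute via API” step, an automated result could be reported with a request like the sketch below. The /runs/{run_id}/results path and the payload fields are hypothetical, not taken from the docs above — consult the API reference for the real schema:

```python
import json
import urllib.request

API_KEY = "ak_YOUR_KEY"
RUN_ID = "run-123"  # hypothetical run identifier

# Hypothetical payload linking an automated result to a test.
payload = {
    "test_id": "TC-42",   # the linked automation test ID
    "status": "passed",
    "duration_ms": 1840,
}

req = urllib.request.Request(
    f"https://tms.onetest.ai/api/v1/runs/{RUN_ID}/results",  # assumed path
    data=json.dumps(payload).encode(),
    method="POST",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # each API call costs 1 coin
```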
How do I attach screenshots or other evidence?
During test execution:
  1. Execute a test step
  2. Click “Add Evidence”
  3. Upload screenshot or drag & drop
  4. Add optional description
  5. Continue with next step
Supported formats: PNG, JPEG, GIF, PDF
Storage: Stored in Artifacts service
Can my team execute tests in parallel?
Yes! Distribute tests across team members:
  1. Create a test run
  2. Assign different tests to different team members
  3. Everyone executes their assigned tests simultaneously
  4. Monitor progress on run dashboard
This significantly speeds up test execution.

Troubleshooting

Why is test execution slow?
Possible causes:
  • Too many steps per test
  • Complex test data setup
  • Slow environment
  • Network issues
Solutions:
  • Break large tests into smaller tests
  • Optimize test data
  • Use faster environment
  • Check network connectivity
  • Consider automation for repetitive tests
Why can’t I find my tests?
Try these:
  1. Check filters—clear all filters and try again
  2. Check folder—tests might be in subfolder
  3. Check status—test might be archived
  4. Use OQL: title ~ "your search term"
  5. Use AI: “Find tests related to login”
Why isn’t the AI Assistant responding?
Check:
  1. LLM Configuration — ensure you’ve set up an AI provider in Settings > LLM Configuration
  2. Internet connection
  3. Browser console for errors
  4. Try refreshing the page
  5. Check system status page
If problem persists, contact support.
How do I set up the AI Assistant?
The AI Assistant requires an LLM provider:
  1. Go to Settings > LLM Configuration
  2. Select a provider: Azure OpenAI, AWS Bedrock, OpenAI, or Anthropic
  3. Enter your API credentials
  4. Configure model tiers (High for reasoning, Default for general, Low for fast tasks)
  5. Click Test Connection then Save Configuration
See LLM Configuration for detailed instructions.
What is MCP and how do I use it?
MCP (Model Context Protocol) is a protocol that lets AI coding assistants (like Cursor, Windsurf, or Claude Code) connect to OneTest services directly from your editor. OneTest exposes MCP endpoints for each of its services (test management, artifacts, credpool, etc.). You configure your AI assistant to connect to these endpoints using your OneTest API key.
Setup options:
  1. Automatic (recommended): Run npx github:onetest-ai/qa-agent init and follow the browser-connect flow — it configures everything for you.
  2. Manual: Edit your .mcp.json file directly to add the OneTest MCP endpoints.
See MCP Connectivity for configuration details and endpoint reference.
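For the manual option, a .mcp.json entry might look like the sketch below. The server name, endpoint URL, and header-based auth shape are assumptions for illustration — the MCP Connectivity page has the authoritative endpoint reference:

```json
{
  "mcpServers": {
    "onetest-tms": {
      "url": "https://tms.onetest.ai/mcp",
      "headers": {
        "Authorization": "Bearer ak_YOUR_KEY"
      }
    }
  }
}
```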
How do I reset my password?
  1. Go to login page
  2. Click “Forgot Password?”
  3. Enter your email
  4. Check email for reset link
  5. Follow link to set new password
Or contact support if you need help.

Support

  • Email Support — artem@onetest.ai
  • Documentation — browse our comprehensive docs
  • GitHub Issues — report bugs or request features
  • Status Page — check system status

Still have questions?

Our team is here to help! Email us at artem@onetest.ai