Core Testing Principles
Test Early, Test Often
Catch bugs when they’re cheapest to fix: during development
Automate Wisely
Automate stable, repetitive tests. Keep exploratory testing manual.
Test the Right Things
Focus on critical functionality and user journeys, not every edge case
Make Tests Maintainable
Clear, simple tests are easier to maintain than clever, complex ones
Writing Effective Test Cases
- Clarity
- Scope
- Independence
- Data
Write Clear, Unambiguous Tests
Good test case characteristics:
- Specific: No room for interpretation
- Repeatable: Anyone can execute and get the same result
- Self-contained: Includes all necessary information
- Actionable: Clear steps to follow
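The four characteristics above can be illustrated with a minimal pytest-style sketch; `apply_discount` is a hypothetical function invented for the example.

```python
def apply_discount(price, percent):
    """Apply a percentage discount, rounding to two decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount_ten_percent():
    # Specific: exact input and exact expected output, no interpretation needed
    # Repeatable: no external state, so every run gives the same result
    # Self-contained: the test data lives inside the test itself
    # Actionable: a failure points directly at apply_discount
    assert apply_discount(100.00, 10) == 90.00
```

The test reads as a single sentence: given a price of 100.00 and a 10% discount, the result is 90.00.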
Test Organization
Use Meaningful Names
Test cases: Start with an action verb
- ✅ “Verify user can checkout with credit card”
- ❌ “Checkout test #42”
Folders: Use a logical hierarchy
- ✅ /Authentication/Login/
- ❌ /Tests/Stuff/More Stuff/
Tags: Keep naming consistent
- ✅ smoke,regression,critical
- ❌ smoke,Smoke,SMOKE,smoke-test
Prioritize Tests
Use priority levels:
- P0/Critical: Blocks releases if failing
- P1/High: Important functionality
- P2/Medium: Standard features
- P3/Low: Nice to have
- P4/Trivial: Optional
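One way to make these levels executable is to map them onto pytest markers, so CI can run only release-blocking tests with `pytest -m "p0 or p1"`. The marker names and the `USERS` dict below are illustrative assumptions, not a standard convention.

```python
import pytest

# Hypothetical stand-in for a real authentication backend.
USERS = {"alice": "s3cret"}


def authenticate(username, password):
    return USERS.get(username) == password


@pytest.mark.p0  # Critical: blocks the release if failing
def test_valid_login():
    assert authenticate("alice", "s3cret")


@pytest.mark.p2  # Medium: standard feature
def test_unknown_user_rejected():
    assert not authenticate("bob", "s3cret")
```

Registering the markers in `pytest.ini` avoids unknown-marker warnings and documents the priority scheme in one place.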
Track Coverage
Ensure coverage of:
- All critical user journeys
- All P0/P1 requirements
- Common error scenarios
- Edge cases for critical features
Automation Strategy
- What to Automate
- Automation Pyramid
- Maintenance
Automate These Tests
Good candidates for automation:
- ✅ Smoke tests (run after every deployment)
- ✅ Regression tests (run before release)
- ✅ API tests (fast, stable, repeatable)
- ✅ Data-driven tests (same steps, different data)
- ✅ Frequently run tests (>5 times per week)
- ✅ Stable tests (pass rate >95%)
Keep these manual:
- ❌ Exploratory testing
- ❌ Usability testing
- ❌ Visual design testing
- ❌ Tests that change frequently
- ❌ Rarely run tests
- ❌ Tests with complex verification
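Data-driven tests are a good automation candidate because the steps stay fixed while only the data varies; `pytest.mark.parametrize` expresses that directly. The `validate_email` function below is a deliberately simplified stand-in for a real validator.

```python
import re

import pytest


def validate_email(address):
    # Simplified check: one "@", no whitespace, and a dot in the domain.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None


@pytest.mark.parametrize("address,expected", [
    ("user@example.com", True),
    ("no-at-sign.example.com", False),
    ("user@", False),
])
def test_validate_email(address, expected):
    # Same steps, different data: one test function, three executed cases.
    assert validate_email(address) == expected
```

Each row becomes its own test case in the report, so a failing input is identified immediately.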
Test Execution
When to Run Tests
Smoke tests: After every deployment
- Quick verification (15-30 min)
- Critical paths only
- Blocks further testing if fails
Regression tests: Before every release
- Comprehensive validation (2-4 hours)
- All important features
- Blocks release if P0/P1 fail
Exploratory testing: Ongoing
- Ad-hoc testing
- New features
- Edge cases
- Unusual scenarios
Automated tests: In CI/CD
- On every commit (unit tests)
- Nightly (full regression)
- Pre-merge (affected tests)
Test Environments
Environment strategy:
- Development: Developer local testing
- QA/Test: QA team testing
- Staging: Pre-production validation
- Production: Smoke tests only
Best practices:
- Keep staging identical to production
- Use production-like data (anonymized)
- Isolate test data from production
- Refresh test environments regularly
- Document environment differences
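The environment strategy above can be captured as data, so a test runner can check which suites are allowed where instead of relying on tribal knowledge. The structure and field names below are illustrative, not from any real tool.

```python
# Hypothetical environment policy: which suites run where, and what data
# each environment uses. Production gets smoke tests only, per the strategy.
ENVIRONMENTS = {
    "development": {"suites": {"unit"}, "data": "synthetic"},
    "qa":          {"suites": {"smoke", "regression"}, "data": "anonymized"},
    "staging":     {"suites": {"smoke", "regression"}, "data": "anonymized"},
    "production":  {"suites": {"smoke"}, "data": "live"},
}


def allowed(env, suite):
    """Return True if the given suite may run in the given environment."""
    return suite in ENVIRONMENTS[env]["suites"]
```

A CI job can call `allowed("production", "regression")` as a guard and refuse to run a suite in the wrong environment.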
Handling Failures
When a test fails:
1. Reproduce: Can you reproduce the failure?
   - Yes → Investigate
   - No → Might be flaky, investigate further
2. Classify: What type of failure?
   - Real bug → Create defect
   - Test issue → Fix the test
   - Environmental → Check environment
   - Known issue → Link to existing bug
3. Document: Add details:
   - Screenshots
   - Logs
   - Steps to reproduce
   - Environment details
4. Act: Take appropriate action:
   - Block release if P0/P1
   - Assess risk if P2/P3
   - Fix test if test issue
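The triage flow above can be sketched as a small decision function. The category strings and the dict-based failure record are assumptions made for the example, not fields from any real test-management tool.

```python
def triage(failure):
    """Return the action to take for a failed test result (a dict)."""
    # Step 1: Reproduce
    if not failure["reproducible"]:
        return "investigate flakiness"
    # Step 2: Classify
    if failure["cause"] == "test issue":
        return "fix the test"
    if failure["cause"] == "environmental":
        return "check environment"
    if failure["cause"] == "known issue":
        return "link to existing bug"
    # Step 4: Act on a real bug; priority decides whether it blocks release
    if failure["priority"] in ("P0", "P1"):
        return "create defect and block release"
    return "create defect and assess risk"
```

Step 3 (documenting screenshots, logs, and environment details) happens regardless of branch, which is why it is not a decision point in the sketch.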
Team Collaboration
Shared Responsibility
Everyone owns quality: devs write unit tests, QA writes integration tests, product defines acceptance criteria
Clear Communication
Use test results to communicate: pass rate, coverage, trends, risks
Review Tests
Review test cases like code: check for clarity, completeness, maintainability
Share Knowledge
Document testing strategies, share test patterns, train new team members
Common Anti-Patterns
Testing Everything
Problem: Trying to test every possible scenario
Why it’s bad: Wastes time, most tests add little value
Solution: Focus on:
- Critical user journeys
- High-risk areas
- Recently changed code
- Areas with frequent bugs
Test Automation for Everything
Problem: Automating all tests blindly
Why it’s bad: Some tests cost more to automate than to run manually
Solution: Automate selectively based on:
- Frequency of execution
- Stability of feature
- Cost of automation vs manual
- ROI of automation
Ignoring Flaky Tests
Problem: “Oh, that test is flaky, just re-run it”
Why it’s bad: Erodes trust in the test suite, masks real issues
Solution:
- Fix flaky tests immediately
- If can’t fix, remove from suite
- Never accept “sometimes it fails”
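Instead of silently re-running, flakiness can be measured: re-run the test many times and compute its pass rate against the >95% stability bar mentioned earlier. Everything below is an illustrative sketch; `flaky_check` simulates a nondeterministic test.

```python
import random


def pass_rate(test_fn, runs=200):
    """Run a boolean test function repeatedly and return its pass rate."""
    passes = sum(1 for _ in range(runs) if test_fn())
    return passes / runs


def flaky_check():
    # Simulated flaky test: passes roughly 90% of the time.
    return random.random() > 0.10


if pass_rate(flaky_check) < 0.95:
    print("flaky: quarantine or fix this test before trusting it")
```

A test flagged this way should be fixed or removed from the suite, never left to erode trust in the results.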
No Test Maintenance
Problem: Writing tests but never updating them
Why it’s bad: Tests become outdated, irrelevant, or broken
Solution:
- Review tests quarterly
- Update for product changes
- Remove obsolete tests
- Fix broken tests immediately
Testing Too Late
Problem: Only testing at the end of development
Why it’s bad: Bugs are expensive to fix late in the cycle
Solution:
- Test during development
- Write tests first (TDD)
- Review requirements before coding
- Continuous testing in CI/CD
Metrics That Matter
Track these metrics to improve testing:
- Pass Rate: Target >95% for the regression suite
- Coverage: % of features with tests
- Defect Detection Rate: Bugs found in test vs production
- Cycle Time: Time from code change to fully tested
- Automation Rate: % of tests automated
- Flaky Test Rate: Target <2%
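Two of these metrics can be computed directly from a list of result records; the record shape and field names below are illustrative.

```python
# Hypothetical test-run results; a real list would come from your runner.
results = [
    {"test": "login",    "passed": True,  "flaky": False},
    {"test": "checkout", "passed": True,  "flaky": True},
    {"test": "search",   "passed": False, "flaky": False},
]


def pass_rate(rows):
    """Fraction of tests that passed in this run."""
    return sum(r["passed"] for r in rows) / len(rows)


def flaky_rate(rows):
    """Fraction of tests flagged as flaky."""
    return sum(r["flaky"] for r in rows) / len(rows)
```

Tracking these as trends over time, rather than single snapshots, is what makes them actionable.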
Further Reading
- Test Management: Learn about OneTest’s features
- Smoke Testing: Critical path testing workflow
- Regression Testing: Comprehensive testing workflow
- Team Collaboration: Work together on testing
Recommended Books
Lessons Learned in Software Testing
By Cem Kaner, James Bach, and Bret Pettichord. Classic book with 293 lessons about software testing.
Agile Testing
By Lisa Crispin and Janet Gregory. Testing in agile environments.
Explore It!
By Elisabeth Hendrickson. Exploratory testing techniques.
The Art of Software Testing
By Glenford J. Myers. Fundamentals of software testing (classic).

