This page is under construction. More content coming soon!

Core Testing Principles

Test Early, Test Often

Catch bugs when they’re cheapest to fix—during development

Automate Wisely

Automate stable, repetitive tests. Keep exploratory testing manual.

Test the Right Things

Focus on critical functionality and user journeys, not every edge case

Make Tests Maintainable

Clear, simple tests are easier to maintain than clever, complex ones

Writing Effective Test Cases

Write Clear, Unambiguous Tests

Good test case characteristics:
  • Specific: No room for interpretation
  • Repeatable: Anyone can execute and get same result
  • Self-contained: Includes all necessary information
  • Actionable: Clear steps to follow
Example:

**Title**: Verify user can login with valid credentials

**Preconditions**:
- User account exists: test@example.com / Password123!
- User is not currently logged in

**Steps**:
1. Navigate to https://app.example.com/login
   → Login page displays with email and password fields

2. Enter email: test@example.com
   → Email field accepts input

3. Enter password: Password123!
   → Password is masked with dots

4. Click "Login" button
   → User is redirected to dashboard at /dashboard
   → Welcome message displays: "Welcome, Test User"
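A test case written this specifically maps almost line-for-line onto automation. A minimal pytest-style sketch, using a hypothetical `login` function as a stand-in for driving the real browser or API:

```python
# Hypothetical stand-in for the application under test; a real version
# would drive a browser or call the login endpoint.
def login(email: str, password: str) -> dict:
    valid = {"test@example.com": "Password123!"}
    if valid.get(email) == password:
        return {"redirect": "/dashboard", "message": "Welcome, Test User"}
    return {"redirect": "/login", "error": "Invalid credentials"}

def test_user_can_login_with_valid_credentials():
    # Steps 1-3: navigate to the login page and enter credentials (stubbed)
    result = login("test@example.com", "Password123!")
    # Step 4 expected results: redirect to dashboard, welcome message shown
    assert result["redirect"] == "/dashboard"
    assert result["message"] == "Welcome, Test User"

def test_login_rejected_with_wrong_password():
    result = login("test@example.com", "wrong-password")
    assert result["redirect"] == "/login"
```

Note how each expected result (`→` line) becomes one assertion; that is what "specific" and "actionable" buy you.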

Test Organization

Test cases: Start with action verb
  • ✅ “Verify user can checkout with credit card”
  • ❌ “Checkout test #42”
Folders: Group by feature or user journey
  • ✅ /Authentication/Login/
  • ❌ /Tests/Stuff/More Stuff/
Tags: Be consistent
  • ✅ smoke, regression, critical
  • ❌ smoke, Smoke, SMOKE, smoke-test
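Tag drift like `smoke`/`Smoke`/`smoke-test` is easy to guard against in tooling. A small sketch (the canonical tag set here is illustrative):

```python
# Illustrative canonical tag set; unknown tags pass through unchanged
# so they can be flagged in review rather than silently renamed.
CANONICAL_TAGS = {"smoke", "regression", "critical"}

def normalize_tag(tag: str) -> str:
    """Map inconsistent variants (Smoke, SMOKE, smoke-test) to one canonical form."""
    cleaned = tag.strip().lower().removesuffix("-test")
    return cleaned if cleaned in CANONICAL_TAGS else tag
```

Running this in a pre-commit hook or import step keeps the tag vocabulary from fragmenting.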
Use priority levels:
  • P0/Critical: Blocks releases if failing
  • P1/High: Important functionality
  • P2/Medium: Standard features
  • P3/Low: Nice to have
  • P4/Trivial: Optional
Run tests by priority:
priority = p0  # Run first
priority IN (p0, p1)  # Smoke suite
priority IN (p0, p1, p2)  # Full regression
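These priority filters are straightforward to mirror in code when assembling suites locally. A sketch, assuming tests are represented as simple dicts with a `priority` field:

```python
def select_suite(tests, priorities):
    """Return the tests whose priority is in the given set,
    e.g. {'p0', 'p1'} for the smoke suite."""
    return [t for t in tests if t["priority"] in priorities]

# Illustrative test inventory
tests = [
    {"name": "login", "priority": "p0"},
    {"name": "checkout", "priority": "p1"},
    {"name": "profile photo upload", "priority": "p3"},
]

smoke = select_suite(tests, {"p0", "p1"})          # smoke suite
regression = select_suite(tests, {"p0", "p1", "p2"})  # full regression
```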
Ensure coverage of:
  • All critical user journeys
  • All P0/P1 requirements
  • Common error scenarios
  • Edge cases for critical features
Use OQL to find gaps:
# High priority features without tests
requirements ~ "US-" AND priority IN (p0, p1) AND test_count = 0

# Features not tested recently
status = active AND last_run < -30d AND priority IN (p0, p1)
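The same gap checks can be prototyped outside OQL. A sketch over plain dicts, with hypothetical `id`, `test_count`, and `last_run` fields mirroring the queries above:

```python
from datetime import date, timedelta

def untested_high_priority(requirements):
    """High-priority user stories ("US-" ids) with no linked tests."""
    return [r for r in requirements
            if r["id"].startswith("US-")
            and r["priority"] in ("p0", "p1")
            and r["test_count"] == 0]

def stale_tests(tests, days=30, today=None):
    """Active P0/P1 tests not run within the last `days` days."""
    cutoff = (today or date.today()) - timedelta(days=days)
    return [t for t in tests
            if t["status"] == "active"
            and t["priority"] in ("p0", "p1")
            and t["last_run"] < cutoff]
```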

Automation Strategy

Automate These Tests

Good candidates for automation:
  • ✅ Smoke tests (run after every deployment)
  • ✅ Regression tests (run before release)
  • ✅ API tests (fast, stable, repeatable)
  • ✅ Data-driven tests (same steps, different data)
  • ✅ Tests run frequently (>5 times per week)
  • ✅ Stable tests (pass rate >95%)
Keep these manual:
  • ❌ Exploratory testing
  • ❌ Usability testing
  • ❌ Visual design testing
  • ❌ Tests that change frequently
  • ❌ Tests run rarely
  • ❌ Tests with complex verification
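The automate-vs-manual criteria above can be folded into a rough decision rule. A sketch whose thresholds come straight from the lists (">5 runs per week", ">95% pass rate") but are otherwise illustrative:

```python
def should_automate(runs_per_week: int, pass_rate: float,
                    changes_often: bool, needs_human_judgment: bool) -> bool:
    """Rough heuristic mirroring the candidate lists above.
    Thresholds are illustrative, not prescriptive."""
    # Exploratory, usability, and visual testing need human judgment
    if needs_human_judgment or changes_often:
        return False
    # Frequent and stable tests are the best automation candidates
    return runs_per_week > 5 and pass_rate > 0.95
```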

Test Execution

Smoke tests: After every deployment
  • Quick verification (15-30 min)
  • Critical paths only
  • Blocks further testing if it fails
Regression tests: Before releases
  • Comprehensive validation (2-4 hours)
  • All important features
  • Blocks release if P0/P1 fail
Exploratory tests: Ongoing
  • Ad-hoc testing
  • New features
  • Edge cases
  • Unusual scenarios
Automated tests: Continuously
  • On every commit (unit tests)
  • Nightly (full regression)
  • Pre-merge (affected tests)
Environment strategy:
  1. Development: Developer local testing
  2. QA/Test: QA team testing
  3. Staging: Pre-production validation
  4. Production: Smoke tests only
Best practices:
  • Keep staging identical to production
  • Use production-like data (anonymized)
  • Isolate test data from production
  • Refresh test environments regularly
  • Document environment differences
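For "production-like data (anonymized)", one common approach is stable pseudonymization: the same real value always maps to the same fake one, so anonymized records stay joinable across refreshes. A sketch for email addresses (the `user-` naming is illustrative):

```python
import hashlib

def anonymize_email(email: str) -> str:
    """Replace a real address with a stable pseudonym on a safe test domain."""
    local, _, _domain = email.partition("@")
    # Same input always yields the same pseudonym, so relationships survive
    digest = hashlib.sha256(local.encode()).hexdigest()[:8]
    return f"user-{digest}@example.com"
```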
When a test fails:
  1. Reproduce: Can you reproduce the failure?
    • Yes → Investigate
    • No → Might be flaky, investigate further
  2. Classify: What type of failure?
    • Real bug → Create defect
    • Test issue → Fix the test
    • Environmental → Check environment
    • Known issue → Link to existing bug
  3. Document: Add details:
    • Screenshots
    • Logs
    • Steps to reproduce
    • Environment details
  4. Act: Take appropriate action:
    • Block release if P0/P1
    • Assess risk if P2/P3
    • Fix test if test issue
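The triage flow above can be encoded so every failure gets a consistent disposition. A sketch with illustrative classification and action labels:

```python
def triage(reproducible: bool, classification: str, priority: str) -> str:
    """Map the Reproduce -> Classify -> Act flow to an action label."""
    if not reproducible:
        return "investigate-flakiness"
    actions = {
        "real-bug": "create-defect",
        "test-issue": "fix-test",
        "environmental": "check-environment",
        "known-issue": "link-existing-bug",
    }
    action = actions[classification]
    # A reproducible P0/P1 bug blocks the release
    if classification == "real-bug" and priority in ("p0", "p1"):
        return action + ";block-release"
    return action
```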

Team Collaboration

Shared Responsibility

Everyone owns quality: devs write unit tests, QA writes integration tests, product defines acceptance criteria

Clear Communication

Use test results to communicate: pass rate, coverage, trends, risks

Review Tests

Review test cases like code: check for clarity, completeness, maintainability

Share Knowledge

Document testing strategies, share test patterns, train new team members

Common Anti-Patterns

Avoid these testing mistakes:
Problem: Trying to test every possible scenario
Why it’s bad: Wastes time, most tests add little value
Solution: Focus on:
  • Critical user journeys
  • High-risk areas
  • Recently changed code
  • Areas with frequent bugs
Problem: Automating all tests blindly
Why it’s bad: Some tests cost more to automate than to run manually
Solution: Automate selectively based on:
  • Frequency of execution
  • Stability of feature
  • Cost of automation vs manual
  • ROI of automation
Problem: “Oh, that test is flaky, just re-run it”
Why it’s bad: Erodes trust in test suite, masks real issues
Solution:
  • Fix flaky tests immediately
  • If can’t fix, remove from suite
  • Never accept “sometimes it fails”
Problem: Writing tests but never updating them
Why it’s bad: Tests become outdated, irrelevant, or broken
Solution:
  • Review tests quarterly
  • Update for product changes
  • Remove obsolete tests
  • Fix broken tests immediately
Problem: Only testing at the end of development
Why it’s bad: Bugs are expensive to fix late in the cycle
Solution:
  • Test during development
  • Write tests first (TDD)
  • Review requirements before coding
  • Continuous testing in CI/CD

Metrics That Matter

Track these metrics to improve testing:

Pass Rate

Target: >95% for regression suite

Coverage

% of features with tests

Defect Detection Rate

Bugs found in test vs production

Cycle Time

Time from code complete to fully tested

Automation Rate

% of tests automated

Flaky Test Rate

Target: <2%
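Most of these metrics fall out of a single pass over run results. A sketch, assuming each result records a status and whether the test is automated (field names are illustrative):

```python
def metrics(results):
    """Compute pass, flaky, and automation rates from run results.
    Each result: {"status": "pass" | "fail" | "flaky", "automated": bool}."""
    total = len(results)
    passed = sum(r["status"] == "pass" for r in results)
    flaky = sum(r["status"] == "flaky" for r in results)
    automated = sum(r["automated"] for r in results)
    return {
        "pass_rate": passed / total,       # target: > 0.95 for regression
        "flaky_rate": flaky / total,       # target: < 0.02
        "automation_rate": automated / total,
    }
```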

Further Reading

*Lessons Learned in Software Testing* by Cem Kaner, James Bach, and Bret Pettichord
Classic book with 293 lessons about software testing

*Agile Testing* by Lisa Crispin and Janet Gregory
Testing in agile environments

*Explore It!* by Elisabeth Hendrickson
Exploratory testing techniques

*The Art of Software Testing* by Glenford J. Myers
Fundamentals of software testing (classic)