Getting started
Overview
What is Test Case Management?
A complete system for organizing, executing, and tracking your testing activities.
- Create and manage test cases (regular and cross-platform)
- Organize tests into executable test suites
- Run interactive test sessions with real-time tracking
- Capture evidence with screenshots and notes
- Generate execution reports and analytics
- Track test history and trends over time
Regular Test Cases
Traditional single-platform test cases with detailed steps.
- Standard test execution flow
- Detailed step-by-step instructions
- Single platform/environment
- Manual or automated execution
Cross-Platform Tests
Tests designed for multiple platforms from one requirement.
- Web, Mobile, API, Accessibility, Performance
- Platform-specific adaptations
- Approval workflow for generated tests
- Framework-tagged for filtering
Complete workflow
Generate test cases → Organize into suites → Execute in sessions → Review results → Generate reports. Each step builds on the previous one.
Foundation
Managing Test Cases
Test case types
Understanding regular vs. cross-platform test cases.
Structure:
- Title and description
- Test type (functional, security, etc.)
- Priority (critical, high, medium, low)
- Detailed test steps with actions and expectations
- Expected result
- Prerequisites and test data
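The structure above can be sketched as a simple data model. This is an illustrative sketch only; the field names (`test_type`, `expected_result`, and so on) mirror the list above, not the product's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str        # what the tester does
    expectation: str   # what should happen after the action

@dataclass
class TestCase:
    title: str
    description: str
    test_type: str     # e.g. "functional", "security"
    priority: str      # "critical" | "high" | "medium" | "low"
    steps: list[TestStep] = field(default_factory=list)
    expected_result: str = ""
    prerequisites: list[str] = field(default_factory=list)

# Example: a minimal functional test case
login = TestCase(
    title="Login with valid credentials",
    description="Verify a registered user can sign in",
    test_type="functional",
    priority="critical",
    steps=[
        TestStep("Enter valid email and password", "Fields accept input"),
        TestStep("Click Sign In", "Dashboard loads"),
    ],
    expected_result="User lands on the dashboard",
)
```

Keeping each step as an action/expectation pair makes it easy to mark steps individually during execution.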
AI-generated
Most regular test cases are generated from requirements using the AI Generator, though you can create them manually too.
Organization
Test Suites
What are test suites?
Collections of test cases grouped for execution and tracking.
Key benefits:
- Execute multiple related tests in one session
- Track execution progress and results
- Organize tests by testing phase or feature
- Schedule and plan test execution
- Generate suite-level reports
Manual
Interactive test execution with step-by-step tracking
Regression
Validate existing functionality after changes
Smoke
Quick validation of critical paths
Integration
Test component interactions and data flow
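A suite is essentially a named, ordered collection of test cases plus planning metadata. A minimal sketch, with hypothetical field names (`suite_type`, `estimated_minutes`) chosen to match the suite types and benefits described above:

```python
from dataclasses import dataclass, field

@dataclass
class TestSuite:
    name: str
    suite_type: str    # "manual" | "regression" | "smoke" | "integration"
    test_case_ids: list[str] = field(default_factory=list)
    estimated_minutes: int = 0   # helps schedule and plan execution

# Example: a small smoke suite covering critical checkout paths
smoke = TestSuite(
    name="Checkout smoke",
    suite_type="smoke",
    test_case_ids=["TC-101", "TC-102", "TC-105"],
    estimated_minutes=15,
)
```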
Execution
Running Tests
Test execution workflow
Interactive test execution with step-by-step tracking.
The execution process:
- Start a test session from a suite
- Execute tests one at a time
- Mark steps as completed
- Record Pass/Fail/Blocked/Skipped results
- Add notes and capture evidence
- Track overall session progress
- Generate execution reports
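The result-recording step can be sketched as follows. The four statuses come from the list above; the in-memory `record_result` helper is illustrative, not the product's API:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"
    SKIPPED = "skipped"

def record_result(results: dict, test_id: str, result: Result, note: str = "") -> None:
    """Store the outcome and optional note for one test in the current session."""
    results[test_id] = {"result": result, "note": note}

# Record outcomes as tests are executed one at a time
session_results: dict = {}
record_result(session_results, "TC-101", Result.PASS)
record_result(session_results, "TC-102", Result.FAIL, "Button unresponsive on step 3")
```

Attaching the failure reason at recording time, rather than reconstructing it later, is what makes the session report useful.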
Session-based execution
All test runs happen within a session. Sessions track progress, timing, environment, and results for reporting.
Tracking
Test Sessions
Understanding test sessions
Sessions are execution containers that track all testing activity.
What sessions capture:
- Which test suite was executed
- When execution started and ended
- Who executed the tests
- Environment tested (staging, production, etc.)
- Individual test results
- Overall pass/fail statistics
- Notes and evidence captured
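The overall pass/fail statistics a session reports can be derived from the individual results. A sketch of that rollup, assuming (as a convention, not a documented rule) that blocked and skipped tests are excluded from the pass-rate denominator:

```python
from collections import Counter

def session_stats(results: dict[str, str]) -> dict:
    """Summarize per-test outcomes into overall session statistics."""
    counts = Counter(results.values())
    executed = counts["pass"] + counts["fail"]   # blocked/skipped are not executed
    pass_rate = counts["pass"] / executed if executed else 0.0
    return {"counts": dict(counts), "pass_rate": pass_rate}

stats = session_stats({
    "TC-101": "pass",
    "TC-102": "fail",
    "TC-103": "pass",
    "TC-104": "blocked",
})
# 2 passed out of 3 executed → pass rate ≈ 0.67
```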
Documentation
Test Evidence
Capturing test evidence
Document test execution with screenshots and detailed notes.
Why capture evidence:
- Prove the test was executed correctly
- Document bugs visually for developers
- Meet compliance and audit requirements
- Help reproduce issues later
- Improve defect reports
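Each piece of evidence ties back to a specific test result. A minimal sketch of that link, with illustrative field names (the `kind`/`content` split is an assumption, not the product's storage model):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    test_id: str       # which test this evidence belongs to
    kind: str          # "screenshot" | "note"
    content: str       # file path for screenshots, free text for notes

# Example: a screenshot attached to a failed test
bug_shot = Evidence("TC-102", "screenshot", "evidence/tc-102-failure.png")
note = Evidence("TC-102", "note", "Submit button stays disabled after form is valid")
```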
Insights
Reports & Analytics
Test execution analytics
Track testing progress, trends, and quality metrics.
Available reports:
- Suite Reports: Pass rate, trends, and execution history per suite
- Session Details: Individual session results and statistics
- Test Coverage: Which tests have been executed
- Failure Analysis: Common failure patterns and reasons
- Execution Trends: Pass rates over time
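The Execution Trends report boils down to computing each session's pass rate in chronological order. A sketch of that calculation, using a hypothetical per-session record with `passed`/`failed` counts:

```python
def pass_rate_trend(sessions: list[dict]) -> list[float]:
    """Return the pass rate of each session, oldest first."""
    trend = []
    for s in sessions:
        total = s["passed"] + s["failed"]
        trend.append(s["passed"] / total if total else 0.0)
    return trend

# Three sessions over time: quality improving from 80% to 100%
history = [
    {"passed": 8, "failed": 2},
    {"passed": 9, "failed": 1},
    {"passed": 10, "failed": 0},
]
```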
Quality
Best Practices
Effective test execution
Strategies for high-quality, efficient testing.
✅ Do
- Prepare test environment before starting
- Capture screenshots for failures
- Provide detailed failure reasons
- Execute in a clean environment
- Follow test steps exactly
- Use keyboard shortcuts for speed
❌ Avoid
- Marking tests without executing
- Vague failure reasons
- Testing in contaminated environments
- Deviating from test steps
- Skipping evidence capture
- Abandoning sessions without pausing
Test suite organization
Structure suites for maximum efficiency.
Suite design principles:
- Keep suites focused: 10-30 tests per suite for manageable sessions
- Order logically: Group related tests, dependencies first
- Set realistic estimates: Accurate duration helps planning
- Use priority: Mark critical tests as high priority
- Separate by type: Don't mix smoke and regression in one suite
Evidence capture guidelines
When and what to document during execution.
Always capture for:
- Failed tests (show what went wrong)
- Blocked tests (prove the blocker exists)
- Unexpected behavior (even if test passes)
- Security testing (document vulnerabilities)
- Compliance testing (audit requirements)
Optional for:
- Passed tests (unless required by policy)
- Routine regression tests
- Tests without UI (API tests)
Session management
Optimize your test execution workflow.
- Schedule dedicated testing time (avoid interruptions)
- Use auto-advance for smooth flow through similar tests
- Disable auto-advance for complex or critical tests
- Pause sessions when blocking issues are found
- Complete sessions the same day when possible (maintain context)
- Review session stats before closing
Help
Support
Getting help
Resources for test management assistance.
Contact options:
- Email: support@synthqa.app
- Use the feedback button in your dashboard
- Visit our knowledge base for detailed guides
When reporting execution issues:
- Session ID (visible in URL or session details)
- Suite name and test case title
- What action you were trying to perform
- Screenshots if UI is behaving unexpectedly
- Browser console errors (F12)
Last updated: January 2026 · Guide version: 1.0