Overview
- Group test cases, requirements, suites, and templates
- Filter and view work by project across the platform
- Track project-level statistics and progress
- Customize with colors and icons for visual identification
- Manage project lifecycle from planning to completion
- Archive old projects while preserving historical data
Creating Projects
Customization
Organization
Project Lifecycle
Project Dashboard
Click any project to open its dashboard. You'll see a live snapshot of everything happening across manual and automated test runs for that project.
- KPI cards for total executions, pass rate, failures, and artifact counts
- Execution trend chart covering the last 30 days — both manual and automation runs
- Top problem tests ranked by failure count and flakiness score
- Test suites summary with pass rates and last run dates
- Quick links to requirements, test cases, and failed tests
- Executions: Total test executions in the last 30 days, combining manual runs, automated test executions, and automation suite runs. Shows the combined pass rate and average duration. The automation run count appears as a violet annotation when automation data is present.
- Passed: Total passing executions, with the failed count in the subtitle. If automation runs are present, a violet "Auto: X/Y" note shows the automation pass rate separately (one way to derive these figures is sketched after this list).
- Blocked / Skipped: Combined blocked and skipped executions, broken down individually in the subtitle.
- Contents: Counts of test suites, total test cases, requirements, and templates attached to this project.
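If you export raw execution counts and want to reproduce the card values yourself, the aggregation is straightforward. The sketch below is a minimal illustration only: the `ExecutionCounts` shape, its field names, and the `summarizeKpis` helper are assumptions made for this example, not the platform's actual API.

```typescript
// Illustrative only: these types and field names are assumptions, not the platform's API.
interface ExecutionCounts {
  passed: number;
  failed: number;
  blocked: number;
  skipped: number;
}

// Each card can be derived from two sets of counts: manual runs and automation runs.
const total = (c: ExecutionCounts) => c.passed + c.failed + c.blocked + c.skipped;

function summarizeKpis(manual: ExecutionCounts, automation: ExecutionCounts) {
  const manualTotal = total(manual);
  const autoTotal = total(automation);
  const combinedTotal = manualTotal + autoTotal;

  return {
    // "Executions" card: all runs in the window, with one combined pass rate.
    totalExecutions: combinedTotal,
    combinedPassRatePct:
      combinedTotal === 0
        ? 0
        : Math.round(((manual.passed + automation.passed) / combinedTotal) * 100),
    // "Passed" card: failed count for the subtitle, plus the violet automation note.
    passed: manual.passed + automation.passed,
    failed: manual.failed + automation.failed,
    autoNote: `Auto: ${automation.passed}/${autoTotal}`,
    // "Blocked / Skipped" card: combined figure, broken down in the subtitle.
    blockedOrSkipped:
      manual.blocked + automation.blocked + manual.skipped + automation.skipped,
  };
}

// Example: 40 manual and 60 automation executions, 87 passing overall.
const summary = summarizeKpis(
  { passed: 35, failed: 3, blocked: 1, skipped: 1 },
  { passed: 52, failed: 6, blocked: 0, skipped: 2 },
);
// summary.combinedPassRatePct === 87, summary.autoNote === "Auto: 52/60"
```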
The execution trend chart plots four lines over the last 30 days, covering both manual and automation runs.
The problem tests panel surfaces the most frequently failing tests from the last 30 days. It covers three sources:
- Test cases: Manual and automated executions linked to your standard test cases. Shows the failure count, priority badge, and flakiness percentage (one possible flakiness calculation is sketched after this list).
- Cross-platform tests: Failing Web, Mobile, API, and Accessibility tests, with the same failure count and flakiness metrics.
- Automation suites: Suite-level failures from the Automation Hub where individual test breakdowns aren't available. Shown with a violet "automation" badge and a link to the Automation Hub; displays the suite name, framework, and branch alongside the failed test count.
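The exact flakiness formula isn't spelled out here, so the sketch below uses one common convention: the share of consecutive runs whose outcome flipped between pass and fail. The `TestHistory` type, its field names, and the tie-break rule in the ranking are assumptions for illustration, not the platform's documented algorithm.

```typescript
// Hypothetical sketch: the platform's actual flakiness formula is not documented here.
type Outcome = "passed" | "failed";

interface TestHistory {
  name: string;
  recentOutcomes: Outcome[]; // last 30 days of results, oldest first
}

// One common flakiness metric: the percentage of consecutive runs whose outcome flipped.
function flakinessPercent(outcomes: Outcome[]): number {
  if (outcomes.length < 2) return 0;
  let flips = 0;
  for (let i = 1; i < outcomes.length; i++) {
    if (outcomes[i] !== outcomes[i - 1]) flips++;
  }
  return Math.round((flips / (outcomes.length - 1)) * 100);
}

// Rank by failure count, breaking ties by flakiness (the panel's exact tie-break is assumed).
function rankProblemTests(histories: TestHistory[]) {
  return histories
    .map((h) => ({
      name: h.name,
      failures: h.recentOutcomes.filter((o) => o === "failed").length,
      flakiness: flakinessPercent(h.recentOutcomes),
    }))
    .filter((t) => t.failures > 0)
    .sort((a, b) => b.failures - a.failures || b.flakiness - a.flakiness);
}

// Example: a test that alternates pass/fail scores as highly flaky.
rankProblemTests([
  { name: "checkout total", recentOutcomes: ["passed", "failed", "passed", "failed"] },
  { name: "login smoke", recentOutcomes: ["failed", "failed", "failed", "failed"] },
]);
// "login smoke" ranks first (4 failures, 0% flaky); "checkout total" has 2 failures, 100% flaky.
```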
The dashboard shows the five most recent suites. Each entry shows the suite name, test case count, pass rate, and when it was last run. Click "View All" to open the full Test Library filtered to this project.
Filtering & Search
Best Practices
Do:
- Create projects for distinct products/features
- Use clear, descriptive names
- Choose consistent color/icon schemes
- Archive completed projects regularly
- Update status as projects progress
- Add meaningful descriptions
Avoid:
- Too many small, granular projects
- Generic names like "Project 1"
- Random color choices without system
- Leaving all projects as "Active" forever
- Creating projects for temporary work
- Forgetting to archive old projects
One project per product or application:
Best for: Teams with multiple distinct products
Create a new project when:
- Testing a new product or major feature that will have ongoing work
- Starting a distinct testing initiative (e.g., security audit)
- Managing work that spans multiple sprints or releases
- Needing to separate reporting and metrics by product/area
Don't create a new project when:
- Testing a small bug fix or minor feature (use an existing project)
- Work is temporary or one-time (doesn't need long-term organization)
- You're just starting and have only a few test cases
- The work fits well within an existing project
Maintenance cadence:
- Monthly: Review project statuses and update them to reflect the current state
- Quarterly: Archive completed projects and review whether every project is still needed
- After releases: Mark projects as completed or archived
- As needed: Update descriptions when project scope changes
Frequently Asked Questions
Support
- Email: support@synthqa.app
- Use the feedback button in your dashboard
- Visit our knowledge base for examples
When contacting support:
- Describe your team structure and testing needs
- Share your current project organization challenges
- Explain what you're trying to achieve
- Include screenshots if reporting UI issues