Test Manager user guide
Use Autopilot Search directly inside the chat by typing full, partial, or fuzzy terms for what you are looking for. The agent retrieves all matching objects. If you make a typo, the chat auto-corrects common mistakes.
After results appear, you can expand them or open the filtered view in the artifact table, without leaving the chat.
Query examples:
- Find all requirements where custom field Sprint is set to 123.
- Find all the test cases that failed in the last 5 days.
Use the Autopilot Charts functionality to generate a visual representation of your data in the form of bar charts, line charts, and pie charts.
Query examples:
- Show me attachment distribution by file types.
- How many requirements are fully, partially, or not tested?
- Show failed test cases grouped by requirement.
- Show failed tests without linked defects grouped by requirements.
- Show test case distribution by label.
- Show distribution of requirements with custom field 'Sprint'.
- Which test sets take the longest to execute?
- Show me trends of test cases that haven’t been executed recently.
Use the Evaluate Quality functionality to assess the clarity, completeness, and testability of requirements, ensuring higher-quality inputs before test design begins.
To invoke a requirement evaluation from Autopilot Chat, reference the requirement by name or ID. Autopilot Chat understands the context, opens the Requirement Evaluation interface, and displays a detailed analysis with improvement suggestions.
After the requirement evaluation is completed, review the outcome, check the identified issues, and refine the requirement without leaving the chat.
Query examples:
- Evaluate the quality of the ‘Submit Loan’ requirement.
- Evaluate the quality of UIB:24.
- Enter a query indicating which requirement needs to be evaluated.
Figure 3. Autopilot Chat - Evaluate Requirement query
- Select Configure and edit the fields: add documents or select which documents to include in the analysis, add or edit a prompt, select an AI model, and then either Accept or Reject the operation.
Figure 4. Autopilot Chat - Evaluate Requirement configuration
- If you select Reject, the operation is not performed. If you select Accept, Autopilot works behind the scenes to provide the results.
Figure 5. Autopilot Chat - Evaluate Requirements result review
- (Optional) Enter a query about the most immediate action that needs to be performed next.
Use the Generate Test Cases functionality to automatically create high-quality, structured test cases based on requirement details, user documents, retrieval-augmented generation (RAG), or user prompts.
To invoke Generate Test Cases from Autopilot Chat, reference a requirement by name or ID. Autopilot Chat interprets your intent, identifies the relevant requirement, and launches the Generate Test Cases tool where you can provide more context (labels, custom fields).
After generation completes, review the resulting test cases.
Query examples:
- Generate test cases for the ‘Submit Loan’ requirement.
- Generate test cases for UIB:24.
- Enter a query that generates test cases for a specific requirement.
Figure 6. Autopilot Chat - Test Generation query
- Select Configure and edit the fields: add any documents or select which documents to be included in the analysis, add or edit a prompt, select an AI model, and then either Accept or Reject the operation.
- If you select Reject, the operation is not performed. If you select Accept, review the results.
Figure 7. Autopilot Chat - Generate Test result review
Ask natural-language questions to learn how to use Test Manager. Autopilot Chat retrieves the information directly from the official documentation, complete with source links for further reading.
This makes onboarding new teams much faster and eliminates the need to switch between documentation and product screens.
Query examples:
- How do I create a requirement?
- How do I execute a test set?
Use the Find Obsolete Tests functionality to maintain a clean, up-to-date test repository by automatically identifying outdated or redundant test cases linked to requirements.
To invoke Find Obsolete Tests from Autopilot Chat, reference a requirement by name or ID. Autopilot Chat interprets your intent, identifies the relevant requirement, and launches the Find Obsolete Tests tool, where you can provide more context.
Autopilot Chat analyzes the relationship between requirements and their associated test cases to detect obsolete test cases caused by:
- Updated or deprecated requirements
- Redundant coverage of the same functionality
- Outdated test environments or dependencies
- Misaligned or unsupported test steps
Finding obsolete test cases allows testers to focus only on relevant, executable test assets, improving both test accuracy and maintenance efficiency.
When a query falls outside the scope of documentation or Autopilot Search, Autopilot Chat automatically switches to web search mode, but only within the context of Test Automation.
Autopilot Chat searches trusted public sources for topics related to testing frameworks, automation strategies, or QA methodologies, and then summarizes the most relevant insights.
Users receive contextual, automation-focused guidance while staying within the Test Manager domain.
Query examples:
- What are the best practices for writing automated regression tests?
- How should I design data-driven test cases in automation?