Generate tests for requirements
This page lists guidelines and best practices for effectively generating test cases with Autopilot™ in Test Manager.
This section outlines key characteristics of a requirement in Test Manager.
Requirements often involve specific capabilities linked to quality aspects such as functionality (what the software should do), performance (how fast it should operate), usability (how easy it is to use), and security (how securely it should operate), among many others.
AI models such as Autopilot™ rely on specificity to interpret requirements correctly. Broad or vague descriptions can lead to irrelevant or incorrect test cases. To mitigate this, start with a concise yet precise, user-focused statement that outlines the requirement's purpose, emphasizing the ultimate benefit to the user.
Example: For a life insurance application, you might start with:
"As a potential policyholder, I want to calculate my insurance premiums so that I can understand my potential costs".
This clarifies the expected benefit for the user and sets a definitive goal for testing that requirement.
The efficiency of Autopilot™ in generating accurate and detailed test steps largely depends on its understanding of the user journey and the application's sequence of operations. It is therefore crucial to detail the specific interactions the user has with the application and the subsequent application responses, from the start of the application up to the final test action. This helps Autopilot™ understand the chronological order of operations, leading to more accurate and detailed test steps.
Example: For the insurance premium calculation feature, describe the workflow as follows:
"The user starts on the main screen, navigates to the 'Get a quote' screen via the main menu. They then fill in their personal data, including age and gender, in the designated form fields. They select the desired insurance coverage and policy term from the available options. When the user clicks 'Calculate Premium', the application calculates and displays the premium on the next screen".
Clear, measurable acceptance criteria are vital for setting application expectations and guiding Autopilot™ to verify specific outcomes. They should encompass both positive and negative scenarios, including situations where users may not follow prescribed usage, may input invalid data, or when the application may reach an error state. Criteria should also consider non-functional factors like security, usability, and scalability. Without well-defined acceptance criteria, Autopilot™ might generate inadequate test cases.
Example: For the premium calculation feature of our life insurance application, specify concrete acceptance criteria such as the following:
- "The system must calculate the premium considering the age of the user. For every year above 25, an increment of $5 must be added to the base premium of $100"
- "The system must increase the premium by $50 for smokers due to the associated higher health risks"
- "If the user enters an age below 18, the system should display an error message"
- "The premium calculation process should not take more than 3 seconds when the number of concurrent users is less than or equal to 1000"
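Acceptance criteria written at this level of precision translate almost directly into executable checks. As a minimal sketch, the function name and structure below are hypothetical; only the amounts ($100 base premium, $5 per year above 25, $50 smoker surcharge, 18+ age check) come from the sample criteria themselves:

```python
BASE_PREMIUM = 100
AGE_THRESHOLD = 25
PER_YEAR_INCREMENT = 5
SMOKER_SURCHARGE = 50
MINIMUM_AGE = 18

def calculate_premium(age: int, smoker: bool = False) -> int:
    """Illustrative premium calculation derived from the sample criteria."""
    if age < MINIMUM_AGE:
        # "If the user enters an age below 18, the system should
        # display an error message"
        raise ValueError("Age must be at least 18")
    premium = BASE_PREMIUM
    if age > AGE_THRESHOLD:
        # "$5 must be added to the base premium for every year above 25"
        premium += (age - AGE_THRESHOLD) * PER_YEAR_INCREMENT
    if smoker:
        # "increase the premium by $50 for smokers"
        premium += SMOKER_SURCHARGE
    return premium
```

Each criterion maps to one branch of the function, which is exactly the property that makes such criteria easy for Autopilot™ to turn into distinct test cases.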
This section provides guidelines for directing Autopilot™'s focus toward the aspects that should be considered when generating test cases.
Guide Autopilot™ by providing additional instructions in the Provide Additional Guidance screen. Use the out-of-the-box prompts from the Prompt Library, which help generate end-to-end test cases from flow diagrams, generate tests for valid end-to-end scenarios, and generate tests to find elusive issues. You can also add your own custom prompts to the Prompt Library, especially those you frequently use for manual test case generation.
To generate a specific number of test cases, instruct Autopilot™ with commands like "Generate the top 20 test cases for this requirement." By default, Autopilot™ generates expected results only for the final test step in each test case. To generate expected results for each test step, use "Generate expected results for each test step". Additionally, Autopilot™ can generate preconditions and/or postconditions for manual test cases upon request. Specify what to include or exclude, as preconditions and postconditions are not generated by default.
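To illustrate what these options control, the sketch below models the shape of a generated manual test case. The class names are purely illustrative (not a Test Manager API); the behavior encoded in the comments reflects the defaults described above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str
    # By default only the final step of a test case carries an expected
    # result; instruct Autopilot explicitly to fill this in for every step.
    expected_result: str = ""

@dataclass
class ManualTestCase:
    name: str
    # Preconditions and postconditions are only generated on request.
    preconditions: List[str] = field(default_factory=list)
    steps: List[TestStep] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)

# Hypothetical test case for the premium calculation requirement.
tc = ManualTestCase(
    name="Calculate premium for a 30-year-old non-smoker",
    preconditions=["User is on the main screen"],
    steps=[
        TestStep("Navigate to the 'Get a quote' screen via the main menu"),
        TestStep("Enter age 30 and select gender in the form fields"),
        TestStep(
            "Select coverage and policy term, then click 'Calculate Premium'",
            expected_result="The premium is displayed on the next screen",
        ),
    ],
)
```

In this default shape, only the last step verifies an outcome; asking for per-step expected results would populate `expected_result` on every step.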
This section lists supporting documents you can provide to Autopilot™. These documents complement the description of a requirement in Test Manager and are intended to enhance Autopilot's understanding of the requirement, enabling it to generate more accurate and useful test cases.
To illustrate the step-by-step operations within the application, consider including use case diagrams, flowcharts, or process diagrams as images, or BPMN files. Process diagrams help Autopilot grasp the sequential and logical flow of user activities that are important for the specific requirement. With the help of these process representations, Autopilot can generate more precise test cases that align closely with the actual workflows of the application.
To make your UI/UX requirements easier for Autopilot to understand, consider adding visual diagrams such as mockups and wireframes. This is particularly useful when testing new front-end functionality and helps clarify the layout, user interactions, and elements to be tested.
In regulated industries such as healthcare, finance, or telecommunications, consider including compliance and regulatory documents. These guidelines often apply universally across various requirements (for example: user stories or use cases) in Test Manager. By uploading these documents, you enable Autopilot to integrate compliance standards into the test cases it generates for each requirement, not just those directly linked to specific compliance criteria. This approach ensures that all test cases adhere to industry regulations and that compliance is consistently addressed across all requirements tested by Autopilot.
Consider including a transcript of discussions about specific requirements. This could be from a meeting or a virtual session via online platforms, involving developers, product owners, and testers. Transcripts provide Autopilot with insights on how team members interpret, or plan to implement a requirement, offering context that can significantly enhance the precision of the generated test cases.
This section outlines the current limitations of Autopilot™.
You can only upload files with the following extensions, from which Autopilot processes only the text content:
- DOCX
- XLSX
- TXT
- CSV
- PNG
- JPG
- BPMN
The maximum input token capacity of Autopilot is 128,000, which is equivalent to approximately 96,000 words, or 512,000 characters.
Ensure that your requirement description and supporting documents do not exceed these limits.
To check the approximate token count of your documents, open the document as a TXT file and copy the content into the OpenAI Tokenizer tool. The token count provided is approximate; the actual token count can be higher.
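As a rough pre-upload self-check, you can apply the documented ratios (128,000 tokens ≈ 96,000 words ≈ 512,000 characters, i.e. roughly 4 characters or 0.75 words per token) yourself. This is only a heuristic sketch, not the tokenizer Autopilot actually uses, so treat its result as a lower bound:

```python
MAX_INPUT_TOKENS = 128_000  # documented Autopilot input capacity

def estimate_tokens(text: str) -> int:
    """Heuristic token estimate from the documented ratios."""
    by_chars = len(text) / 4              # ~4 characters per token
    by_words = len(text.split()) / 0.75   # ~0.75 words per token
    return int(max(by_chars, by_words))   # take the pessimistic estimate

def fits_input_limit(text: str, limit: int = MAX_INPUT_TOKENS) -> bool:
    """Check whether a requirement plus its documents likely fits."""
    return estimate_tokens(text) <= limit
```

Because the real tokenizer can produce a higher count, leave some headroom below the 128,000-token limit rather than filling it exactly.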