Test Suite User Guide
Automation Cloud | Automation Cloud Public Sector | Automation Suite | Standalone
Last updated Oct 21, 2024

AI-powered generation

Important:

This feature is currently part of an audit process and is not to be considered part of the FedRAMP Authorization until the review is finalized. See the full list of features currently under review.

This page lists guidelines and best practices to effectively generate test cases using Autopilot™ in Test Manager.

Requirement description

This section outlines key characteristics of a requirement in Test Manager.

Requirements often involve specific capabilities linked to quality aspects such as functionality (what the software should do), performance (how fast it should operate), usability (how easy it is to use), and security (how securely it should operate), among many others.

1. The purpose of the requirement

AI models such as Autopilot™ rely on specificity to interpret requirements correctly. Broad or vague descriptions might lead to irrelevant or incorrect test cases. To mitigate this, start with a concise, yet precise, user-focused statement that outlines the requirement's purpose, emphasizing the ultimate benefit to the user.

Example: For a life insurance application, you might start with:

"As a potential policyholder, I want to calculate my insurance premiums so that I can understand my potential costs".

This clarifies the expected benefit for the user and sets a definitive goal for testing that requirement.

2. Application logic

The efficiency of Autopilot™ in generating accurate and detailed test steps largely depends on its understanding of the user journey and application sequence. It is therefore crucial to detail the specific interactions the user has with the application and the corresponding application responses, from the start of the application up to the final test action. This helps Autopilot™ understand the chronological order of operations, leading to more accurate and detailed test steps.

Example: For the insurance premium calculation feature, describe the workflow as follows:

"The user starts on the main screen, navigates to the 'Get a quote' screen via the main menu. They then fill in their personal data, including age and gender, in the designated form fields. They select the desired insurance coverage and policy term from the available options. When the user clicks 'Calculate Premium', the application calculates and displays the premium on the next screen".

3. Acceptance criteria

Clear, measurable acceptance criteria are vital for setting application expectations and guiding Autopilot™ to verify specific outcomes. They should encompass both positive and negative scenarios, including situations where users do not follow prescribed usage or enter invalid data, or where the application reaches an error state. Criteria should also consider non-functional factors like security, usability, and scalability. Without well-defined acceptance criteria, Autopilot™ might generate inadequate test cases.

Example: For the premium calculation feature of our life insurance application, specify concrete acceptance criteria such as the following (a sketch that turns these rules into verifiable checks follows the list):

  • "The system must calculate the premium considering the age of the user. For every year above 25, an increment of $5 must be added to the base premium of $100"
  • "The system must increase the premium by $50 for smokers due to the associated higher health risks"
  • "If the user enters an age below 18, the system should display an error message"
  • "The premium calculation process should not take more than 3 seconds when the number of concurrent users is less than or equal to 1000"

Additional instructions

This section lists examples of additional instructions you can give Autopilot™ so that it focuses on specific aspects when generating test cases.

End-to-end flow verification

Check the following list of guidelines that you can give to Autopilot when generating end-to-end test cases from flow diagrams (a small path-enumeration sketch follows the list):

  • Verify each unique path in the flow diagram as a separate test case.
  • Focus exclusively on testing end-to-end paths within the diagram.
  • Ensure each test case represents a complete journey from the beginning to the end.
  • Achieve comprehensive coverage by testing every complete journey within the diagram.
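
As a rough illustration of covering every unique path, the sketch below enumerates all complete journeys through a small flow diagram represented as an adjacency list. The diagram, node names, and helper function are hypothetical and are not read from Test Manager.

    # Minimal sketch: list every complete start-to-end path in a flow diagram.
    # The diagram, node names, and helper function are hypothetical examples.

    from typing import Dict, List, Optional

    flow: Dict[str, List[str]] = {
        "Start": ["Get a quote", "View policies"],
        "Get a quote": ["Calculate premium"],
        "View policies": ["End"],
        "Calculate premium": ["End"],
        "End": [],
    }

    def enumerate_paths(diagram: Dict[str, List[str]], node: str,
                        path: Optional[List[str]] = None) -> List[List[str]]:
        """Return every complete path from the given node to a terminal node."""
        path = (path or []) + [node]
        if not diagram[node]:                 # terminal node: one complete journey
            return [path]
        paths: List[List[str]] = []
        for successor in diagram[node]:
            paths.extend(enumerate_paths(diagram, successor, path))
        return paths

    # Each complete journey becomes one end-to-end test case.
    for number, journey in enumerate(enumerate_paths(flow, "Start"), start=1):
        print(f"Test case {number}: {' -> '.join(journey)}")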

Rapid test idea generation

Check the following list of guidelines that you can give to Autopilot to generate numerous ideas for quick testing:

  • Do not create any test steps, only test case titles.
  • Limit test case titles to a maximum of 12 words.
  • Create a minimum of 50 creative test cases.

Elusive issue detection

Check the following list of guidelines that you can give to Autopilot for generating test cases to find elusive issues:

  • Generate only unconventional, yet plausible test scenarios to reveal hidden issues.
  • Focus on test scenarios that are often missed in standard tests and require deeper insight.
  • Challenge system design and user behavior assumptions to find weaknesses.
  • Use a wide range of user behaviors, including atypical ones, to uncover issues.

Naming convention adherence

Check the following list of guidelines that you can give to Autopilot for generating test cases that follow a naming convention:

  • Start every test case title with the action verb "Verify."
  • Keep titles under six words, ensuring they are clear and informative.
  • Include "UiPath | TC-01" at the beginning of each test case title, where "TC-01" is the number of your test case, at the beginning of each test case title.

Valid end-to-end scenario testing

Check the following list of guidelines that you can give to Autopilot to generate test cases for valid end-to-end scenarios only:

  • Create test cases exclusively for valid, complete user journeys.
  • Avoid test cases for invalid input or field validations.
  • Keep test case titles under six words, ensuring they are clear and informative.

Boundary-value testing

Check the following list of guidelines that you can give to Autopilot for generating test cases focused on boundary-value testing (a small boundary-selection sketch follows the list):

  • Define valid ranges and identify minimum, maximum, and edge values for each input.
  • Focus test cases on these boundary values, including just inside and outside valid ranges.
  • Cover lowest, highest, and subdivided range limits in your test cases.
  • Ensure all input field boundaries across the application are tested.
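
The sketch below shows one way to pick boundary values for a numeric input. The 'age' field and its 18 to 65 valid range are assumptions used purely for illustration.

    # Minimal sketch of boundary-value selection for one numeric input field.
    # The field name and the 18-65 valid range are illustrative assumptions.

    from typing import List

    def boundary_values(minimum: int, maximum: int) -> List[int]:
        """Return values just outside, on, and just inside the valid range."""
        return sorted({
            minimum - 1, minimum, minimum + 1,   # lower boundary
            maximum - 1, maximum, maximum + 1,   # upper boundary
        })

    # Example: an 'age' field that accepts values from 18 to 65.
    for value in boundary_values(18, 65):
        expected = "accepted" if 18 <= value <= 65 else "rejected"
        print(f"Test case: enter age {value}, expect the input to be {expected}")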

Supporting documents

This section lists supporting documents you can provide to Autopilot™. These documents complement the description of a requirement in Test Manager and enhance Autopilot's understanding of that requirement, enabling it to generate more accurate and useful test cases.

Process diagrams

To illustrate the step-by-step operations within the application, consider including use case diagrams, flowcharts, or process diagrams as images or BPMN files. Process diagrams help Autopilot grasp the sequential and logical flow of user activities that are important for the specific requirement. With these process representations, Autopilot can generate more precise test cases that align closely with the actual workflows of the application.

Mockups and wireframes

To make your UI/UX requirements easier for Autopilot to understand, consider adding visual diagrams that depict them. This is particularly useful when testing new front-end functionalities and helps clarify the layout, user interactions, and elements to be tested.

Compliance documents

In regulated industries such as healthcare, finance, or telecommunications, consider including compliance and regulatory documents. These guidelines often apply universally across various requirements (for example: user stories or use cases) in Test Manager. By uploading these documents, you enable Autopilot to integrate compliance standards into the test cases it generates for each requirement, not just those directly linked to specific compliance criteria. This approach ensures that all test cases adhere to industry regulations and that compliance is consistently addressed across all requirements tested by Autopilot.

Discussion transcripts

Consider including a transcript of discussions about specific requirements. This could be from a meeting or a virtual session on an online platform, involving developers, product owners, and testers. Transcripts provide Autopilot with insights into how team members interpret, or plan to implement, a requirement, offering context that can significantly enhance the precision of the generated test cases.

Functional limitations

This section outlines the current limitations of Autopilot™.

Supported file types

You can upload only files with the following extensions:

  • DOCX
  • XLSX
  • TXT
  • PNG
  • JPG
  • PDF
  • BPMN
Note: Autopilot processes only the text content in the files. Images within the files are not processed.

Input token capacity

The maximum input token capacity of Autopilot is 128,000 tokens, which is equivalent to approximately 96,000 words or 512,000 characters.

Ensure that your requirement description and supporting documents do not exceed this limit.

Tip: From our observations, 100 tokens roughly translate to about 75 words or 400 characters.

To check the approximate token count of your documents, open the document as a TXT file and copy the content into the OpenAI Tokenizer tool. The provided token count is an approximation; the actual token count can be higher.
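
If you only need a rough offline estimate before uploading, you can apply the ratios above (about 75 words or 400 characters per 100 tokens) directly, as in the minimal sketch below. The helper and file name are hypothetical, and the result is only an approximation; the tokenizer can report a higher count.

    # Rough token estimate based on the ratios above: ~100 tokens per 75 words or 400 characters.
    # The file name is hypothetical; the real tokenizer can report a higher count.

    def estimate_tokens(text: str) -> int:
        by_words = len(text.split()) / 0.75       # roughly 0.75 words per token
        by_characters = len(text) / 4             # roughly 4 characters per token
        return int(max(by_words, by_characters))  # keep the more cautious estimate

    with open("requirement.txt", encoding="utf-8") as file:
        content = file.read()

    print(f"Estimated tokens: {estimate_tokens(content):,} (input limit: 128,000)")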

Test case generation

Autopilot is currently limited to generating a maximum of 50 test cases at a time. If the number of test cases to generate is not specified, Autopilot generates 10 test cases.
