AI-powered testing
This feature is currently part of an audit process and is not considered part of the FedRAMP Authorization until the review is finalized. See the full list of features currently under review.
UiPath® Test Suite™ provides AI-powered continuous testing capabilities through Autopilot™. Autopilot for Testers is a collection of AI capabilities that boosts the productivity of testers throughout the entire testing lifecycle.
This guide shows you how to use the following AI features:
- AI-powered evaluation: Autopilot™ assists you in evaluating requirements for quality aspects such as clarity, completeness, and consistency in Test Manager.
- AI-powered generation: Autopilot™ assists you in generating manual test cases and test steps from requirements and supporting documents in Test Manager.
- AI-powered automation: Autopilot™ supports you in generating coded test automation, as well as synthetic test data from text, in Studio Desktop.
- AI-powered insights: Autopilot™ assists you in gaining insights into why test cases are failing, without the need for pre-built reporting templates, in Test Manager.
Start by creating a requirement, such as Submit loan request for the UiBank Application. Describe the application flow and criteria needed for the loan application process. Then select Evaluate quality to evaluate the requirement using Autopilot™ and generate a list of suggestions that you can implement directly.
Provide supporting documents to Autopilot™, along with additional guidance through a prompt that you choose from your library or type in yourself.
After you trigger the evaluation, expand each suggestion to update its status based on your implementation progress. You can add a suggestion to your requirement and mark its status as In Work or Done, or remove suggestions you do not need.
Generate more suggestions or regenerate them with different supporting documents or additional guidance using Suggest More and Regenerate.
You can also keep suggestions for future reference by exporting them to Word.
Visit AI-powered evaluation and AI-powered evaluation: Best practices to understand how to efficiently evaluate your requirements using Autopilot.
Use Autopilot™ to generate a list of potential test cases.
- Generate tests from requirement: You can generate test cases from the requirement details, together with uploaded documents and additional instructions. Open a requirement and select Generate tests.
- Generate tests for SAP transactions: You can generate test cases for SAP transactions from Heatmap and gaps discovered in Change impact analysis, using uploaded documents and additional instructions.
Refine the generation process with documents and instructions for Autopilot™ to use when generating the test cases. After you select Generate tests, review the generated test cases and create them if you are satisfied, or refine them with more details otherwise.
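Conceptually, each generated test case you review is a titled sequence of steps with expected results. The following sketch is plain Python for illustration only, not the Test Manager data model or API; the class and field names are hypothetical:

```python
# Illustrative sketch: NOT the Test Manager API, just a way to picture
# what a generated manual test case contains (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str            # what the tester does
    expected_result: str   # what should happen

@dataclass
class GeneratedTestCase:
    title: str
    steps: list = field(default_factory=list)

# Hypothetical output for the "Submit loan request" requirement:
case = GeneratedTestCase(
    title="Submit loan request with valid applicant data",
    steps=[
        TestStep("Open the UiBank loan application form",
                 "The form is displayed"),
        TestStep("Enter valid applicant details and submit",
                 "A confirmation with a loan ID appears"),
    ],
)
print(f"{case.title}: {len(case.steps)} steps")
```

Reviewing the generated list amounts to accepting, editing, or discarding entries of this shape before creating the tests.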
Visit AI-powered generation, Generating test cases for a specific transaction, and AI-powered generation: Best practices to learn how to use the test generation feature to its full potential.
Prerequisites: Install Studio Desktop 2024.10.1 or higher to use AI-powered automation.
Connect Studio to the Test Manager instance where you created your test cases. In the Test Explorer panel, right-click a manual test case linked from Test Manager and select Generate Coded Test Case. This generates a coded test case from the manual steps, which you can then use for automation. Visit Generating coded test case from manual test case for more information.
You can also choose Generate with Autopilot when adding test data to your test cases. Autopilot initially generates potential arguments and variables, which you can further refine as required. Visit AI-generated test data to check how to generate synthetic test data using AI.
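To picture what synthetic test data can look like before you refine it, here is a minimal, self-contained sketch. It is plain Python for illustration, not Autopilot's generator, and the field names (applicantName, loanAmount, and so on) are hypothetical:

```python
# Minimal sketch of synthetic test data for a loan-request test case.
# Plain Python for illustration; NOT the Autopilot feature itself.
import random

def generate_loan_request(seed=None):
    """Return one randomized, hypothetical loan-request record."""
    rng = random.Random(seed)
    return {
        "applicantName": rng.choice(["Ana Pop", "John Smith", "Wei Chen"]),
        "loanAmount": rng.randrange(1_000, 50_000, 500),  # hypothetical limits
        "termMonths": rng.choice([12, 24, 36, 60]),
        "email": "test.user@example.com",                 # placeholder address
    }

# Generate a small, reproducible batch of test data rows:
rows = [generate_loan_request(seed=i) for i in range(3)]
for row in rows:
    print(row)
```

Seeding each record makes the batch reproducible, which mirrors the idea of starting from generated arguments and variables and then refining them by hand.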
Visit AI-powered automation: Best practices to discover how to efficiently create and manage your test automations in Studio Desktop.
Get actionable insights into your test results by generating a report with Autopilot detailing why your test cases are repeatedly failing.
To generate the report, go to Execution and select Generate Insights. Choose your test results, then select Generate Insights again. Access your report in Execution, under the Insights tab.
Visit AI-powered insights and AI-powered insights: Best practices to better understand how to identify issues in your test executions.
Visit Autopilot licensing to check information about how Autopilot activities are measured and licensed.
The AI Trust Layer governance policy allows you to manage the use of AI-powered features within your organization. Although all members have access to these features by default, you can use the policy to restrict access to some or all AI-powered features at the user, group, or tenant level, and to decide which AI products users can access. You can create, modify, and deploy this governance policy in Automation Ops.
If you want to deploy an AI Trust Layer governance policy and still use the AI-powered testing capabilities, ensure that, within the policy's Features Toggle, you select Yes for Enabling Test Manager features.
Check the following resources to learn how to create, configure, and deploy a governance policy for your organization.