Autopilot for Testers
Autopilot for Testers is a collection of AI-powered digital systems, also known as agents, designed to boost the productivity of testers throughout the entire testing lifecycle. These capabilities are integrated into UiPath® Studio Desktop and UiPath® Test Manager, two key components of UiPath® Test Suite™.
Autopilot for Testers offers capabilities that can be grouped into the following categories:
- Agentic test design: Autopilot™ in Test Manager helps you evaluate requirements for quality aspects such as clarity, completeness, and consistency. Autopilot also helps you generate manual test cases for requirements (such as user stories) and SAP transactions.
- Agentic test automation: Autopilot™ in Studio Desktop helps you convert text, such as manual test cases, into coded and low-code automated UI and API test cases, generate test data for data-driven testing, refactor coded test automation, fix validation errors, generate expressions, and perform fuzzy verifications, among other tasks.
- Agentic test management: Autopilot™ in Test Manager allows you to get actionable insights into test results through the Test insights report.
The following sections describe what Autopilot™ can help you with when you create testing projects.
Start by creating a requirement, such as Submit loan request for the UiBank Application. Describe the application flow and the criteria needed for the loan application process. Then select Evaluate quality to evaluate the requirement using Autopilot™ and generate a list of suggestions that you can implement directly.
Provide supporting documents to Autopilot™, along with additional guidance through a prompt that you choose from your library or type in yourself.
After you trigger the evaluation, expand each suggestion to update its status based on your implementation progress. You can choose to add the suggestion to your requirements, marking its status as In Work or Done. The option to remove suggestions is also available.
Generate more suggestions or regenerate them with different supporting documents or additional guidance using Suggest More and Regenerate.
You can also keep suggestions for future reference by exporting them to Word.
Visit Quality-check requirements and AI-powered evaluation: Best practices to understand how to efficiently evaluate your requirements using Autopilot.
Use Autopilot™ to generate a list of potential test cases. Autopilot generates test cases primarily from the requirement details, supplemented by uploaded documents and additional instructions.
To generate tests for a requirement, open the requirement and select Generate tests.
Refine the generation process with documents and instructions for Autopilot™ to use when generating the test cases. After you select Generate tests, review the generated test cases and create tests if you are satisfied, or refine them with more details if not.
Visit Generate tests for requirements and AI-powered generation: Best practices to learn how to use the test generation feature to its full potential.
With Autopilot™, you can generate test cases for SAP transactions from the Heatmap, and for gaps discovered in the Change impact analysis. You can refine the generation process with uploaded documents and additional instructions for Autopilot™ to use when generating the test cases.
Visit Generating test cases for a specific transaction, as well as AI-powered generation: Best practices, to check how to successfully generate tests for SAP transactions.
Autopilot™ assists you in generating coded automations, either from text and existing code, or from manual test cases created in Test Manager.
Generating coded automations from text and existing code
In the code editor, use the Ctrl + Shift + G shortcut, or the Autopilot icon, to generate code.
Generating coded automations from manual tests
- Create a coded test case, where the steps are shown as comments, using Create Coded Test Case. Then prompt Autopilot™ to generate code based on the comments.
- Directly generate a fully functional coded test case using Generate Coded Test Case with Autopilot.
If you want to create a coded test case that automates a scenario involving APIs, Autopilot™ can help generate the code for that scenario. In the code editor, right-click and select Generate Code, then provide Autopilot™ with the necessary instructions. For instance, specify the API that you want to use and the API key that it should access.
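The code Autopilot produces in Studio is a C# coded test case, so the following is only a minimal Python sketch of the kind of scenario you would describe in your instructions: call an endpoint with an API key, then assert on the response. The loan-status endpoint, the key value, and the `fake_transport` stub are all hypothetical, invented here so the sketch runs without a network.

```python
def get_loan_status(api_key, loan_id, transport):
    """Call a (hypothetical) loan-status endpoint through an injected transport."""
    headers = {"Authorization": f"Bearer {api_key}"}
    return transport("GET", f"/loans/{loan_id}/status", headers)

def fake_transport(method, path, headers):
    # Stub standing in for the real HTTP layer, so the sketch is self-contained.
    assert headers["Authorization"].startswith("Bearer ")
    return {"status": 200, "body": {"loanId": path.split("/")[2], "state": "approved"}}

# The shape of the assertions a generated API test typically ends with.
response = get_loan_status("demo-key", "L-42", fake_transport)
assert response["status"] == 200
assert response["body"]["state"] == "approved"
```

Injecting the transport keeps the test deterministic; in a real coded test case the HTTP call and credentials would come from your project's configuration instead of inline literals.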
Autopilot™ helps you enhance coded automations through refactoring. Consider a situation where a coded test case contains a segment of code that could be more readable. To start the refactoring process, select the code, right-click the selection, and select Generate Code. Then give Autopilot instructions on how to refactor the selected code.
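As an illustration of the kind of readability refactoring you might request (a Python sketch, not actual Studio C# output), the two functions below behave identically, but the second is the style you could ask Autopilot to produce from the first:

```python
# Before: the kind of verbose segment you might select for refactoring.
def total_before(orders):
    t = 0
    for o in orders:
        if o["qty"] > 0:
            t = t + o["qty"] * o["price"]
    return t

# After: same behavior, expressed as a single readable expression --
# the sort of result you would ask Autopilot for in your instructions.
def total_after(orders):
    return sum(o["qty"] * o["price"] for o in orders if o["qty"] > 0)

orders = [{"qty": 2, "price": 5.0}, {"qty": 0, "price": 9.0}]
assert total_before(orders) == total_after(orders) == 10.0
```

Keeping the old and new versions side by side while you review the suggestion makes it easy to confirm the refactoring did not change behavior.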
Autopilot™ assists you in generating low-code test cases, either from text or from manual tests created in Test Manager.
Generating low-code test cases from manual tests
After you connect Studio Desktop to Test Manager, navigate to the Test Explorer and search for your manual tests. Right-click a manual test and select Generate Test Case with Autopilot.
Generating low-code test cases from text
Open your blank low-code test case and select Generate with Autopilot from the Designer panel. Enter the desired test steps, and then select Generate to trigger the test case generation.
For more information, visit Generating low-code test cases using AI.
You can choose Generate with Autopilot when adding test data to your test cases. Autopilot™ initially generates potential arguments and variables, which you can refine further as required. Visit AI-generated test data to learn how to generate synthetic test data using AI.
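To make the idea concrete, here is a hedged Python sketch of what generated test data for a data-driven test might look like in shape; the field names (`applicantName`, `loanAmount`, `termMonths`) and value ranges are invented for this example and are not what Autopilot necessarily produces:

```python
import random

def generate_loan_test_data(n, seed=0):
    """Produce n synthetic rows of loan-request test data (illustrative only)."""
    rng = random.Random(seed)  # seeded so repeated runs yield the same rows
    names = ["Ana", "Ben", "Chen", "Dana"]
    return [
        {
            "applicantName": rng.choice(names),
            "loanAmount": rng.randrange(1_000, 50_000, 500),  # hypothetical range
            "termMonths": rng.choice([12, 24, 36, 60]),
        }
        for _ in range(n)
    ]

rows = generate_loan_test_data(3)
assert len(rows) == 3
assert all(1_000 <= r["loanAmount"] < 50_000 for r in rows)
```

Each row maps onto one data-driven iteration of the test case, with each key corresponding to a test argument or variable.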
Get actionable insights into your test results by generating a report with Autopilot that details why your test cases are repeatedly failing.
To generate the report, go to Execution. Select Generate Insights, choose your test results, and select Generate Insights. Access your report in Execution under the Insights tab.
Visit Generate test insights report and AI-powered insights: Best practices to better understand how to identify issues in your test executions.
Use Autopilot™ to import manual test cases from Excel files. You can import one file at a time, and the file can contain multiple sheets. The import process transfers all information to Test Manager unless you specify otherwise. For instance, test case properties such as Priority, Status, or Owner are imported into Test Manager as custom field values at the test case level.
To better identify the imported test cases, you can instruct Autopilot, in the Provide import instructions section, to place certain labels on the imported test cases.
To check how to efficiently import manual test cases, visit Importing manual test cases.
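Conceptually, the import maps spreadsheet columns onto test case fields, carrying unrecognized columns over as custom fields. The Python sketch below illustrates that mapping under stated assumptions: the column names, the semicolon-separated steps, and the `rows_to_test_cases` helper are all hypothetical, not the actual import pipeline.

```python
import csv
import io

# Columns assumed (for this sketch) to map to built-in test case fields;
# everything else becomes a custom field value on the test case.
KNOWN_FIELDS = {"Name", "Steps"}

def rows_to_test_cases(reader):
    cases = []
    for row in reader:
        custom = {k: v for k, v in row.items() if k not in KNOWN_FIELDS}
        cases.append({
            "name": row["Name"],
            "steps": row["Steps"].split(";"),  # assume ;-separated steps
            "customFields": custom,            # e.g. Priority, Owner
        })
    return cases

sheet = io.StringIO("Name,Steps,Priority,Owner\nSubmit loan,Open app;Fill form,High,QA\n")
cases = rows_to_test_cases(csv.DictReader(sheet))
assert cases[0]["customFields"] == {"Priority": "High", "Owner": "QA"}
```

This is why extra spreadsheet columns such as Priority or Owner surface as custom field values rather than being dropped during import.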
With the help of the Autopilot search, you can search for any test object within a project using natural language. If you are not sure what to search for, you can use one of the example search queries that Autopilot provides. Additionally, after returning the results, Autopilot allows you to perform actions on the resulting objects.
For information on using the search and exploring example search queries, visit Searching with Autopilot.
Visit Autopilot licensing to check information about how Autopilot activities are measured and licensed.
The AI Trust Layer governance policy allows you to manage the use of AI-powered features within your organization. All members have access to these features by default, but you can use the policy to restrict access to some or all of them at the user, group, or tenant level. Additionally, it lets you decide which AI products users can access. You can create, modify, and implement this governance policy in Automation Ops.
If you want to deploy an AI Trust Layer governance policy and still use the AI-powered testing capabilities, ensure that, within the policy's Features Toggle, you select Yes for Enabling Test Manager features.
Check the following resources to learn how to create, configure, and deploy a governance policy for your organization.
- Quality-check requirements
- Generate tests for requirements
- Create tests for SAP transactions
- Generate coded automations
- Generate coded API automation
- Refactor coded automations
- Generate low-code automations
- Generate synthetic test data
- Generate test insights report
- Import manual test cases - Preview
- Search Test Manager project - Preview
- Licensing
- User access
- User access management with Autopilot for Testers