
Test Suite User Guide

Last updated Dec 4, 2024

Autopilot for Testers

Important:

This feature is currently part of an audit process and is not to be considered part of the FedRAMP Authorization until the review is finalized. See the full list of features currently under review.

Autopilot for Testers is a collection of AI-powered digital systems, also known as agents, designed to boost the productivity of testers throughout the entire testing lifecycle. These capabilities are integrated into UiPath® Studio Desktop and UiPath® Test Manager, two key components of UiPath® Test Suite™.

Autopilot for Testers offers capabilities that can be grouped into the following categories:

  • Agentic test design: Autopilot™ in Test Manager helps you evaluate requirements for quality aspects such as clarity, completeness, and consistency. Autopilot also helps you generate manual test cases for requirements (such as user stories) and SAP transactions.
  • Agentic test automation: Autopilot™ in Studio Desktop helps you convert text such as manual test cases into coded and low-code automated UI and API test cases, generate test data for data-driven testing, refactor coded test automation, fix validation errors, generate expressions, and perform fuzzy verifications, among other tasks.
  • Agentic test management: Autopilot™ in Test Manager gives you actionable insights into test results through the Test insights report.

The following sections describe what Autopilot™ can help you with when you create testing projects.

Quality-check requirements

Start by creating a requirement, such as Submit loan request for the UiBank Application. Describe the application flow and the criteria needed for the loan application process. Then select Evaluate quality to have Autopilot™ evaluate the requirement and generate a list of suggestions that you can implement directly.

Provide supporting documents to Autopilot™, along with additional guidance through a prompt that you choose from your library or type in yourself.

After you trigger the evaluation, expand each suggestion to update its status based on your implementation progress. You can add a suggestion to your requirements, marking its status as In Work or Done, or remove it.

Generate more suggestions or regenerate them with different supporting documents or additional guidance using Suggest More and Regenerate.

You can also keep suggestions for future reference by exporting them to Word.

Visit Quality-check requirements and AI-powered evaluation: Best practices to understand how to efficiently evaluate your requirements using Autopilot.

Generate tests for requirements

Use Autopilot™ to generate a list of potential test cases. Autopilot generates test cases from the requirement details, uploaded documents, and any additional instructions you provide.

Open a requirement and select Generate tests.

Refine the generation process with documents and instructions for Autopilot™ to use when generating the test cases. After you select Generate tests, review the generated test cases and create tests if you are satisfied, or refine them with more details if not.

Visit Generate tests for requirements and AI-powered generation: Best practices to learn how to use the test generation feature to its full potential.

Generate tests for SAP transactions

With Autopilot™, you can generate test cases for SAP transactions from the Heatmap, and for gaps discovered in the Change impact analysis. You can refine the generation process with documents and additional instructions for Autopilot™ to use when generating the test cases.

Visit Generating test cases for a specific transaction, as well as AI-powered generation: Best practices, to learn how to successfully generate tests for SAP transactions.

Generate coded automations

Autopilot™ assists you in generating coded automations, either from text and existing code, or from manual test cases created in Test Manager.

Generating coded automations from text and existing code
In a new or existing coded automation, you can use Autopilot™ to generate code from natural language, from comments in your automation, or from existing lines of code. Use the Ctrl + Shift + G shortcut, or the Autopilot icon, to generate code.
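
To illustrate the comment-to-code pattern described above, here is a hypothetical sketch. UiPath coded test cases are written in C#; this Python stand-in only shows how step comments map to generated statements, and the loan-request flow, function name, and values are invented for the example.

```python
# Hypothetical illustration of generating code from step comments.
# In Studio Desktop the generated code would be C# using UiPath APIs.

def submit_loan_request(amount, term_months):
    """Simulates the 'Submit loan request' flow used as an example earlier."""
    # Step 1: Validate that the requested amount is positive.
    if amount <= 0:
        return {"status": "rejected", "reason": "invalid amount"}
    # Step 2: Compute a simple monthly installment (placeholder business rule).
    installment = round(amount / term_months, 2)
    # Step 3: Return a confirmation the test can assert on.
    return {"status": "submitted", "installment": installment}

# A generated test asserts on the observable outcome of each step:
result = submit_loan_request(12000, 24)
assert result["status"] == "submitted"
assert result["installment"] == 500.0
```

Each comment becomes a concrete statement, so the manual steps stay visible in the final automation.
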
Generating coded automations from manual tests
After you connect Studio Desktop to Test Manager, navigate to the Test Explorer and search for your manual tests. From here, you can choose one of the following scenarios to generate a coded test case for it:
  • Create a coded test case, where the steps are shown as comments, using Create Coded Test Case. Then prompt Autopilot™ to generate code based on the comments.
  • Directly generate a fully functional coded test case using Generate Coded Test Case with Autopilot.
Visit Creating a coded test case from a manual test case and Generating coded test cases using AI for more information about generating test cases.

Generate coded API automation

If you want to create a coded test case that automates a scenario involving APIs, Autopilot™ can help generate the code for this scenario. In the code editor, right-click and select Generate Code, then give Autopilot™ the necessary instructions. For instance, specify the API that you want to use and the API key that it should access.
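
The kind of API test case this produces can be sketched as follows. This is a language-neutral illustration, not UiPath output: the endpoint path, header, and helper names are invented, and the HTTP layer is stubbed so the example stays deterministic.

```python
# Hypothetical sketch of an API test case with a stubbed HTTP layer.
import json
import urllib.request

def get_loan_status(base_url, api_key, loan_id, opener=urllib.request.urlopen):
    """Calls a (hypothetical) loan-status endpoint and parses the JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/loans/{loan_id}",
        headers={"Authorization": f"Bearer {api_key}"},  # the provided API key
    )
    with opener(req) as resp:
        return json.loads(resp.read())

# In a test, the transport is replaced by a fake so no network is needed:
class FakeResponse:
    def __init__(self, payload): self._payload = payload
    def read(self): return json.dumps(self._payload).encode()
    def __enter__(self): return self
    def __exit__(self, *args): return False

status = get_loan_status(
    "https://example.test", "dummy-key", 42,
    opener=lambda req: FakeResponse({"id": 42, "status": "approved"}),
)
assert status["status"] == "approved"
```

Injecting the opener keeps the generated test assertable without calling the real service.
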

Refactor coded automations

Autopilot™ helps you enhance coded automations through refactoring. Consider a situation where a coded test case contains a segment of code that could be more readable. To start the refactoring process, select the desired code, right-click the selection, then select Generate Code. Lastly, give Autopilot™ instructions on how to refactor the selected code.
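
A refactor of the kind you might request could look like the following sketch. The eligibility rule and all names are invented for illustration; the point is that the refactored version keeps the original behavior while naming each rule.

```python
# Before: one dense expression that is hard to read.
def eligible_before(age, income, debts):
    return age >= 18 and income - sum(debts) > 1000 and income > 0

# After: named constants and a helper make each rule explicit
# without changing behavior.
MIN_AGE = 18
MIN_DISPOSABLE_INCOME = 1000

def disposable_income(income, debts):
    return income - sum(debts)

def eligible_after(age, income, debts):
    return (
        age >= MIN_AGE
        and income > 0
        and disposable_income(income, debts) > MIN_DISPOSABLE_INCOME
    )

# Both versions agree on the same inputs, which a test can confirm:
for case in [(25, 3000, [500, 400]), (17, 3000, []), (30, 900, [])]:
    assert eligible_before(*case) == eligible_after(*case)
```

Checking the old and new versions against the same cases is a simple guard that a readability refactor did not change the logic.
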

Generate low-code automations

Autopilot™ assists you in generating low-code test cases, either from text or from manual tests created in Test Manager.

Generating low-code test cases from manual tests

After you connect Studio Desktop to Test Manager, navigate to the Test Explorer and search for your manual tests. Right-click a manual test and select Generate Test Case with Autopilot.

Generating low-code test cases from text

Open your blank low-code test case and select Generate with Autopilot from the Designer panel. Enter the desired test steps, and then select Generate to trigger the test case generation.

For more information, visit Generating low-code test cases using AI.

Generate synthetic test data

You can choose Generate with Autopilot when adding test data to your test cases. Autopilot™ initially generates potential arguments and variables, which you can refine as required. Visit AI-generated test data to learn how to generate synthetic test data using AI.
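
Data-driven testing with generated rows can be sketched as follows. This is illustrative only: Autopilot produces the argument rows for you, and here simple seeded random generation stands in for that step; the amount/term arguments echo the loan example used earlier.

```python
# Minimal sketch of data-driven testing over synthetic argument rows.
import random

def generate_test_data(n, seed=7):
    """Produce n synthetic (amount, term_months) argument rows."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    return [(rng.randrange(1000, 50001, 500), rng.choice([12, 24, 36, 48]))
            for _ in range(n)]

def monthly_installment(amount, term_months):
    return round(amount / term_months, 2)

# The same test body runs once per generated row:
rows = generate_test_data(5)
for amount, term in rows:
    assert monthly_installment(amount, term) > 0
assert len(rows) == 5
```

Seeding the generator means a failing row can be reproduced exactly on the next run.
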

Generate test insights report

Get actionable insights into your test results by generating a report with Autopilot detailing why your test cases are repeatedly failing.

To generate the report, go to Execution. Select Generate Insights, choose your test results, and select Generate Insights. Access your report in Execution under the Insights tab.

Visit Generate test insights report and AI-powered insights: Best practices to better understand how to identify issues in your test executions.

Import manual test cases - Preview

Use Autopilot™ to import manual test cases from Excel files. You can import one file at a time, and a file can contain tests on multiple sheets. The import process transfers all information to Test Manager, unless specified otherwise. For instance, test case properties such as Priority, Status, or Owner are imported into Test Manager as custom field values at the test case level.

To better identify the imported test cases, you can instruct Autopilot, in the Provide import instructions section, to place certain labels on the imported test cases.
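
The property-to-custom-field mapping and the labeling step described above can be illustrated with a small sketch. Autopilot handles the actual Excel parsing; this example reads CSV text with the standard library purely to show how columns such as Priority or Owner become custom field values, and the sheet contents and label are invented.

```python
# Hypothetical sketch of mapping imported columns to custom fields and labels.
import csv
import io

SHEET = """Name,Priority,Status,Owner
Submit loan request,High,Draft,alice
Reject invalid amount,Medium,Ready,bob
"""

def import_test_cases(text, label="imported-by-autopilot"):
    cases = []
    for row in csv.DictReader(io.StringIO(text)):
        cases.append({
            "name": row["Name"],
            "labels": [label],  # the label requested in the import instructions
            # Every non-name column becomes a custom field value:
            "custom_fields": {k: v for k, v in row.items() if k != "Name"},
        })
    return cases

cases = import_test_cases(SHEET)
assert cases[0]["custom_fields"]["Priority"] == "High"
assert cases[1]["labels"] == ["imported-by-autopilot"]
```

Filtering the imported cases by that label later is what makes them easy to find in Test Manager.
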

To check how to efficiently import manual test cases, visit Importing manual test cases.

Licensing

Visit Autopilot licensing to check information about how Autopilot activities are measured and licensed.

User access

User access management with Autopilot for Testers

The AI Trust Layer governance policy allows you to manage the use of AI-powered features within your organization. Although all members have default access to these features, you can use this policy to restrict access as needed. The AI Trust Layer governance policy empowers you to limit a user's access to some or all AI-powered features, at a user, group, or tenant level. Additionally, it gives you the ability to decide which AI products users can access. You can create, modify, and implement this governance policy in Automation Ops.

If you want to deploy an AI Trust Layer governance policy and still use the AI-powered testing capabilities, ensure that, within the policy's Features Toggle, you select Yes for Enabling Test Manager features.

Check the following resources to learn how to create, configure, and deploy a governance policy for your organization.
