AI-powered automation
Autopilot offers the following capabilities:
- Converting manual test cases into automations.
- Converting text into code.
- Generating synthetic test data.
After you link Studio to Test Manager, the Test Explorer panel in Studio Desktop gives you a view of all test cases in your Test Manager project. The panel displays both automated and manual tests, and Autopilot can convert the manual tests into automated ones.
Visit Transforming manual tests into coded test cases to learn more about the conversion process.
Autopilot uses UI Automation capabilities to convert manual test steps into automated ones. To reference the UI elements that you want to test, Autopilot needs an object repository reference for each of them. It is therefore crucial to maintain a consistent naming convention: use the same control names in your manual test steps as in your object repository.
For instance, consider the following manual test step: "Click on 'Submit loan application'". In this case, Autopilot generates a Click activity and attempts to locate an object repository element named Submit loan application. Keeping control names consistent across your manual steps facilitates a smooth conversion.
A typical manual test step might have the following structure: "Type 'john.doe@gmail.com' into the 'Email' field". This step includes the following information used for automating it:
- Action: Type, which is recognized as the Type Into activity.
- Data: 'john.doe@gmail.com'.
- Target: 'Email', which is recognized as the corresponding object repository element.
Check the following table for the verbs you should use in your manual steps. These verbs are converted into the appropriate activity or API within Studio Desktop.
| Manual test step | UiPath activity | UiPath API |
| --- | --- | --- |
| "'Click' on 'My button'" | Click | Click |
| "'Type' 'hello world' into 'My textfield'" | Type Into | TypeInto |
| "'Get Text' from 'My label'" | Get Text | GetText |
| "'Check' 'My checkbox'" | Check | Check |
| "'Select item' '3 years' from 'Loan term'" | Select Item | SelectItem |
If a form that requires data input is displayed on your screen, you can use the 'Fill Form' keyword to tell Autopilot to automate it.
Consider the following manual test step as an example: "Fill the form on the screen named 'myFormScreen' with the following values: Email: 'john@doe.com', Loan Amount: '10000', Loan Term: '3'". Autopilot executes the 'Fill Form' command within your coded automation, populating the values across all recognized controls on the specified form.
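As a rough sketch, inside the same kind of coded test case as above, the 'Fill Form' step could expand into a sequence that writes each value into its matching object repository control. The MyFormScreen descriptors below are placeholders, and the exact code Autopilot emits may differ.

```csharp
// Sketch only: one plausible expansion of the 'Fill Form' step, writing each value
// from the manual step into the matching control on the hypothetical 'myFormScreen'.
var form = uiAutomation.Open(ObjectRepository.Descriptors.LoanApp.MyFormScreen);
form.TypeInto(ObjectRepository.Descriptors.LoanApp.MyFormScreen.Email, "john@doe.com");
form.TypeInto(ObjectRepository.Descriptors.LoanApp.MyFormScreen.LoanAmount, "10000");
form.TypeInto(ObjectRepository.Descriptors.LoanApp.MyFormScreen.LoanTerm, "3");
```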
Autopilot can also turn natural-language text into code. You can use it to:
- Generate any C# code.
- Refactor existing code.
- Generate a UiPath® automation.
Visit Generating code to learn about the various methods you can use for generating code.
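For example, a prompt such as "Write a C# helper that validates an email address and checks that a loan amount is within range" might produce ordinary C# along these lines. The class name, regular expression, and limits below are illustrative, not actual Autopilot output.

```csharp
using System.Text.RegularExpressions;

public static class LoanInputValidator
{
    // Basic user@domain.tld shape check; illustrative only.
    public static bool IsValidEmail(string email) =>
        !string.IsNullOrWhiteSpace(email) &&
        Regex.IsMatch(email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");

    // Keeps the loan amount within an assumed allowed range.
    public static bool IsValidLoanAmount(decimal amount) =>
        amount >= 1_000m && amount <= 50_000m;
}
```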
Test data management can consume up to 50% of your testing effort. Autopilot™ can help you save this time by auto-generating synthetic test data for your test cases.
When you generate synthetic test data, Autopilot considers the existing arguments within your workflow, as well as any additional instructions provided in the prompt. With this information, it tries to generate test data that leads to high coverage within your test case.
To produce a robust set of test data, we recommend using arguments wherever feasible, instead of relying solely on local variables or specific string values. By using arguments, you allow Autopilot to generate test data that aligns more closely with the dynamic requirements of your workflow.
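For instance, in a coded test case the data fields can be exposed as input arguments (parameters on the entry point) rather than hard-coded local strings, which gives Autopilot named, typed fields to generate data against. The argument names below are hypothetical.

```csharp
using UiPath.CodedWorkflows;

public class SubmitLoanApplicationTest : CodedWorkflow
{
    // Exposing the data as arguments (instead of local variables such as
    // var email = "john@doe.com";) lets Autopilot vary these values per data record.
    [TestCase]
    public void Execute(string in_Email, decimal in_LoanAmount, int in_LoanTermYears)
    {
        // ...drive the application under test with the argument values...
    }
}
```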
By default, Autopilot generates approximately ten data records, aiming to achieve high code or activity coverage within your test case. However, if your test strategy requires a specific algorithm, such as 'pairwise' or 'all combinations', include this instruction in your prompt.
For instance, consider the following example prompt: "Generate test data for every argument and combine the data fields pairwise."
You can also instruct Autopilot to perform various other tasks, such as generating a specific number of data records or including additional data fields.
Consider the following example prompts:
- "Generate a minimum of 25 data records."
- "Include the 'Country' field in the dataset, assigning a distinct country value to each record."