Test Results
When you start executing a test set, Test Manager goes through the following process:
- A Test Execution is created in the Test Results section of Test Manager. This serves as the container for the test results.
- For every test case within the executed test set, an empty Test Case Log is generated. This log is attached to the Test Execution and will hold its associated test results.
- As the execution continues, Test Manager fills each Test Case Log with results and related log information.
The chart below illustrates the process of executing a test set in Test Manager.
As a consequence, after a test set has been executed, any changes to the test set or its associated test cases do not affect the results from previous executions. Even when a test set is deleted, all the test executions remain unchanged.
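The containment described above can be sketched as a minimal data model. This is an illustrative sketch only; the class and field names are assumptions for the example, not Test Manager's actual API or schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative model only -- names are assumptions, not Test Manager's schema.
@dataclass
class TestCaseLog:
    test_case_key: str
    result: Optional[str] = None   # filled in as the execution progresses

@dataclass
class TestExecution:
    name: str
    logs: list = field(default_factory=list)

def start_execution(name, test_case_keys):
    # One empty Test Case Log is created per test case in the executed test set.
    execution = TestExecution(name)
    execution.logs = [TestCaseLog(key) for key in test_case_keys]
    return execution

execution = start_execution("Nightly run", ["TC-1", "TC-2"])
execution.logs[0].result = "Passed"   # results attach to the execution's own logs,
execution.logs[1].result = "Failed"   # not to the live test set definition
```

Because the logs belong to the execution rather than to the test set, later edits to the test set (or even deleting it) leave past results untouched.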
Who can see test results
All user roles can see test results.
For more information, see User and Group Access Management.
To view test executions, open Test Results in Test Manager. The execution of each test set is listed as a test execution entry. To understand how the test executions work behind the scenes, see Test Results.
By analyzing your test execution, you can take the following actions:
- Find test results that have been executed manually or automatically through Orchestrator.
- Check the progress on running test executions.
- Open the test set that has been executed.
- Examine logs and attachments.
- Create defects in your defect management system directly from Test Manager, if you have an Application Lifecycle Management tool integration. For more information, see ALM Tool Integration.
By default, the test executions are sorted based on the date on which the execution was finished. Currently running and pending executions are placed at the top.
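The default ordering can be sketched as a sort with two keys: unfinished executions first, then finished ones by completion date, newest first. The field names and sort key below are assumptions for illustration, not Test Manager's implementation.

```python
# Illustrative ordering sketch; field names are assumptions.
executions = [
    {"key": "EX-1", "state": "Finished", "finished": 2},
    {"key": "EX-2", "state": "Running", "finished": None},
    {"key": "EX-3", "state": "Finished", "finished": 5},
]

# Running/pending executions (no finish time) sort first; among finished
# executions, the most recently finished comes first.
ordered = sorted(
    executions,
    key=lambda e: (e["finished"] is not None, -(e["finished"] or 0)),
)
print([e["key"] for e in ordered])  # ['EX-2', 'EX-3', 'EX-1']
```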
You can identify the status through the color codes assigned to each test execution, as follows:
- Green: Test cases that passed.
- Red: Test cases that failed.
- Grey: Test cases without a definitive result, such as test cases that have not been executed yet but are part of a test set that is currently being executed.
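The color coding above amounts to a simple mapping. The helper below is a hypothetical sketch for illustration; the function name and status values are assumptions.

```python
# Hypothetical helper mirroring the color codes described above.
def status_color(status):
    if status == "Passed":
        return "green"
    if status == "Failed":
        return "red"
    # Anything without a definitive result (e.g., not yet executed
    # within a currently running test set) is shown in grey.
    return "grey"

print(status_color("Passed"))  # green
print(status_color(None))      # grey
```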
The results from automated test executions are imported from Orchestrator automatically. To have your automated tests imported to Test Manager, you need to meet the following conditions:
- The automated test needs to be part of a test set in your Test Manager project. For more information, see Automated Tests.
Note: If a Test Execution from Orchestrator holds results for test cases from several projects in Test Manager, the test execution is split in Test Manager. The results appear in the projects where the test cases are.
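The split described in the note can be sketched as grouping incoming results by the Test Manager project that owns each test case. The record fields below are assumptions for illustration, not the actual import payload.

```python
from collections import defaultdict

# Illustrative sketch: results arriving from one Orchestrator execution are
# grouped by owning project, so each project sees only its own results.
# Field names are assumptions.
results = [
    {"test_case": "TC-1", "project": "Billing", "status": "Passed"},
    {"test_case": "TC-2", "project": "Billing", "status": "Failed"},
    {"test_case": "TC-3", "project": "Onboarding", "status": "Passed"},
]

by_project = defaultdict(list)
for result in results:
    by_project[result["project"]].append(result)

print(sorted(by_project))  # ['Billing', 'Onboarding']
```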
Overriding the results of a test case can help in scenarios where the current test result does not accurately reflect the actual behavior of the application and re-execution is not an effective solution. You can override the test results of a test case and then you can clear the operation, if needed.
- Navigate to Test Results and open a test result.
- From the Results tab open a test case log.
- In the Assertions tab, select Tasks > Override result.
- In the Override test result window, configure the following fields:
- Change result to - select whether you want to override the result with the opposite status (Passed/Failed) or set the result to None.
- Comment - Type the reason for overriding the result.
- Select Confirm.
Overridden test results display an override icon next to the test result status. Select the icon to view the override details.
- Optionally, if you want to clear the override operation, select the override icon next to the result status of a test case log.
- In the Override details window, select Clear override.
- Optionally, if you want to edit the override operation, select the override icon next to the result status of a test case log.
- Perform changes, and click Confirm.
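The override semantics above can be sketched in a few lines: the original result is preserved so the override can later be cleared. This is a minimal model with assumed names, not Test Manager's implementation.

```python
# Minimal sketch of override/clear semantics; names are assumptions.
class TestCaseLog:
    def __init__(self, result):
        self.result = result      # the original, recorded result
        self.override = None      # (new_result, comment) when overridden

    def override_result(self, new_result, comment):
        self.override = (new_result, comment)

    def clear_override(self):
        # Clearing restores the originally recorded result.
        self.override = None

    @property
    def effective_result(self):
        return self.override[0] if self.override else self.result

log = TestCaseLog("Failed")
log.override_result("Passed", "Known environment issue, verified manually")
assert log.effective_result == "Passed"
log.clear_override()
assert log.effective_result == "Failed"
```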
To quickly find your test results, use the search function and the filters. Navigate within the page by configuring the paginator. Alternatively, you can use the breadcrumb to navigate between pages.
- Filter - You can use the filters to narrow your search. For example, you can search for test results by execution type, completion time, or status. The filters are automatically saved and remain active until you clear them.
- Search - Use the search bar at the top of the page to find test results by their key, execution type, or status (requires a full search term match).
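The filtering behavior described above can be sketched as matching records against a set of criteria. The function and field names below are assumptions for illustration only.

```python
# Hypothetical client-side filtering mirroring the filters described above.
executions = [
    {"key": "EX-1", "execution_type": "Automated", "status": "Passed"},
    {"key": "EX-2", "execution_type": "Manual", "status": "Failed"},
]

def filter_results(items, **criteria):
    # Keep only items whose fields exactly match every given criterion,
    # similar to the full-term match the search bar requires.
    return [i for i in items if all(i.get(k) == v for k, v in criteria.items())]

print(filter_results(executions, status="Failed"))  # [{'key': 'EX-2', ...}]
```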
To view detailed test results, go to Test Results.
- Test Set: For standard results tracking within test executions.
- Test Case: For cross-execution analysis when filtering the results based on criteria excluding test execution attributes.
You can use the Reporting Date filter to set a unified date across all test results from the test execution. This is useful when you run tests overnight, where some test cases are executed before and after midnight, preventing results from splitting onto separate dates.
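The overnight scenario above can be sketched as pinning every result from one run to a single reporting date, regardless of whether it finished before or after midnight. The variable names and dates are assumptions for illustration.

```python
from datetime import datetime, date

# Sketch of the Reporting Date idea: finish timestamps straddle midnight,
# but all results are reported under one unified date.
finished_at = [
    datetime(2024, 5, 1, 23, 55),  # finished before midnight
    datetime(2024, 5, 2, 0, 10),   # finished after midnight
]

reporting_date = date(2024, 5, 1)
reported = [(ts, reporting_date) for ts in finished_at]

# Both results now land on the same reporting date instead of splitting
# across May 1 and May 2.
assert all(d == reporting_date for _, d in reported)
```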
In the Results section you can examine all the executed test cases within the test set, and take action for each, as necessary. For more information, see the Execution log section.
You can also open the executed test set by right-clicking an entry in the Test Sets page and selecting Open Test Set, or directly within the test execution.
As part of test reporting, execution logs hold information such as execution details (e.g., data variation and screenshots), failed and passed assertions, and a detailed log of execution events.
To open a test case log, go to Test Results, open a test execution and then click a test case Key.
In the following table, you can view the type of information that is collected during test case execution.
Test execution reporting | Description |
---|---|
Assertions | View failed or passed assertions (e.g., Verify Expression activity) and associated screenshots, if any were taken during test case execution. To make sure that Orchestrator takes screenshots during executions, see the Orchestrator documentation. |
Logs | View INFO-level logs as part of the robot logs, with information about processes, the executing robot, and event logs, including failures. Select the icon to go to the Logs tab of the selected test case for a detailed description of the failure. |
Execution details | View argument details such as input and output values, as well as execution and robot details (e.g., project, machine, robot). |
Affected requirements | View the requirements that are assigned to the executed test case. You can use this tab to go directly to the affected requirement. |
Attachments | View the attachments collected during the test case execution. |
Prerequisites: Enable activity coverage for the desired test sets either in Orchestrator or Test Manager.
- Open the test set, select More Options and then Execute Automated.
- Go to Test Results and open the test set you executed.
- Go to the Activity Coverage tab and investigate the information on the activities that were covered during the test execution.
You can create defect reports including the execution log to your external defect management system, if you already have it integrated with Test Manager. For more information, see ALM Tool Integration.
To create a defect out of an execution log, you need to open a test case log, click Tasks, and then select Create Defect. After the defect has been created, a link is available in the execution log, so you can access the integrated external tool.
You can synchronize execution results with external tools, as part of the Application Lifecycle Management tool integration. Information that is gathered during execution, such as results, logs, timestamps, and other details is synchronized with the tool that you have integrated with Test Manager.
Note: At the moment, only one connection per project can be enabled for defect synchronization.
- To synchronize defects, you need to configure a connector in Test Manager. See available connections in Test Manager.
- You need to have executed a test set first.
All user roles aside from Read-only can synchronize defects.
For more information, see User and Group Access Management.
You can create defects when you access test case logs in the Test Results page.
The defect is created and synchronized with your external tool. You can open the defect directly in the tool (e.g., Atlassian Jira) by navigating to the test execution result that has a synchronized defect.
The Failed Tests Report gives you a summary of your test results, powered by AI (Artificial Intelligence). You generate this report by selecting a specific set of test results. Each report you create is stored and can be accessed anytime from the Insights tab under Test Results.
The report can have up to five sections, each of them showing the problems encountered in automated tests. Select Show to view the impacted test cases and search through them.
Name | Description |
---|---|
Overview | An overview of the test results that you selected for the report, showing information about the average failure rate, the test set failure rate, and the percentage of errors by severity. |
Top Failing Tests | Shows the most frequently failing test cases and allows you to access them directly. |
Common Errors | Highlights the most common errors encountered during test executions. |
Error Patterns | Categorizes errors and allows you to identify failure patterns based on them. This helps in easier troubleshooting and resolution. |
Recommendations | Offers best practices to prevent the errors encountered in the chosen test executions. |
- Open a project in Test Manager.
- Go to Test Results and select Generate Insights.
- In the Results table, select the test results to be included in the report.
- Optionally, you can filter the test results by:
- Keywords - use the Search bar.
- Execution Type - the type of execution.
- Execution Finished - the execution completion time.
- Status - the execution status.
- Select Generate Report.
When the generation process finishes, you receive an in-app notification and an email indicating the report status: Ready or Failed.
- If the report is ready, select the Insights Report is ready notification, or Open report in the email notification, to access the report. If the report failed, you may regenerate it.
- Select Show for each section of the report to view the test cases impacted by a certain error or recommendation.
- Navigate to the Insights tab inside Test Results.
- Optionally, to rename the insights report, select More Options, and then Rename.
- Select the Download button.
The Failed Tests Report is downloaded as a DOCX file.
The AI Trust Layer governance policy allows you to manage the use of AI-powered features within your organization. Although all members have default access to these features, you can use this policy to restrict access as needed. The AI Trust Layer governance policy empowers you to limit a user's access to certain AI-powered features or all of them, at a user, group, or tenant level. Additionally, it gives you the ability to decide which AI products users can access. You can create, modify, and implement this governance policy in AutomationOps.
If you want to deploy an AI Trust Layer governance policy and still use the AI-powered testing capabilities, ensure that, within the policy's Features Toggle, you select Yes for Enabling Test Manager features.
Check the following resources to learn how to create, configure, and deploy a governance policy for your organization.