
Test Results

When you start executing a test set, Test Manager goes through the following process:

  1. A Test Execution is created in the Test Results section of Test Manager. This serves as the container for the test results.
  2. For every test case within the executed test set, an empty Test Case Log is generated. This log is attached to the Test Execution and will hold its associated test results.
  3. As the execution continues, Test Manager fills each Test Case Log with results and related log information.
Note: Every time a test execution is created, Test Case Logs for all associated test cases from the original test set are added. Furthermore, dynamic assignments are resolved at this stage. This means existing Test Executions remain consistent, regardless of updates or deletions made to the original test set.

The chart below illustrates the process of executing a test set in Test Manager.



As a consequence, after a test set has been executed, any changes to the test set or its associated test cases do not affect the results from previous executions. Even when a test set is deleted, all the test executions remain unchanged.
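
As a rough mental model, this snapshot behavior can be sketched as follows (illustrative Python, not Test Manager's actual data model; all names are made up):

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class TestCaseLog:
        """Empty log created per test case; filled in as execution continues."""
        test_case_key: str
        result: Optional[str] = None      # "Passed", "Failed", or None while pending
        log_lines: list = field(default_factory=list)

    @dataclass
    class TestExecution:
        """Container created in Test Results when a test set starts executing."""
        test_set_name: str
        logs: list = field(default_factory=list)

    def start_execution(test_set_name, test_case_keys):
        # Steps 1-2: create the container and one empty log per test case.
        # Dynamic assignments are resolved now, so the snapshot is fixed.
        execution = TestExecution(test_set_name)
        execution.logs = [TestCaseLog(key) for key in test_case_keys]
        return execution

    run = start_execution("Smoke Tests", ["TC-1", "TC-2"])
    # Deleting TC-2 from the test set afterwards leaves run.logs untouched.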

Who can see test results

All user roles can see test results.

For more information, see User and Group Access Management.

Analyze Test Results

To view test executions, open Test Results in Test Manager. The execution of each test set is listed as a test execution entry. To understand how the test executions work behind the scenes, see Test Results.



By analyzing your test execution, you can take the following actions:

  • Find results for tests that were executed manually or automatically through Orchestrator.
  • Check the progress on running test executions.
  • Open the test set that has been executed.
  • Examine logs and attachments.
  • Create defects in your defect management system directly from Test Manager, if you have an Application Lifecycle Management tool integration. For more information, see ALM Tool Integration.

By default, the test executions are sorted based on the date on which the execution was finished. Currently running and pending executions are placed at the top.
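
The ordering can be pictured as a simple two-part sort key (illustrative Python; the field names are assumptions):

    from datetime import datetime

    executions = [
        {"name": "Nightly run", "finished": datetime(2024, 4, 20, 6, 0)},
        {"name": "Smoke run",   "finished": None},   # still running or pending
        {"name": "Older run",   "finished": datetime(2024, 4, 18, 6, 0)},
    ]

    # Unfinished executions first, then finished ones, newest first.
    executions.sort(key=lambda e: (
        e["finished"] is not None,
        -(e["finished"].timestamp() if e["finished"] else 0),
    ))
    # Resulting order: Smoke run, Nightly run, Older run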

You can identify the status of each test execution through its color code, as follows (see the sketch after this list):

  • Green: Test cases that passed.
  • Red: Test cases that failed.
  • Grey: Test cases without a definitive result, such as test cases that have not been executed yet but are part of a test set that is currently being executed.
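
If it helps to think of the color coding as a lookup, a minimal sketch (illustrative Python, not Test Manager's implementation):

    STATUS_COLORS = {
        "Passed": "green",
        "Failed": "red",
        None: "grey",   # no definitive result yet, e.g. still executing
    }

    def status_color(result):
        return STATUS_COLORS.get(result, "grey")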

Results from automated tests

The results of automated test executions are imported from Orchestrator automatically. For your automated test results to appear in Test Manager, the following condition must be met:

  • The automated test needs to be part of a test set on your Test Manager project. For more information, see Automated Tests.

    Note: If a test execution from Orchestrator holds results for test cases from several projects in Test Manager, the test execution is split in Test Manager, and the results appear in the projects where the test cases belong (see the sketch below).
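
The split can be pictured as grouping results by the project that owns each test case (illustrative Python; the field names are assumptions, not the real Orchestrator schema):

    from collections import defaultdict

    orchestrator_results = [
        {"test_case": "TC-1", "project": "Billing",  "status": "Passed"},
        {"test_case": "TC-2", "project": "Billing",  "status": "Failed"},
        {"test_case": "TC-3", "project": "Shipping", "status": "Passed"},
    ]

    # One test execution per Test Manager project.
    per_project = defaultdict(list)
    for result in orchestrator_results:
        per_project[result["project"]].append(result)
    # per_project["Billing"] and per_project["Shipping"] each become
    # their own test execution in the corresponding project.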

Overriding test results

Overriding the result of a test case can help in scenarios where the current test result does not accurately reflect the actual behavior of the application and re-execution is not an effective solution. You can override the test result of a test case and later clear or edit the override, if needed (a conceptual sketch follows the steps below).

  1. Navigate to Test Results and open a test result.
  2. From the Results tab, open a test case log.
  3. In the Assertions tab, select Tasks > Override result.
  4. In the Override test result window, configure the following fields:
    1. Change result to - select whether you want to override the result with the opposite status (Passed/Failed) or set the result to None.
    2. Comment - Enter the reason for overriding the result.
  5. Select Confirm.

    Test results that you overrode display an override icon next to the result status. Select the icon to view the override details.
  6. Optionally, to clear the override operation, select the override icon next to the result status of a test case log.
    1. In the Override details window, select Clear override.
  7. Optionally, to edit the override operation, select the override icon next to the result status of a test case log.
    1. Make your changes, and select Confirm.
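
Conceptually, the override behaves like the following sketch (illustrative Python; how Test Manager stores this internally is not documented here, so the model is made up):

    OPPOSITE = {"Passed": "Failed", "Failed": "Passed"}

    def override_result(log, change_to, comment):
        # Mimics the dialog: flip to the opposite status or set it to None,
        # always recording the original result and the reason.
        log["override"] = {"original": log["result"], "comment": comment}
        log["result"] = OPPOSITE[log["result"]] if change_to == "opposite" else None

    def clear_override(log):
        # Mimics Clear override: restore the original result.
        log["result"] = log.pop("override")["original"]

    log = {"result": "Failed"}
    override_result(log, "opposite", "Known environment flakiness")  # now "Passed"
    clear_override(log)                                              # back to "Failed"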

Working with test execution results

To view detailed test results, go to Test Results.

In the Results tab, select View By and choose your preferred view for test results. You can select one of the following views:
  • Test Set: For standard results tracking within test executions.
  • Test Case: For cross-execution analysis, when filtering the results by criteria other than test execution attributes.


Select an entry to open the detailed view. A typical test execution detailed view shows information such as when the execution started, its duration, and the execution logs.

You can use the Reporting Date filter to set a unified date across all test results from the test execution. This is useful for overnight runs where some test cases finish before midnight and others after, which would otherwise split the results onto separate dates.
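
For example, a run that straddles midnight would otherwise report on two dates (illustrative Python):

    from datetime import datetime, date

    finish_times = [datetime(2024, 4, 21, 23, 50), datetime(2024, 4, 22, 0, 10)]

    # Grouping by each result's own calendar date splits the run across two days:
    split_dates = {t.date() for t in finish_times}    # {2024-04-21, 2024-04-22}

    # The Reporting Date filter pins every result to one agreed date instead:
    reporting_date = date(2024, 4, 21)
    unified_dates = {reporting_date for _ in finish_times}  # {2024-04-21}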

In the Results section you can examine all the executed test cases within the test set, and take action for each, as necessary. For more information, see the Execution log section.

You can also open the executed test set directly within the test execution, or by right-clicking an entry on the Test Sets page and selecting Open Test Set.



Execution log

As part of test reporting, execution logs hold information such as execution details (e.g., data variation and screenshots), failed and passed assertions, and a detailed log of execution events.

To open a test case log, go to Test Results, open a test execution, and then click a test case Key.



The following list describes the type of information that is collected during test case execution.

Assertions

View failed or passed assertions (e.g., Verify Expression activity), and associated screenshots if any were taken during test case execution.
To make sure that Orchestrator takes screenshots during executions, visit the following resources:
  • Default roles in Orchestrator - to check if the default roles have the Test Case Execution Artifacts permission assigned. If the default roles don't have this permission, you can create a custom role that includes it.
  • Managing roles in Orchestrator - to learn how to create, edit, or import a role in Orchestrator.

Logs

View INFO-level logs as part of the RobotLogs, with information about processes, the execution robot, and event logs, including failures.

Select the icon to go to the Logs tab of the selected test case, for a detailed description of the failure.

Execution details

View argument details such as input and output values, as well as execution and robot details (e.g., project, machine, robot).

Affected requirements

View the requirements that are assigned to the test case that has been executed. You can use this tab to go directly to the affected requirement.
Attachments

View:
  • the attachments uploaded using the Attach Document activity.
  • the attachments of a test case result that is linked from Orchestrator.

Viewing activity coverage

Note: Activity coverage is available only for automated test executions.

Prerequisites: Enable activity coverage for the desired test sets either in Orchestrator or Test Manager.

  1. Open the test set, select More Options, and then Execute Automated.
  2. Go to Test Results and open the test set you executed.
  3. Go to the Activity Coverage tab and investigate the information on the activities that were covered during the test execution (see the sketch below).
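
As a rough illustration of what activity coverage measures (illustrative Python; the actual calculation in Orchestrator and Test Manager may differ):

    # Share of workflow activities that the test execution actually exercised.
    workflow_activities = {"OpenBrowser", "TypeInto", "Click", "VerifyExpression"}
    executed_activities = {"OpenBrowser", "TypeInto", "VerifyExpression"}

    coverage = len(executed_activities & workflow_activities) / len(workflow_activities)
    print(f"Activity coverage: {coverage:.0%}")  # 75%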


Create defects from results

You can create defect reports, including the execution log, in your external defect management system, if you have already integrated it with Test Manager. For more information, see ALM Tool Integration.

To create a defect out of an execution log, you need to open a test case log, click Tasks, and then select Create Defect. After the defect has been created, a link is available in the execution log, so you can access the integrated external tool.

Defect Synchronization

You can synchronize execution results with external tools, as part of the Application Lifecycle Management tool integration. Information that is gathered during execution, such as results, logs, timestamps, and other details is synchronized with the tool that you have integrated with Test Manager.
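
The synchronized data can be pictured as a payload like the following (purely illustrative Python; these field names are assumptions, not Test Manager's schema or API):

    defect_payload = {
        "test_case_key": "TC-2",
        "result": "Failed",
        "timestamp": "2024-04-22T06:00:00Z",
        "logs": ["Step 3: Verify Expression failed"],
    }
    # An enabled connection would push this execution data to the
    # integrated ALM tool (e.g., Atlassian Jira).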

Note that, at the moment, only one connection per project can be enabled for defect synchronization.

Who can synchronize defects

All user roles aside from Read-only can synchronize defects.

For more information, see User and Group Access Management.

Create Defect

You can create defects when you access test case logs in the Test Results page.

  1. Navigate to Test Results.
  2. Open a test result, and then click the test case key to open the logs.
  3. Click Tasks and select Create Defect.


The defect is created and synchronized with your external tool. You can open the defect directly in the tool (e.g., Atlassian Jira) by navigating to the test execution result that has a synchronized defect.



Unlink Defect

When you unlink defects from an external tool, the entry created in the tool remains unchanged. In Test Manager, the test execution result will not be linked with an external tool.

  1. Navigate to Test Results.
  2. Open a test result, and then click Tasks.
  3. Select Unlink Defect.

AI-Powered test insights - Preview

The Failed Tests Report gives you a summary of your test results, powered by AI (Artificial Intelligence). You generate this report by selecting a specific set of test results. Each report you create is stored and can be accessed anytime from the Insights tab, under Test Results.

The report can have up to five sections, each showing the problems encountered in automated tests. The following list describes each section:

  • Overview: An overview of the test results that you selected for the report, showing information about the average failure rate, the test set failure rate, and the percentage of errors by severity (see the sketch after this list).
  • Top Failing Tests: Shows the most frequently failing test cases and allows you to access them directly.
  • Common Errors: Highlights the most common errors encountered during test executions.
  • Error Patterns: Categorizes errors and allows you to identify failure patterns based on them, making troubleshooting and resolution easier.
  • Recommendations: Offers best practices to prevent the errors encountered in the chosen test executions.
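
As a rough sketch of how the Overview metrics could be derived (illustrative Python; the exact formulas used by the report are not documented here, so these are assumptions):

    results = [
        {"test_set": "Smoke",      "status": "Failed", "severity": "High"},
        {"test_set": "Smoke",      "status": "Passed", "severity": None},
        {"test_set": "Regression", "status": "Failed", "severity": "Low"},
    ]

    failed = [r for r in results if r["status"] == "Failed"]
    average_failure_rate = len(failed) / len(results)            # 2/3
    failing_sets = {r["test_set"] for r in failed}
    all_sets = {r["test_set"] for r in results}
    test_set_failure_rate = len(failing_sets) / len(all_sets)    # 2/2
    errors_by_severity = {
        s: sum(r["severity"] == s for r in failed) / len(failed)
        for s in ("High", "Low")
    }                                                            # 50% High, 50% Low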

Generating Failed tests report

Prerequisites: To generate an AI-powered test insights report, your Test Manager role must have the following permission: Test Execution - Read.
  1. Open a project in Test Manager.
  2. Go to Test Results and select Generate Insights.
  3. In the Results table, select the test results to be included in the report.
  4. Optionally, you can filter the test results by:
    1. Keywords - use the Search bar.
    2. Execution Type - the type of execution.
    3. Execution Finished - the execution completion time.
    4. Status - the execution status.
  5. Select Generate Report.

    When the generation process finishes, you receive an in-app notification and an email indicating the report status: Ready or Failed.



  6. If the report is ready, select the Insights Report is ready notification, or Open report in the email notification, to access the report. If the report failed, you can regenerate it.


Downloading Failed tests report

  1. Navigate to the Insights tab inside Test Results.
    1. Optionally, to rename the insights report, select More Options, and then Rename.
  2. Select the Download button.

    The Failed Tests Report is downloaded as a DOCX file.

User access management for AI-powered testing

The AI Trust Layer governance policy allows you to manage the use of AI-powered features within your organization. Although all members have default access to these features, you can use this policy to restrict access as needed. The AI Trust Layer governance policy empowers you to limit a user's access to certain AI-powered features or all of them, at a user, group, or tenant level. Additionally, it gives you the ability to decide which AI products users can access. You can create, modify, and implement this governance policy in AutomationOps.

If you want to deploy an AI Trust Layer governance policy and still use the AI-powered testing capabilities, ensure that, within the policy's Features Toggle, you select Yes for Enabling Test Manager features.

Check the following resources to learn how to create, configure, and deploy a governance policy for your organization.
