Task Mining
Automation Cloud | Automation Cloud Public Sector | Automation Suite
Last updated Nov 13, 2024

DEPRECATED
Unassisted Task Mining analysis guide

This guide serves as an introduction to working with Unassisted Task Mining analysis results after a project is created, recording of actions is completed, and an analysis is run. It is intended for Business Analysts, Project Administrators, and others who want to learn how to interpret Unassisted Task Mining results and identify tasks with the potential for optimization. This guide also provides guidance on how to handle unexpected results and noise from the analysis.

To generate results, the AI algorithm looks for occurrences of the same sequence of steps within the recorded data. It works without any context and might therefore present task candidates that do not fully capture real-life tasks from beginning to end.
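
To illustrate the general idea, the sketch below counts recurring contiguous step sequences across recordings; a sequence that recurs often enough becomes a task candidate. This is a simplified, hypothetical Python illustration, not the actual Unassisted Task Mining algorithm: the sample recordings and the n-gram counting approach are assumptions made for demonstration.

```python
from collections import Counter

# Hypothetical recordings: each is the ordered list of screens a user visited.
# These are illustrative sample values, not real Task Mining output.
recordings = [
    ["CRM:Search", "CRM:CaseDetails", "Excel:UpdateSheet", "Outlook:SendMail"],
    ["Outlook:ReadMail", "CRM:Search", "CRM:CaseDetails", "Excel:UpdateSheet"],
    ["CRM:Search", "CRM:CaseDetails", "Excel:UpdateSheet", "Outlook:SendMail"],
]

def frequent_subsequences(recordings, length, min_count=2):
    """Count contiguous step sequences of a given length across all recordings."""
    counts = Counter()
    for rec in recordings:
        for i in range(len(rec) - length + 1):
            counts[tuple(rec[i:i + length])] += 1
    return {seq: n for seq, n in counts.items() if n >= min_count}

# Sequences that recur across recordings become task candidates.
print(frequent_subsequences(recordings, length=3))
```

Note how the most frequent candidate in this toy data omits the Outlook:ReadMail step that actually starts the second recording: this is exactly how a candidate can miss the real start or end of a task.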

The analysis results may sometimes include tasks and steps that are irrelevant from a business perspective; this is considered noise. To identify automation candidates, it is important for the reviewer to differentiate between high-quality tasks and noise.

Tasks identified by the AI algorithm

The tasks identified by the AI algorithm may align with real-life tasks, but they may also differ from what is expected. Not all task candidates are suitable for automation, and the reviewer needs to be familiar with the different types of results they might encounter. The identified task candidates may:

  1. Not show the expected tasks
  2. Show unexpected tasks
  3. Split one real-life task into multiple tasks
  4. Partially capture a task without the real start and end

1. Results do not show expected tasks

Unassisted Task Mining applies an algorithm to identify tasks that may be good candidates for automation or process optimization. The AI algorithm is not guaranteed to detect any specific task, and it may detect a partial process or even a larger process than expected. By following the steps provided in this document, the reviewer can determine whether the identified tasks are suitable for automation. Since Unassisted Task Mining is not guaranteed to detect known tasks or to pick out every variation or iteration, it should not be used purely for monitoring known tasks; Assisted Task Mining is a better fit for use cases that document or review known tasks.

2. Results show unexpected tasks

Unassisted Task Mining identifies task candidates, which are then ranked by their potential as automation opportunities. Some results may not represent a real-life end-to-end task, but the reviewer can still identify them as good automation candidates based on the steps presented in this document.

3. Results split real tasks into multiple Task Mining tasks

The Unassisted Task Mining algorithm looks for the most frequently occurring and consistent sequence of steps. Depending on how consistently users executed the task, a real-life task may be split up into multiple tasks in the result. The end of one task may be the start of the next one. The task might still be suitable for automation or process improvement actions. In that case, we recommend exporting these subtasks to Process description documents (.docx).

4. Results partially capture a task without the real start or end

The AI algorithm identifies the most consistent sequences of steps as tasks. Depending on the variability of users executing the task, the middle of a task might be more consistent than the start and/or the end, causing the algorithm to detect this subtask as a candidate instead of the full end-to-end task.

This is likely to occur when the start and/or end of a task involves highly multi-functional applications such as Outlook, Excel, etc. These applications are likely used during multiple tasks, and it is difficult for the algorithm to distinguish specific occurrences of them as the start or end of a task. In this case, we recommend focusing on the bulk of the task rather than covering 100% of the clicks a user performed. If the task is a suitable candidate for automation, the missing start and/or end can be added when building the automation.

Prioritizing tasks for analysis

Depending on the recorded data, the Task Mining algorithm might identify many tasks. Therefore, it's important for the reviewer to prioritize which candidates to analyze first, to avoid wasting time on tasks that are unlikely to be suitable automation candidates. The Analysis overview and the Tasks tabular view on the Results tab provide input for this prioritization.

The tasks in the Results are ordered by how likely they are to be a suitable automation candidate: the higher a task is on the list, the more likely it is to be a good candidate. The task labeled 'Task 1' has been identified as the best automation candidate by the Unassisted Task Mining algorithm, considering various factors including repeatability and complexity. However, this ranking does not indicate the overall quality of the Task Mining results; it only means that 'Task 1' is relatively more likely to be a good automation candidate than 'Task 10'.

When analyzing a task based on the default ranking, you may find that the task has a high automation potential but does not entirely capture the end-to-end task. In that case, it is recommended to check for alternative task candidates based on a different ranking. As a reviewer, you can change the default ranking by selecting the sort icon in the column headers of the Tasks tabular view. This enables you to identify tasks with a high automation potential based on different metrics. Once you have found a representative task, you can select it and mark it as Favorite.

Focus on the higher-ranked tasks. In general, the higher-ranked tasks are of higher quality. Task candidates ranked past 10 or 20 are usually lower quality.

Investigate the metrics of the different tasks. Each task displays different metrics such as the total time spent by recording users on this task, number of recording users who have performed this task, median number of actions in the task, etc. Consider these metrics in your analysis and apply your own criteria based on the business context of your project.

For example, if a task has a much shorter Total duration and far fewer Traces and Actions compared to another task, this might indicate that the task has lower automation potential. However, note that there are no overall guidelines for Total duration that hold across all Task Mining projects. These metrics should always be interpreted in the business context of the specific project.
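
As a minimal illustration of metric-driven prioritization, the sketch below re-ranks a handful of hypothetical tasks by different metrics, much like sorting a column in the Tasks tabular view. All names and values are made up for demonstration.

```python
# Hypothetical task metrics, loosely mirroring the Tasks tabular view columns.
tasks = [
    {"name": "Task 1", "total_duration_min": 540, "traces": 120, "median_actions": 14},
    {"name": "Task 2", "total_duration_min": 90, "traces": 15, "median_actions": 6},
    {"name": "Task 3", "total_duration_min": 300, "traces": 80, "median_actions": 22},
]

# Re-rank by whichever metric matters in your business context.
by_duration = sorted(tasks, key=lambda t: t["total_duration_min"], reverse=True)
by_traces = sorted(tasks, key=lambda t: t["traces"], reverse=True)

for task in by_duration:
    print(task["name"], task["total_duration_min"], "min")
```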

Make use of the Favorites and rename functionality. When prioritizing the different tasks for a deeper analysis, it is important to keep an overview of what has been prioritized or already analyzed. Marking tasks as Favorites and renaming tasks with descriptive names can help structure the analysis.

Analyzing individual tasks

After you have prioritized the tasks, the analysis can begin. The following section first provides some insights to keep in mind during the analysis, and then a step-by-step guide on how to navigate the analysis view.

Keep in mind during the analysis

The steps are based on screens. The task and its steps are displayed at the level of a unique user interface/screen and do not represent individual click or type actions. Multiple clicks or type actions that occur on the same screen are usually grouped into one step by the AI algorithm. Therefore, the graph does not show each individual click or type action.
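
The grouping can be pictured with the minimal sketch below, which assumes a simple chronological list of (screen, action) pairs; the data and the grouping rule are illustrative, not the actual algorithm.

```python
from itertools import groupby

# Hypothetical low-level actions in chronological order: (screen, action) pairs.
actions = [
    ("CRM:CaseDetails", "click"),
    ("CRM:CaseDetails", "type"),
    ("CRM:CaseDetails", "click"),
    ("Excel:UpdateSheet", "type"),
    ("Excel:UpdateSheet", "click"),
]

# Consecutive actions on the same screen collapse into a single step, which is
# why the task graph shows screens rather than individual clicks or keystrokes.
steps = [(screen, len(list(group)))
         for screen, group in groupby(actions, key=lambda a: a[0])]
print(steps)  # [('CRM:CaseDetails', 3), ('Excel:UpdateSheet', 2)]
```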

A task needs at least two steps (screens) to be identified as such. For the Task Mining algorithm to identify a task, it needs to consist of a clear start and end step. Therefore, an action that is only performed on one screen will not be identified as a task.

Steps are the same throughout the different tasks. Steps are not bound to one specific task. A step that occurs in one task can also occur in another one.

The PII masking algorithm may mask text that is not PII or fail to mask actual PII. The Personally Identifiable Information (PII) module is an AI algorithm that detects PII in screens. The algorithm can make mistakes: some PII may not be masked, and text that is not PII may be masked. These mistakes depend on the detected text on the screen as well as the context of the words themselves. If the text is not accurately captured by the OCR or is partially cut off, it might not be masked. Additionally, if other words on the screen differ, the same text may be identified as PII in one screen and not in another.
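
As a toy illustration of why OCR errors lead to missed masking, the snippet below uses a simple email regex as a stand-in for the PII module; the real module is an AI model, not a regex, so this only demonstrates the failure mode.

```python
import re

# Toy masking rule standing in for the PII module: mask email addresses.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def mask(text):
    return EMAIL.sub("[MASKED]", text)

print(mask("Contact jane.doe@example.com for details"))  # masked correctly
# OCR cut off the domain, so the pattern no longer matches and the PII leaks:
print(mask("Contact jane.doe@exampl for details"))
```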

If a task does not make visual sense when examining traces, it is likely not a high-quality task. The algorithm can produce noisy and irrelevant tasks, especially at lower ranks in the task ranking. These tasks might be short or long. Once this becomes clear after examining a few traces, do not waste time trying to interpret them.

Look for the bulk of the process (80/20 rule). The tasks may not align fully with the expected real-life task but only partially cover a certain part of it. As already mentioned above, depending on the variability of actions taken by recording users executing the task, certain steps of a task might be more consistent than others, causing the algorithm to only detect certain steps of the task instead of the full end-to-end task.

The task might still be suitable for automation regardless of missed steps. These missed steps can be added when building the automation.

Scroll through the results. The traces of a task and the screenshots for the steps are sorted chronologically. Therefore, it is recommended to scroll through the lists to review the results at multiple points.

Step-by-step Analysis

To closely analyze the prioritized tasks, follow the steps below. This will help in differentiating between automation candidates and noisy tasks.

  1. On the analysis details page, locate the task you want to analyze in the Tasks tabular view list.
    • Change the status of the task to Review in progress.

    • Select the task to display the task details. A list of available variants (a grouping of similar traces within a task) and traces (individual instances of the task performed) for the task is displayed in the Traces panel.
    • An overview of the task is displayed in the Task diagram.

    • Select the Step action preview option to show the screenshots in the steps of the graph.
    • Review the steps in the task to understand what is happening.

    • Rename the steps; this helps you keep an overview of which steps have been reviewed.

    • The number between two consecutive steps (rectangles in the task graph) represents the number of traces that executed those two steps in sequence.

  2. Analyze the variants to determine their quality.
    • Variants are ordered by the number of traces detected.

    • Hover over a variant to see its number of traces relative to the total number of traces for the task.
    • Select the variant to display the traces.
  3. Review traces to see how one iteration of the task was performed by a single user.
    • Traces are ordered chronologically. We recommend reviewing traces at the beginning of the list, in the middle, and toward the end.
    • From a variant, select the trace you want to review; the diagram refreshes to show the steps from the selected trace.
    • A high-quality task will contain many traces that look similar. Look for the following indicators:

      • Traces have similar steps in the middle of the trace.
      • Traces make sense from a business perspective.
      • Check the screenshots to see whether information on the item being worked on is the same within each trace but different between traces (e.g., issue ID, customer name, invoice number). Make sure you are analyzing this at the right level, as an invoice number may appear in multiple traces while each trace covers a different line of the invoice.
    • If you determine during the analysis of the traces that a task is of poor quality, it is recommended not to dwell on it and to move on to the next task in your priority list.
    • Filter out the low-quality traces. Even a high-quality task will contain some low-quality traces, where the algorithm made a mistake. These traces are often much longer or much shorter than the others and include noise or irrelevant actions. Remove them by applying the filters from the Filter panel (see the sketch after this list for a simple length-based heuristic).

    • Use the Favorites option to prioritize traces for further analysis.

  4. Analyze the steps of the trace to determine their quality.
    • Examine the screenshots of the steps of the trace to understand what is happening. A high-quality step is consistent in the application used and the work that is performed.

    • The screenshots are ordered chronologically, so it is good practice to review them at the beginning, middle, and end of the list.
    • Some inconsistent and noisy steps are acceptable, but ideally, a task with high automation potential will have at least a few high-quality steps in the middle of the graph that are part of most traces. Delete any duplicate or obsolete screenshots.

    • Rename the steps; this helps you keep an overview of which steps have been reviewed.
  5. In the Tasks tabular view, update the status of the tasks.
    • Use the RPA Candidate status for tasks that are candidates for automation.

    • Use the Uninformative status for low-quality tasks.

    • Use the Favorites option to prioritize tasks for further analysis.
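
The trace-filtering bullet in step 3 above can be illustrated with a simple length-based outlier heuristic, sketched below. The trace lengths are made up and the 0.5x/2x thresholds are arbitrary assumptions; the Filter panel provides the actual filtering criteria.

```python
import statistics

# Hypothetical trace lengths (number of actions per trace) for one task.
trace_lengths = [12, 14, 13, 15, 14, 13, 48, 3, 14, 13]

median = statistics.median(trace_lengths)
# Flag traces far shorter or longer than the median as likely low quality.
keep = [n for n in trace_lengths if 0.5 * median <= n <= 2 * median]
flagged = [n for n in trace_lengths if n not in keep]

print("kept:", keep)        # consistent traces worth reviewing
print("flagged:", flagged)  # outliers to filter out, here 48 and 3
```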

Once you have selected tasks that are candidates for automation, we recommend submitting an automation idea by exporting the selected tasks to UiPath Automation Hub.

Rename steps

Renaming steps serves two purposes. First, it makes the steps more interpretable. Second, it allows you to distinguish between high-quality steps and noise. Since steps can occur in multiple tasks, renaming them will save you the trouble of reviewing them again in the next task. Some best practices:

  • High-quality step: rename to Application name + verb + noun. It is not possible to filter for applications, but you can filter for step names, so this convention makes the analysis easier when multiple applications are used in the task (see the sketch below).
  • Noise steps: rename to noise.
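
As a small illustration of why this convention pays off, the hypothetical snippet below shows how consistently named steps become filterable by application prefix, even though filtering by application directly is not possible.

```python
# Hypothetical helper implementing the "Application name + verb + noun" convention.
def step_name(application, verb, noun):
    return f"{application} {verb} {noun}"

steps = [
    step_name("SAP", "create", "invoice"),
    step_name("Outlook", "send", "confirmation"),
    "noise",  # an irrelevant step renamed per the convention above
]

# Filtering by step-name prefix now stands in for filtering by application.
sap_steps = [s for s in steps if s.startswith("SAP")]
print(sap_steps)  # ['SAP create invoice']
```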
