DEPRECATED: Unassisted Task Mining analysis guide
This guide serves as an introduction to working with Unassisted Task Mining analysis results after a project is created, recording of actions is completed, and an analysis is run. It is intended for Business Analysts, Project Administrators, and others who want to learn how to interpret Unassisted Task Mining results and identify tasks with the potential for optimization. This guide also provides guidance on how to handle unexpected results and noise from the analysis.
To generate results, the AI algorithm looks for occurrences of the same sequence of steps within the recorded data. It works without any context and might therefore present task candidates that do not fully capture real-life tasks from beginning to end.
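The sequence-matching idea can be sketched in a few lines of Python: count every contiguous run of screens of a given length across the recorded traces and pick the most frequent one. The trace data and screen names below are hypothetical and the actual algorithm is more sophisticated, but the sketch shows why a task candidate is essentially a recurring sequence of steps, detected without business context.

```python
from collections import Counter

def frequent_subsequences(traces, length):
    """Count every contiguous step sequence of a given length across traces."""
    counts = Counter()
    for trace in traces:
        for i in range(len(trace) - length + 1):
            counts[tuple(trace[i:i + length])] += 1
    return counts

# Hypothetical recorded traces: each trace is an ordered list of screens visited.
traces = [
    ["Inbox", "OpenInvoice", "EnterData", "Submit", "Inbox"],
    ["Inbox", "OpenInvoice", "EnterData", "Submit"],
    ["Browse", "OpenInvoice", "EnterData", "Submit"],
]

# The most frequently recurring 3-step sequence becomes a task candidate.
best, count = frequent_subsequences(traces, 3).most_common(1)[0]
# best == ("OpenInvoice", "EnterData", "Submit"), occurring in all 3 traces
```

Note how the candidate starts at "OpenInvoice" rather than at the users' true starting screens ("Inbox", "Browse"): inconsistent starts and ends drop out of the most frequent sequence, which is exactly how partial tasks arise.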
Sometimes the analysis results may include tasks and steps that are irrelevant from a business perspective. This is considered noise. To identify automation candidates, it is important for the reviewer to differentiate between high-quality tasks and noise.
The tasks identified by the AI algorithm may align with real-life tasks, but they may also differ from what is expected. Not all task candidates are suitable for automation, and the reviewer needs to be familiar with the different types of results they might encounter. The identified task candidates may:
- Not show the expected tasks
- Show unexpected tasks
- Split one real-life task into multiple tasks
- Partially capture a task without the real start and end
Unassisted Task Mining applies an algorithm to identify tasks, which may be good candidates for automation or process optimization. The AI algorithm is not guaranteed to detect anything, and it may detect a partial process or even a larger process than expected. By following the steps provided in this document, the reviewer can determine whether the identified tasks are suitable for automation. Since Unassisted Task Mining is not guaranteed to detect known tasks or to pick out every variation or iteration, it shouldn't be used purely for monitoring known tasks. Task Mining is a better fit for use cases that document or review known tasks.
Unassisted Task Mining identifies task candidates which are then ranked by their potential as automation opportunities. Some results may not be representative of a real-life end-to-end task, but the reviewer can still identify them as good automation candidates based on the steps presented in this document.
The Unassisted Task Mining algorithm looks for the most frequently occurring and consistent sequence of steps. Depending on how consistently users executed the task, a real-life task may be split up into multiple tasks in the result. The end of one task may be the start of the next one. The task might still be suitable for automation or process improvement actions. In that case, we recommend exporting these subtasks to Process description documents (.docx).
The AI algorithm identifies the most consistent sequences of steps as tasks. Depending on the variability of users executing the task, the middle of a task might be more consistent than the start and/or the end causing the algorithm to detect this subtask as a candidate instead of the full end-to-end task.
This is likely to occur when the start and/or end of a task involves highly multi-functional applications such as Outlook, Excel, etc. These applications are likely used during multiple tasks, and it is difficult for the algorithm to distinguish specific occurrences of them as the start or end of a task. In this case, we recommend focusing on the bulk of the tasks, not covering 100% of all the clicks that a user did. If the task is a suitable candidate for automation, the missing start and/or end can be added when building the automation.
Depending on the recorded data, the Task Mining algorithm might identify many tasks. Therefore, it's important for the reviewer to prioritize which candidates to analyze first to not waste time on tasks which aren't likely to be suitable automation candidates. The Analysis overview and the Tasks tabular view on the Results tab provide input for this prioritization.
The tasks in the Results are ordered by how likely they are to be suitable automation candidates. The higher a task is on the list, the more likely it is to be a good automation candidate. The task labeled 'Task 1' has been identified as the best automation candidate by the Unassisted Task Mining algorithm, considering various factors, including repeatability and complexity. However, this ranking does not indicate the overall quality of the Task Mining results; it only means that, relatively, 'Task 1' is more likely to be a better automation candidate than 'Task 10'.
When analyzing a task based on the default ranking, it can occur that this task has a high automation potential, but the end-to-end task is not entirely correct. In that case, it is recommended to check for alternative task candidates based on a different ranking. As a reviewer, you can change the standard ranking by selecting the sort icon for the column headers in the Tasks tabular view. This enables you to identify tasks with a high automation potential based on different metrics. Once you have found a representative task, you can select it and mark it as Favorite.
Focus on the higher-ranked tasks. In general, the higher-ranked tasks are of higher quality. Task candidates ranked past 10 or 20 are usually lower quality.
Investigate the metrics of the different tasks. Each task displays different metrics such as the total time spent by recording users on this task, number of recording users who have performed this task, median number of actions in the task, etc. Consider these metrics in your analysis and apply your own criteria based on the business context of your project.
For example, if a task has a much shorter Total duration and fewer Traces and Actions compared to another task, this might indicate that the task has a lower automation potential. However, note that there are no overall guidelines for how long Total duration should be that hold across all Task Mining projects. These metrics should always be interpreted in the business context of the specific project.
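Re-ranking by a different metric can be illustrated with a short Python sketch. The task names, field names, and values below are hypothetical and do not reflect the product's actual schema; the point is simply that sorting the same candidates by a different column can surface alternatives to the default ranking.

```python
# Hypothetical task metrics; field names and values are illustrative only.
tasks = [
    {"name": "Task 1", "total_duration_min": 540, "traces": 120, "median_actions": 14},
    {"name": "Task 2", "total_duration_min": 90,  "traces": 15,  "median_actions": 6},
    {"name": "Task 3", "total_duration_min": 300, "traces": 80,  "median_actions": 11},
]

# Re-rank by the number of traces instead of the default ranking, similar to
# selecting the sort icon on a column header in the Tasks tabular view.
by_traces = sorted(tasks, key=lambda t: t["traces"], reverse=True)
# by_traces[0]["name"] == "Task 1"; "Task 2" drops to the bottom
```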
Make use of the Favorites and rename functionality. When prioritizing the different tasks for a deeper analysis, it is important to keep an overview of what has been prioritized or even already analyzed. Marking tasks as Favorites and renaming tasks with a descriptive name can help to structure the analysis.
After the reviewer has prioritized the different tasks, the analysis can begin. To guide the reviewer, the following section first provides some insights to keep in mind during the analysis and then a step-by-step guide on how to navigate the analysis view.
The steps are based on screens. The task and its steps are displayed at the level of a unique user interface/screen and do not represent individual click or type actions. Multiple clicks or type actions that occur on the same screen are usually grouped into one step by the AI algorithm. Therefore, the graph does not show each individual click or type action.
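The grouping of consecutive actions on one screen into a single step can be sketched with `itertools.groupby`. The screen and action names here are hypothetical; the sketch only mirrors the idea that the task graph shows screens, not individual clicks or keystrokes.

```python
from itertools import groupby

# Hypothetical recorded actions: (screen, action) pairs in chronological order.
actions = [
    ("LoginPage", "type"), ("LoginPage", "click"),
    ("Dashboard", "click"),
    ("InvoiceForm", "type"), ("InvoiceForm", "type"), ("InvoiceForm", "click"),
]

# Collapse consecutive actions on the same screen into one step.
steps = [screen for screen, _ in groupby(actions, key=lambda a: a[0])]
# steps == ["LoginPage", "Dashboard", "InvoiceForm"]
```

Six recorded actions become three steps; this is why a task performed entirely on one screen yields only one step and cannot be identified as a task.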
A task needs at least two steps (screens) to be identified as such. For the Task Mining algorithm to identify a task, it needs to consist of a clear start and end step. Therefore, an action that is only performed on one screen will not be identified as a task.
Steps are the same throughout the different tasks. Steps are not bound to one specific task. A step that occurs in one task can also occur in another one.
The PII masking algorithm may mask text incorrectly or fail to mask PII. The Personally Identifiable Information (PII) module is an AI algorithm that detects PII in screens. The algorithm can make mistakes: some PII may not be masked, or text which is not PII may be masked. These mistakes depend on the detected text on the screen as well as the context of the words themselves. If the text is not accurately captured by the OCR or is partially cut off, it might not be masked. Additionally, if other words on the screen are different, it is possible for the same text to be identified as PII in one screen and not in another.
If a task does not make visual sense when examining traces, it is likely not a high-quality task. The algorithm can detect noisy and irrelevant tasks, especially for tasks with lower ranks in the task ranking. These tasks might be short or long. Once this becomes clear after examining a few traces, you should not waste your time trying to interpret them.
Look for the bulk of the process (80/20 rule). The tasks may not align fully with the expected real-life task but only partially cover a certain part of it. As already mentioned above, depending on the variability of actions taken by recording users executing the task, certain steps of a task might be more consistent than others, causing the algorithm to only detect certain steps of the task instead of the full end-to-end task.
The task might still be suitable for automation regardless of missed steps. These missed steps can be added when building the automation.
Scroll through the results. The traces of a task and the screenshots for the steps are sorted chronologically. Therefore, it is recommended to scroll through the lists to review the results at multiple points.
To closely analyze the discovered task candidates, follow the steps below. This will help in differentiating between automation candidates and noise.
Once you have selected tasks that are candidates for automation, we recommend submitting an idea for automation by exporting the selected tasks to Automation Hub.
Renaming steps serves two purposes. First, it makes the steps more interpretable. Second, it allows you to distinguish between high-quality steps and noise. Since steps can occur in multiple tasks, renaming them will save you the trouble of reviewing them again in the next task. Some best practices:
- High-quality step: rename to Application name + verb + noun. It is not possible to filter for applications, but you can filter for step names. When there are multiple applications used for the task, this makes the analysis easier.
- Noise steps: rename to noise.
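Once steps are renamed consistently, simple name-based filtering becomes possible even though filtering by application is not. The step names in this Python sketch are hypothetical examples of the "Application name + verb + noun" convention:

```python
# Hypothetical renamed steps following the "Application name + verb + noun"
# convention, with irrelevant steps renamed to "noise".
steps = [
    "SAP create order",
    "noise",
    "Excel copy amount",
    "SAP submit order",
    "noise",
]

# Drop noise steps, or narrow down to one application via the name prefix.
relevant = [s for s in steps if s != "noise"]
sap_steps = [s for s in steps if s.startswith("SAP")]
# relevant keeps 3 steps; sap_steps keeps the 2 SAP steps
```

Because the application name leads the step name, filtering on step names effectively substitutes for filtering by application when a task spans multiple applications.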