Data transformations
When you create a process app from an app template, you want to make sure that the data used in the process app reflects your business process correctly. Data transformations enable you to customize the data used for Process Mining.
You can either customize the transformations from within Process Mining using Data transformations, or edit them locally on your desktop. See Editing data transformations in a local environment.
This page describes how to customize data transformations from within Process Mining. Data transformations are integrated in the dashboard editor. See Working with the dashboard editor.
Data transformations enable you to:
- add new fields for group by and filters. For example, Quality check type.
- add events. For example, Invoice quality check done.
- add tags. For example, Failed to meet SLA, Four eyes violation.
- add business logic for calculating KPIs. For example, Case_has_maverick_buying.
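As an illustration, a minimal sketch of such a customization in a transformation .sql file is shown below. The Cases_base model and the Purchase_order_ID field are hypothetical names; the actual tables and fields depend on your app template.

```sql
-- Hypothetical sketch: adding a KPI field to a cases model.
-- "Cases_base" and "Purchase_order_ID" are illustrative names.
select
    Cases_base."Case_ID",
    -- Flag cases that were not created from a purchase order.
    case
        when Cases_base."Purchase_order_ID" is null
            then true
        else false
    end as "Case_has_maverick_buying"
from {{ ref('Cases_base') }} as Cases_base
```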
- Select the Data transformations button in the upper right corner of the dashboard editor to open Data transformations.
If you open the in-line editor for the first time, it can take some time to load.
The in-line data transformations editor is displayed.
The input data panel shows the input tables that have been loaded into the process app as raw data. Select a table to see the fields and the data contents in the data preview. The data preview enables you to check if the input data looks as expected.
The preview shows 1000 records of data. If you want to filter for specific data, you can create a temporary debug query, as shown in the sketch below.
The Preview panel shows the data of the last data run in which this table was recalculated. If you have made recent changes, then start a new data run to view its results, see Editing and testing data transformations.
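A temporary debug query could look like the minimal sketch below, assuming a hypothetical Cases_input table and Case ID value; remove the file again once you are done debugging.

```sql
-- Hypothetical temporary debug model, for example Debug_cases_input.sql.
-- Filters the input data so the Preview panel only shows the records of interest.
-- "Cases_input" and the Case ID value are illustrative names.
select *
from {{ ref('Cases_input') }}
where "Case_ID" = 'SO-0001'
```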
- Select Transformations to view the structure of the transformations and to display the .sql files.
See Structure of transformations for more information on the structure of the transformations.
Each .sql file defines a new table in the data model. If you select a .sql file in the Transformations panel, the SQL query is displayed in the Transformations panel and a preview of the data of the file you are editing is displayed in the Preview panel.
When editing a query, you can see a preview of the data of the last time the query was run in the Preview panel.
If there are any unsaved changes in the transformations, the Save button is enabled. Select Save to save the changes. The status of the transformation will be set to Up to date.
Always make sure your data model adheres to the requirements. See Data model requirements.
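As an illustration, a minimal .sql file that defines a new table could look like the sketch below; the Quality_checks_input model and its fields are hypothetical names.

```sql
-- Hypothetical minimal model: a .sql file like this defines one table in the data model.
-- "Quality_checks_input" and its fields are illustrative names.
select
    Quality_checks_input."Case_ID",
    Quality_checks_input."Quality_check_type",
    Quality_checks_input."Quality_check_result"
from {{ ref('Quality_checks_input') }} as Quality_checks_input
```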
- Select Data model to view the data model of your process app.
See Editing and testing data transformations for more information on how to change the data model.
- Select + Add table. The Add table dialog is displayed.
- Select the table that defines the new output table.
- Select the Primary key for the new table, and select Done.
- Select the table that you want to relate to another table.
The Edit table panel is displayed.
- In the Edit table panel, select + Add new to create a new relation.
- Select the field that you want to use in this table from the Key list.
- Select the Table you want to connect to and select the field that you want to use to connect from the Key list.
- Select Apply.
- Select the table for which you want to change the key.
The Edit table panel is displayed.
- In the Edit table panel, locate the relation for which you want to change the key.
- Select the new field that is to be used as the Key to join the tables.
- Select Apply.
- Select the table for which you want to delete an outgoing relation.
The Edit table panel is displayed.
- In the Edit table panel, locate the relation you want to delete and select Delete relation.
- Select Apply.
- Select the table that you want to delete in the data model editor.
The Edit table panel is displayed.
- Select Delete table.
A confirmation message is displayed.
- Select Delete to continue.
The table and the relations are deleted from the data model.
- Select Save to save the data model.
- Select Apply to dashboards to run the transformations and make the new table available for use in dashboards.
Note: This may take several minutes. If the run finishes successfully, the changes to the data model will show up in the Data Manager.
The Save option is only enabled after you have made changes to the data model.
| Log level | Description | Data run status |
|---|---|---|
| Information | An Information message contains helpful information on the progress of the data run. | Success |
| Warning | A Warning refers to a potential problem in your data that might affect what will be displayed on a chart in the published process app. It is advised to resolve any warnings to prevent potential future problems. | Success |
| Error | An Error refers to a mistake in your data that prevents the process app from loading the data or running the transformations. You must resolve all errors to enable a successful data run. | Failed |
You can use the Filter menu to change the log level. See the illustration below for an example.
You can select the icon to copy the contents of the log file and paste it into, for example, a Notepad file that you can save on your computer. This enables you to view the messages while working on solving the issues causing errors or warnings.
This also enables you to share the contents of the log file, for example if you need support.
Refer to the pages listed below for more information on editing data transformations.
For versions before 2023.10.3, Run all is limited to 10 minutes. Starting with 2023.10.4, Run all is limited to 1 hour. Therefore, it is strongly recommended to use a small dataset for the development and testing of data transformations, to make sure your transformations do not exceed this time limit. See also: Process Mining portal.
For example, suppose cases_input.sql has been modified. The Run queries command would run cases_input.sql and cases.sql, which uses the results of cases_input.sql.
The resulting data will only be available in the Data Transformations editor, not in the dashboard editor.
There are two types of runs:
- Run queries: starts a data run, recalculating all modified queries and their dependencies.
- Run file: starts a run to recalculate only the currently selected SQL file and its parent queries. This option enables you to test and debug customizations to specific SQL files.
You can select Cancel run at any time to cancel the transformations run.
Run queries is typically much faster than Apply to dashboards which starts a full data run.
Run queries does not affect the data in the published process app.
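To illustrate how Run queries determines which queries to recalculate, consider the hypothetical cases.sql sketch below; the field names are illustrative. The ref() call is what creates the dependency on cases_input.sql, so modifying cases_input.sql causes Run queries to recalculate both models.

```sql
-- Hypothetical contents of cases.sql; field names are illustrative.
-- The ref() call makes cases.sql depend on cases_input.sql, so a change to
-- cases_input.sql causes Run queries to recalculate both models.
select
    cases_input."Case_ID",
    cases_input."Case_status"
from {{ ref('cases_input') }} as cases_input
```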
You can create new folders and add new files in your transformations.
Naming conventions
- For names of files, folders, fields, and tables:
    - you can only use lowercase (a-z) characters, uppercase (A-Z) characters, and numbers (0-9);
    - you cannot use the special characters `|*:?"<>\/`.
- For file names, you can only use `_:.` as special characters.
- A file name cannot start with an _ (underscore).
Creating a new folder or file from the transformations menu
Follow these steps to create a new folder or file.

| Step | Action |
|---|---|
| 1 | Navigate to the location in the transformations where you want to add a new file or folder. |
| 2 | Go to the Transformations panel and select the menu icon to open the transformations menu. |
| 3 | Select the applicable option from the menu. |
| 4 | Enter a descriptive name for the new folder or file and press Enter. |
The new folder or file is created in the selected folder.
Creating a new folder or file from the context menu
Follow these steps to create a new folder or file from the context menu.

| Step | Action |
|---|---|
| 1 | Right-click the folder in the transformations where you want to add a new file or folder. |
| 2 | Select the applicable option from the menu. |
| 3 | Enter a descriptive name for the new folder or file and press Enter. |
Unsaved changes in files
If there are unsaved changes in a .sql file, this is indicated in the Transformations folder structure.
In the above example, there are unsaved changes in the Tags.sql file, which is in the models -> 5_business_logic folder.
Follow these steps to rename a folder or file.

| Step | Action |
|---|---|
| 1 | Right-click the folder or file you want to rename to open the context menu. |
| 2 | Select Rename. |
| 3 | Edit the name as desired and press Enter. |
The folder or file is renamed.
Follow these steps to delete a folder or file.

| Step | Action |
|---|---|
| 1 | Right-click the folder or file you want to delete to open the context menu. |
| 2 | Select Delete. A confirmation message is displayed. |
| 3 | Select Delete to confirm the deletion. |
You can comment out parts of your code by placing them between {# ... #}. This also enables you to comment out macros.
```sql
{# {{ pm_utils.optional(ref('Cases_base'), '"Case_status"') }} as "Case_status", #}

{# case
    when {{ pm_utils.optional(ref('Cases_base'), '"Case_value"', 'double') }} >= 1000000
        then {{ pm_utils.as_varchar('>= 1M') }}
    when {{ pm_utils.optional(ref('Cases_base'), '"Case_value"', 'double') }} >= 0
        then {{ pm_utils.as_varchar('0 - 1M') }}
    when {{ pm_utils.optional(ref('Cases_base'), '"Case_value"', 'double') }} is not null
        then {{ pm_utils.as_varchar('< 0') }}
end as "Case_value_group", #}
```
In the Data Transformations editor, you can find and replace texts inside a single file.
Find text
- Click on a random place in the code editor, or select the text you want to search for other occurrences.
- Press Ctrl+F.
- Enter the text you want to search for in the Find field.
All occurrences of the text in the current file are highlighted.
- Press Enter to skip to the next occurrence.
Find and replace
To find and replace text in the data transformations editor:
- Click on a random place in the code editor, or select the text you want to search for other occurrences.
- Press Ctrl+H.
- Enter the text you want to search for in the Find field and enter the new text in the Replace field.
All occurrences of the text in the current file are highlighted.
- Press Enter to replace the text and skip to the next occurrence.
To replace all occurrences of a text at once:
- Click on the code editor.
- Press Ctrl+H.
- Enter the text you want to search for in the Find field and enter the new text in the Replace field.
All occurrences of the text in the current file are highlighted.
- Press Ctrl+Alt+Enter.
Find/replace in multiple files
If you want to find/replace text in multiple files, you can only do this in an external editor.
Follow these steps to find/replace text in multiple files:
- Export the transformations from your process app.
Note: If you have pending changes in the Data Transformations editor, first use Apply to dashboards, and export the transformations after the data run is completed.
- Unzip the transformations.
- Open the transformations in your favorite code editor, for example, Notepad++ or Visual Studio Code.
- Use the functionality of those tools to find or replace text in multiple files.
- Save the files.
- Add the transformation files to a .zip file.
- Import the transformations in your process app.
When you are asked which data to use for a data run, select Cancel if you want to continue editing transformations before doing a new data run.
The configuration of the dbt project that contains the transformations is defined in the dbt_project.yml file. You can edit the vars section, if required. The other parts of the dbt_project.yml should not be changed, to ensure the transformations keep working properly.
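As an illustration, a vars section could look like the sketch below; the variable names are hypothetical and depend on what your transformations read with the var() function.

```yaml
# Hypothetical vars section in dbt_project.yml; variable names are illustrative.
vars:
  # A value like this could be read in a .sql file with {{ var('currency_code') }}.
  currency_code: "EUR"
  include_archived_cases: false
```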
For detailed information on dbt, see the official dbt documentation.
To make the changes available in the dashboards, you must execute a full data run to load the data and run the transformations. Select the Apply to dashboards button to start a data run.
Depending on the size of your dataset, this may take some time.
After a successful data run, the new transformations become available in the dashboard editor and can be used to adjust the dashboards.
If you have a published version of the process app, business users will instantly see the new data. If you want to make sure that your published app keeps running properly while editing transformations, it is strongly advised to use a development app.
Follow these steps.
- Clone your app to create a development app.
- Export the transformations from the development app.
- Import the transformations in your published process app.
If your event log is split over multiple event log files, you can merge the event logs into a single event log in Data transformations.
To merge multiple event logs, the Case_ID must refer to the same type of object in all files. For example, all Case IDs point to Sales Order IDs, such that Case_ID refers to the same table for all event logs.
- Create a new app, based on the Event log app template. See App templates.
- Use the Upload data option in the Selecting the Data Source step, and select the event log files you want to merge.
- Continue the Create new app wizard to create the app and to transform the data.
The data run will fail and the Application status will show Run failed.
- Select View log to view the log file.
An error message is displayed indicating that the expected Event_log_raw.sql file is not found.
- Go to the Data transformations editor.
- For each input file:
    - Create a .sql file with the same contents as the already available Event_log_input.sql (a sketch of such a file is shown after these steps).
    - Adjust the file to select the required input fields and type cast them correctly.
    - Add the names of the new source tables to sources.yml.
    Now all new input files can be combined into a single event log.
    Attention: Make sure that the input files all have the exact same set of fields.
- Select the Event_log_input.sql file and replace the contents with:

```sql
-- The following code merges 3 input tables.
select * from {{ ref('Event_log_input_1') }}
union all
select * from {{ ref('Event_log_input_2') }}
union all
select * from {{ ref('Event_log_input_3') }}
```

Note: Adjust the code to match the names of the input tables needed.
- Select Apply to dashboards to run the data transformations and make the resulting table available for use in the dashboards.
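As an illustration, one of the per-file input models could look like the sketch below. The field names, casts, and source name are hypothetical; keep the set of fields identical in every Event_log_input_<n>.sql file so the union above works, and reference the raw data the same way the existing Event_log_input.sql does.

```sql
-- Hypothetical contents of Event_log_input_1.sql; names and casts are illustrative.
-- Each uploaded event log file gets its own input model with the same set of fields.
-- The source name must match the entry you added to sources.yml.
select
    cast(Event_log_raw_1."Case_ID" as varchar) as "Case_ID",
    cast(Event_log_raw_1."Activity" as varchar) as "Activity",
    cast(Event_log_raw_1."Event_end" as timestamp) as "Event_end"
from {{ source('my_source', 'Event_log_raw_1') }} as Event_log_raw_1
```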