
Configuring DataBridgeAgent

Introduction

This page describes how to configure DataBridgeAgent to load data for a process app in Process Mining.

Configuring DataBridgeAgent

Follow these steps to configure DataBridgeAgent.

  1. Download DataBridgeAgent. See Loading data using DataBridgeAgent.
  2. On the server, create a folder for DataBridgeAgent. For instance, D:\processmining\P2P_data\.
Note: In the remainder of this guide, we will refer to this directory as <EXTRACTORDIR>.
  3. Place the installation package in the <EXTRACTORDIR> folder.
    • Right-click the installation package.
    • Select Extract All….
  4. Right-click the file <EXTRACTORDIR>\datarun.json and select Open.
  5. Enter a value for the following settings, as shown in the example after this list:

    • azureURL
    • connectorWorkspace
    • connectorModuleCode
    • Input type
    • Use credential store
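
For illustration, below is a minimal sketch of these settings in datarun.json. The key names simply mirror the parameter names described on this page and all values are placeholders; check your own datarun.json for the exact keys and structure it uses.

```json
{
  "azureURL": "https://<account>.blob.core.windows.net/<container>?<sas-token>",
  "endOfUploadApiUrl": "https://<your-process-mining-host>/<end-of-upload-endpoint>",
  "connectorWorkspace": "<workspace-name>",
  "connectorModuleCode": "<module-code>",
  "Input type": "CSV",
  "Use credential store": false
}
```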

Generic parameters

Below is an overview of the generic parameters for DataBridgeAgent.

| Parameter | Description |
| --- | --- |
| azureURL | The SAS URL of the Azure blob storage to which the extracted data is uploaded. See Retrieving the credentials for the Azure blob storage. |
| endOfUploadApiUrl | The API that is called to start data processing in Process Mining once all data has been uploaded. Note: endOfUploadApiUrl is only required if you upload the data files using DataBridgeAgent. If you upload the files using an extractor, you configure the end-of-upload URL in the extraction job. |
| connectorWorkspace | The name of the workspace of the connector used to load the data and to create the dataset. |
| connectorModuleCode | The module code of the connector used to load the data and to create the dataset. |
| Input type | Can be either SAP (see SAP parameters), CSV (see CSV parameters), or ODBC (see ODBC parameters). Note: Depending on the preferred input type, you must enter the settings in the corresponding section. |
| Use credential store | Indicates whether a credential store is used for password storage. Note: If set to true, you specify the password identifier in the SAP Password or ODBC Password field. |
| Reporting currency | The currency in which price-related values are displayed. |
| Exchange rate type | The exchange rate type that is used for currency conversion. |
| Language | The language in which data is extracted from the source system. |
| Extraction start date | The start date of the extraction period. Note: If only a subset of the data is needed, it is recommended to limit the extraction period, as this may improve loading times. |
| Extraction end date | The end date of the extraction period. Note: If only a subset of the data is needed, it is recommended to limit the extraction period, as this may improve loading times. |

SAP parameters

Below is an overview of the parameters that can be used for SAP datasources.

| Parameter | Description |
| --- | --- |
| SAP Host | The hostname or IP address of the SAP application server. |
| SAP SystemNumber | The two-digit number between 00 and 99 that identifies the designated instance. |
| SAP Username | The username of the account that is used to log in to the SAP instance. |
| SAP Password | The password of the user above. Note: If you use a credential store, enter the password identifier from the credential store instead of the password. |
| SAP Client | The client that is being used. |
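
As an illustration only, an SAP input could be configured with values such as the ones below. The key names mirror the parameter names in the table above and all values are placeholders; your actual datarun.json may name or nest them differently.

```json
{
  "SAP Host": "sapserver.example.com",
  "SAP SystemNumber": "00",
  "SAP Username": "EXTRACT_USER",
  "SAP Password": "<password-or-credential-store-identifier>",
  "SAP Client": "100"
}
```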

CSV parameters

Below is an overview of the parameters that can be used for CSV datasources.

| Parameter | Description |
| --- | --- |
| CSV Data path | Data path in the Server Data that points to the location where the .csv files are stored. For example, P2P/ if all files can be found in the folder named P2P. |
| CSV Suffix | A regular expression containing the file extension of the files to read. It may contain a suffix of up to two digits that is added to the name of the table. |
| CSV Delimiter | The delimiter character that is used to separate the fields. |
| CSV Quotation character | The quote character that is used to identify fields that are wrapped in quotes. |
| CSV Has header | Indicates whether the first line of the .csv file is a header line. |
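
As a sketch, assuming the key names mirror the table above, a CSV configuration could look like the fragment below. The suffix value "\\.csv" is the JSON encoding of the regular expression \.csv, which matches the plain .csv extension; all values are placeholders.

```json
{
  "CSV Data path": "P2P/",
  "CSV Suffix": "\\.csv",
  "CSV Delimiter": ",",
  "CSV Quotation character": "\"",
  "CSV Has header": true
}
```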

ODBC parameters

Below is an overview of the parameters that can be used for ODBC datasources.

| Parameter | Description |
| --- | --- |
| ODBC Driver | The name of the ODBC driver to use for this connection. |
| ODBC Username | The username used to connect to the external datasource. |
| ODBC Password | The password used to connect to the external datasource. Note: If you use a credential store, enter the password identifier from the credential store instead of the password. |
| ODBC Connection parameters | Any other parameters, which are passed as specified to the ODBC driver. Use the format param1=value1 (;param2=value2). |
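
For example, a hypothetical ODBC configuration could look like the fragment below. The key names mirror the table above; the driver name and connection parameters are placeholders for whatever your datasource requires.

```json
{
  "ODBC Driver": "PostgreSQL Unicode(x64)",
  "ODBC Username": "extract_user",
  "ODBC Password": "<password-or-credential-store-identifier>",
  "ODBC Connection parameters": "Server=dbserver.example.com;Port=5432;Database=p2p"
}
```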

Creating the dataset and uploading it for use in Process Mining

To load the data and upload it to Azure for use in Process Mining, you must run the <EXTRACTORDIR>\datarun.bat file.

The time this task takes depends heavily on the volume of data loaded.

The output is uploaded to the blob storage, which triggers processing in Snowflake.

Starting the file manually

Follow this step to start the data run.

  1. Double-click the <EXTRACTORDIR>\datarun.bat file to start the data run.
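
Alternatively, you can start the run from a Command Prompt, which keeps the console output visible after the run finishes. For example, assuming the example folder used earlier as <EXTRACTORDIR>:

```
cd /d D:\processmining\P2P_data
datarun.bat
```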

Scheduling a task for the data run

Instead of running the file manually, you can use Windows Task Scheduler to schedule a task that runs the batch script for automated data refreshes.

Note: It is recommended to select the local SYSTEM user account for running the task and to select the Run with highest privileges option in the Create Task window when creating the task in Windows Task Scheduler.
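
You can also create such a task from an elevated Command Prompt with the standard schtasks tool instead of the Create Task window. Below is a sketch, assuming a daily run at 01:00 and the example <EXTRACTORDIR> used earlier; /RU SYSTEM runs the task under the local SYSTEM account and /RL HIGHEST runs it with the highest privileges, matching the recommendation above.

```
schtasks /Create /TN "DataBridgeAgent data run" /TR "D:\processmining\P2P_data\datarun.bat" /SC DAILY /ST 01:00 /RU SYSTEM /RL HIGHEST
```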


Important: If you add an interval for which there is no data available, the dashboards will show an error message.

Troubleshooting

The data run also creates a log file that can help in case of unexpected results or if the data upload with DataBridgeAgent fails. The file <EXTRACTORDIR>\datarun.txt contains the logs of the last data run.
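
To quickly scan that log for problems, you can, for example, filter it for error lines from a Command Prompt with the standard findstr tool (adjust the path to your <EXTRACTORDIR>):

```
findstr /i "error" "D:\processmining\P2P_data\datarun.txt"
```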
