Using DataBridgeAgent With SAP Connector for Purchase-to-Pay Discovery Accelerator
DataBridgeAgent is only needed if you are migrating from an on-premises stand-alone Process Mining version and want to keep using your .mvp connector. Otherwise, use CData Sync, Theobald Xtract Universal for SAP, or files to upload data.
The datarun.json file configures DataBridgeAgent for use with the SAP Connector for the Purchase-to-Pay Discovery Accelerator. Below is an overview of the generic settings.
| Setting | Description |
| --- | --- |
| SAS URL | The SAS URL of the Azure blob storage to which the extracted data needs to be uploaded. |
| endOfUploadApiUrl | The API that is called to start data processing in Process Mining, once all data has been uploaded. endOfUploadApiUrl is only required if you want to upload the data files using DataBridgeAgent. If you want to upload the files using an extractor, you configure the end-of-upload URL in the extraction job. If you want to upload the data files using the Upload data option for a process app in Process Mining, you can leave this setting empty. |
| Use credential store | Indicates whether a credential store is used for passwords. Set this to true if the Password parameter contains a password identifier from a credential store. |
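Taken together, the generic settings can be sketched as a datarun.json fragment. This is an illustration only: apart from endOfUploadApiUrl, the key names and values below are assumptions, not the authoritative schema.

```json
{
  "azureURL": "https://myaccount.blob.core.windows.net/uploads?sv=2021-08-06&sig=REDACTED",
  "endOfUploadApiUrl": "https://processmining.example.com/api/endofupload",
  "useCredentialStore": false
}
```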
Below is an overview of the parameters that can be used for SAP datasources.
| Parameter | Description |
| --- | --- |
| Host | The hostname or IP address of the SAP application server. |
| Instance number | The two-digit number between 00 and 99 that identifies the designated instance. |
| Username | The username of the account that is used to log in to the SAP instance. |
| Password | The password of the account, or the password identifier from the credential store: `"password"` \| `"PasswordIdentifier"`. If the credential store is used, the Use credential store generic setting must be set to true. |
| Client | The client that is used (000-999). |
| Exchange rate type | The exchange rate type that is used for currency conversion (KURST). |
| Language | The language that is used for descriptions extracted from the data (E = English). |
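As an illustration, an SAP data source entry in datarun.json might look like the sketch below. The key names are assumptions based on the parameter descriptions above, not the authoritative schema; KURST type `M` (standard rate) is used only as an example exchange rate type.

```json
{
  "host": "sap.example.com",
  "instanceNumber": "00",
  "username": "EXTRACT_USER",
  "password": "PasswordIdentifier",
  "client": "100",
  "exchangeRateType": "M",
  "language": "E"
}
```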
When you use .csv files, always make sure that all the required input data of the SAP Purchase-to-Pay Connector is available.

Always make sure that:

- a separate .csv file is available for each table;
- the file names of the .csv files are the same as the names of the input tables of the connector;
- all the fields specified in the Fields used in Process Mining column of a table are present in the .csv file;
- the fields in the .csv files have the same names as the field names specified in the Fields used in Process Mining column.

For example, the file for the EBAN input table must be named EBAN.CSV. The other settings can be defined in the CSV parameters of the DataBridgeAgent.
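The file and field requirements above can be checked automatically before an upload. The sketch below uses Python for illustration only (DataBridgeAgent itself performs no such check in Python), and the table-to-fields mapping is a hypothetical placeholder: substitute the actual input tables and their Fields used in Process Mining columns from the connector documentation.

```python
import csv
from pathlib import Path

# Placeholder subset of the connector's input tables and their
# "Fields used in Process Mining" columns -- consult the actual
# connector documentation for the full list.
REQUIRED_TABLES = {
    "EBAN": ["MANDT", "BANFN", "BNFPO"],
}

def validate_input_files(data_dir):
    """Check that each input table has a .csv file whose header row
    contains every required field. Returns a list of problems found."""
    problems = []
    for table, fields in REQUIRED_TABLES.items():
        path = Path(data_dir) / f"{table}.csv"
        if not path.is_file():
            problems.append(f"missing file: {path.name}")
            continue
        with open(path, newline="") as f:
            header = next(csv.reader(f), [])
        for field in fields:
            if field not in header:
                problems.append(f"{path.name}: missing field {field}")
    return problems
```

An empty result means every placeholder table has a matching file with the expected header fields.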
Below is an overview of the parameters that can be used for CSV datasources.

| Parameter | Description |
| --- | --- |
| CSV Data path | Data path in the Server Data that points to the place where the .csv files are stored. For example, `P2P/` if all files can be found in the folder named P2P. |
| CSV Suffix | A regular expression containing the file extension of the files to read. May contain a suffix of up to 2 digits that is added to the name of the table. |
| CSV Delimiter | The delimiter character that is used to separate the fields. |
| CSV Quotation character | The quote character that is used to identify fields that are wrapped in quotes. |
| CSV Has header | Indicates whether the first line of the .csv file is a header line. |
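To illustrate how the delimiter, quotation character, and header settings interact, here is a small sketch using Python's csv module. This is for illustration only; DataBridgeAgent performs the actual parsing, and the sample data is made up.

```python
import csv
import io

# Sample input: comma delimiter, double-quote quotation character,
# and a header line, matching the CSV parameters described above.
raw = 'BANFN,TXZ01\n0010000001,"Bolts, M8"\n'

reader = csv.reader(io.StringIO(raw), delimiter=",", quotechar='"')
rows = list(reader)

# "Has header" semantics: treat the first row as field names,
# the remaining rows as data records.
header, records = rows[0], rows[1:]
```

Because `Bolts, M8` is wrapped in the quotation character, the embedded delimiter does not split the field into two; with the header option enabled, the first line supplies the field names that must match the connector's input fields.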