Create a job
Last updated Apr 17, 2024
Important:
The input data must meet the format required by the app template you are using to create your process app. See App Templates.
Follow these steps to create the extraction job.
- Create a new CData Sync job.
- Enter a descriptive name for the job in the Job Name field. For example, ServiceNow_to_AzureBlob or ServiceNow_to_SQLServer.
- Select the source connection created in Create a source connection from the Source list.
- Select the destination connection created in Create a destination connection from the Destination list.
- Make sure Standard is selected as the Replication Type and click +Create.
- Create a Custom Query using the replication queries for your app template.
Tip:
You can copy the replication queries from the documentation for your app template, which can be accessed from App Templates. See the example after these steps.
- Go to the Advanced tab in the Job Settings panel.
- Select the Drop Table option to prevent the data from being appended to the table.
- Select the Enable Parallel Processing checkbox and enter 8 in the Worker Pool field to improve loading speed.
- Make sure the Replicate Interval and Replicate Interval Unit are set so that the resulting period is equal to or larger than the extraction period. For example, if you want to extract data for a period of a year, the Replicate Interval and Replicate Interval Unit must cover at least one year.
- Save your changes.
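The exact replication queries differ per app template, so always copy them from the app template documentation. As an illustration only, replication queries typically take the form of one REPLICATE statement per source table; the table names below are hypothetical examples for a ServiceNow source, not the queries for any specific app template.
REPLICATE [incident];
REPLICATE [task];
REPLICATE [sys_user];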
Follow these steps to edit the post-job event to call the data ingestion API.
- Go to the Events tab in the Job Settings panel.
- Edit the Post-Job Event section and add the ingestion API call from the code block below, making sure to replace the value of "http.url" with the End of upload API. See Retrieving the credentials for the Azure blob storage, and the URL pattern after these steps.
<api:set attr="http.url" value="https://my-uipath-server.com/default/defaulttenant/processMining_/api/v4.0/apps/98dfd1d5-9e42-4f0e-9a0a-8945629f01b3/transform/unauthenticated"/>
<api:call op="httpPost" in="http"/>
- Save your changes.
- Click on JOBS in the menu bar and locate the destination job.
- Click on the Run All Queries icon to check if the job runs correctly.
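For reference, the example URL above follows the general pattern below; the server, organization, tenant, and app ID segments are placeholders that you replace with the values for your own environment and process app:
https://<server>/<organization>/<tenant>/processMining_/api/v4.0/apps/<app-id>/transform/unauthenticated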
You can add additional statements to obtain logging information on the API call. Follow these steps.
- Add the following statements between the lines previously added in the Post-Job event.
<api:set attr="http.verbosity" value="5"/>
<api:set attr="http.logfile" value="D:/mydir/cdata_log.txt"/>
See the combined example after these steps.
- Go to the Logging & History tab in the Job Settings panel.
- Select Verbose from the Logfile Verbosity list box.
- Run the job and check the log file that is created.
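With the logging statements added between the lines from the previous section, the Post-Job Event looks as follows. The URL and log file path are the example values used above; adjust both for your environment:
<api:set attr="http.url" value="https://my-uipath-server.com/default/defaulttenant/processMining_/api/v4.0/apps/98dfd1d5-9e42-4f0e-9a0a-8945629f01b3/transform/unauthenticated"/>
<api:set attr="http.verbosity" value="5"/>
<api:set attr="http.logfile" value="D:/mydir/cdata_log.txt"/>
<api:call op="httpPost" in="http"/>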