Running a SQL connector
This page contains instructions on how to run a SQL connector using scripts. For production, run.ps1 and load.ps1 need to be run on the same server as the Process Mining installation. The extraction_cdata.ps1 and transform.ps1 scripts can be run from other locations as well.
It is assumed that:
- the development tools described in Setting up a local test environment are installed;
- the SQL connector is set up as described in Setting up a SQL connector.
Note: The scripts/ directory of the connector contains a set of standard scripts to run and schedule data extraction, transformation, and loading.
Follow these steps to run a connector, extract, transform, and load the data.
| Step | Action |
| --- | --- |
| 1 | Start Windows PowerShell as admin. |
| 2 | Go to the scripts/ directory. |
| 3 | Execute run.ps1. |
Follow these steps to only execute the extraction.
| Step | Action |
| --- | --- |
| 1 | Start Windows PowerShell. |
| 2 | Go to the scripts/ directory. |
| 3 | Execute extraction_cdata.ps1. |
Note: depending on the extraction method used, the name of the extraction_ script will be different.
Follow these steps to only execute the transformation steps.
| Step | Action |
| --- | --- |
| 1 | Start Windows PowerShell. |
| 2 | Go to the scripts/ directory. |
| 3 | Execute transform.ps1. |
Each transformation step can also be run individually.
A log file, LogFile.log, is created when running the scripts. This log file contains all stages of job execution and the associated time stamps. The log file also returns a minimal set of error codes that can give further guidance. See also cache_generation_output.log, which is generated in the directory in which your load script is located.
For more details on the CData Sync job executions, go to your CData Sync instance and check the Logging & History tab of your job. See the illustration below.
If needed, set the logging verbosity of your CData Sync job to Verbose and run the extraction script extraction_cdata.ps1 again.
Below is an overview of the return codes of a CData Sync job.
| Code | Log description |
| --- | --- |
| 0 | Extraction SUCCESSFUL for job. |
| -1 | Extraction FAILED for job. |
| -2 | Failed to perform the extraction. Check your settings or look into the Logging & History tab for your job. |
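As a sketch, a wrapper script could interpret these extraction return codes as follows (the mapping mirrors the table above; the function name is illustrative):

```python
# Return codes of a CData Sync extraction job, as documented above.
CDATA_RETURN_CODES = {
    0: "Extraction SUCCESSFUL for job.",
    -1: "Extraction FAILED for job.",
    -2: ("Failed to perform the extraction. Check your settings or look "
         "into the Logging & History tab for your job."),
}

def describe_extraction_result(code):
    """Map a CData Sync job return code to its log description."""
    return CDATA_RETURN_CODES.get(code, "Unknown return code: %d" % code)
```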
The log file also returns a set of error codes of the transformation script. Below is an overview of the error codes.
| Code | Log description |
| --- | --- |
| -1 | General dbt run or dbt test failure. This means that there is an issue with the current setup or the configuration. Check the LogFile.log for more details. |
| 0* | The dbt invocation completed without error. |
| 1* | The dbt invocation completed with at least one handled error (e.g. a model syntax error, bad permissions, etc.). The run was completed, but some models may have been skipped. LogFile.log contains extra information stating whether the error occurred in the dbt run or in the dbt test phase. |
| 2* | The dbt invocation completed with an unhandled error (e.g. a network interruption). |
* 0, 1, and 2 are dbt-specific return codes. See the official dbt documentation on exit codes.
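These codes can be classified the same way in a wrapper. The sketch below mirrors the table above; treating code 2 as a retry candidate is an assumption about how a caller might react, not part of the standard scripts:

```python
def transformation_outcome(exit_code):
    """Classify a transformation exit code as reported in LogFile.log.

    0 -> success; 1 -> handled error (LogFile.log states whether it was
    the dbt run or the dbt test phase); 2 -> unhandled error (e.g. a
    network interruption), which may be transient and worth retrying;
    anything else (-1) -> general setup or configuration failure.
    """
    if exit_code == 0:
        return "success"
    if exit_code == 1:
        return "handled-error"   # e.g. model syntax error, bad permissions
    if exit_code == 2:
        return "retry"           # unhandled error, possibly transient
    return "setup-error"         # -1: general dbt run/test failure
```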
While the transformations are running, the file response.txt in the scripts/ directory can be inspected. This file contains the real-time responses from dbt. Once dbt test or dbt run has been completed, the information is appended to the LogFile.log and the temporary file is deleted.
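The append-and-delete behavior described above can be sketched as follows. This is an illustration of what the standard scripts do with the two files, not their actual implementation:

```python
import os

def flush_response_to_log(response_path="response.txt",
                          log_path="LogFile.log"):
    """Append the temporary dbt output to the main log file,
    then delete the temporary file."""
    with open(response_path) as response, open(log_path, "a") as log:
        log.write(response.read())
    os.remove(response_path)
```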
It is also possible to schedule data extractions at a regular interval. See Scheduling data extraction.