Setting up a SQL connector
This page describes how to create a new SQL connector and connect it to your source systems.
It is assumed that the development tools described in Setting up a local test environment are installed.
If you want to build a new SQL connector, you can download the template connector, which contains the basic setup for a SQL connector.
First, the extraction settings need to be configured to point to the desired source system and staging database. These configuration steps must be done once for each developer environment, and once for the production environment.
Each connector comes with a set of specific extraction methods. One of these extraction methods needs to be configured to extract the source data.
| Extraction method | Description |
|---|---|
| Load-from-file | • Set up a CData job for the Load-from-file extractor. See Extractors: Load from file. • Use a custom query specific to this connector, which is provided in extractors/load-from-file/instructions.md. |
| Load-from-source | To load input data from a source system, use the specific instructions located in extractors/load-from-source/instructions.md. The load-from-source extractor is only available in connectors made for specific source systems, for example a Purchase-to-pay for SAP connector. |
Make sure to set the appropriate filters for your extractor, to load only the data that is necessary for your analysis. For example, limit the data to the last six months, or to a specific department that is of interest. For more information, see the instructions.md of the extractor.
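For illustration, if the staging database is Microsoft SQL Server and your extractor uses a custom query, such a filter could look like the following sketch. The table and column names are hypothetical and not part of the template connector; take the actual query from the extractor's instructions.md and only add the filter condition.

```sql
-- Illustrative only: limit the extraction to events from the last six months.
-- "Purchase_order_events" and "Event_timestamp" are hypothetical names;
-- use the tables and columns from your own extraction query.
SELECT *
FROM Purchase_order_events
WHERE Event_timestamp >= DATEADD(MONTH, -6, GETDATE());
```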
The extract_cdata.ps1 script must be pointed to the correct job. Follow these steps to configure the extraction script.
| Step | Action |
|---|---|
| 1 | In the scripts/ directory, rename config_template.json to config.json. config.json is ignored by Git, so that development and production environments can use different extraction configurations. |
| 2 | In config.json, change the text JOB_NAME_CREATED_IN_CDATA to the name of the CData job that was just created. |
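As an alternative to editing the files by hand, both steps can be done from a PowerShell prompt in the scripts/ directory, as in the sketch below. The job name MyConnector_LoadFromFile is a made-up example; use the name of the CData job you just created.

```powershell
# Step 1: create config.json from the template (config.json is ignored by Git).
Rename-Item -Path .\config_template.json -NewName config.json

# Step 2: replace the placeholder with the name of your CData job.
# "MyConnector_LoadFromFile" is an example name, not part of the template connector.
(Get-Content .\config.json) -replace 'JOB_NAME_CREATED_IN_CDATA', 'MyConnector_LoadFromFile' |
    Set-Content .\config.json
```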
For more information, see the instructions.md.
Your personal profiles.yml needs to have a reference to your staging database for this connector. These configuration steps need to be done once for each developer environment, and once for the production environment. Follow these steps to configure the transformation.
| Step | Action |
|---|---|
| 1 | Copy the contents of transformations/profiles.yml of your connector. |
| 2 | Paste the contents into your personal profiles.yml. The personal profiles.yml should be located outside of the connector directory. |
| 3 | Change the contents to point to your staging database. |
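For illustration, assuming the transformations are run with dbt against a SQL Server staging database, the resulting entry in your personal profiles.yml could look like the sketch below. Keep the profile name and keys exactly as they appear in transformations/profiles.yml of your connector; the connector name, server, database, and schema shown here are placeholders.

```yaml
# Sketch of a personal profiles.yml entry; all values are placeholders.
my_connector:                 # keep the profile name used by your connector
  target: dev
  outputs:
    dev:
      type: sqlserver         # keep the adapter type from transformations/profiles.yml
      driver: 'ODBC Driver 17 for SQL Server'
      server: my-staging-server.example.com   # your staging database server
      port: 1433
      database: StagingDB     # your staging database
      schema: my_connector
      windows_login: true     # or user/password, depending on your setup
```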
To perform cache generation by using the load.ps1 script, it must be pointed to the Process Mining environment on which the corresponding app release is active. Follow these steps to configure the load script.
Step 1 can be omitted if it was already performed during configuration of the extraction.
| Step | Action |
|---|---|
| 1 | In the scripts/ directory, rename config_template.json to config.json. config.json is ignored by Git, so that development and production environments can use different configurations. |
| 2 | In config.json, change the text PROCESS_MINING_ENVIRONMENT to the environment on which the release is active. |
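As with the extraction configuration, the placeholder can be replaced either in a text editor or from a PowerShell prompt in the scripts/ directory, as sketched below. The environment name my-environment is a made-up example; use the Process Mining environment on which your app release is active.

```powershell
# Illustrative only: point the load script at the target Process Mining environment.
# "my-environment" is a placeholder; use your actual environment name.
(Get-Content .\config.json) -replace 'PROCESS_MINING_ENVIRONMENT', 'my-environment' |
    Set-Content .\config.json
```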
When the SQL connector is set up properly, it can be run. See Running a SQL connector.