Manual: Advanced installation experience

The `cluster_config.json` file defines the parameters, settings, and preferences applied to the UiPath products deployed via Automation Suite. You must update this file to change defaults and use any advanced configuration for your cluster.

To edit `cluster_config.json`, you can use either:

- a Linux text editor, such as vi or GNU nano, directly on the Linux machine via SSH (e.g., command: `vi cluster_config.json`);
- your favorite text editor, copying and pasting the file onto the machine afterwards.

The `cluster_config.json` file allows you to configure the UiPath products you want to deploy. Be aware that products may have dependencies. For details, see Cross-product dependencies.

To enable or disable a product in the `cluster_config.json` file, use `true` or `false` for its `enabled` flag.
{
"fqdn": "PLACEHOLDER",
"cluster_fqdn": "PLACEHOLDER",
"fixed_rke_address": "PLACEHOLDER",
"admin_username": "PLACEHOLDER",
"admin_password": "PLACEHOLDER",
"rke_token": "PLACEHOLDER ",
"zone_resilience": false,
"registries": {
"docker": {
"url": "registry.uipath.com"
},
"helm": {
"url": "registry.uipath.com"
}
},
"sql_connection_string_template": "PLACEHOLDER",
"sql_connection_string_template_jdbc": "PLACEHOLDER",
"sql_connection_string_template_odbc": "PLACEHOLDER",
"sql_connection_string_template_sqlalchemy_pyodbc": "PLACEHOLDER",
"orchestrator": {
"testautomation": {
"enabled": true
},
"updateserver": {
"enabled": true
},
"enabled": true
},
"infra": {
"docker_registry": {
"username": " PLACEHOLDER ",
"password": " PLACEHOLDER "
},
"pod_log_path": ""
},
"platform": {
"enabled": true
},
"automation_hub": {
"enabled": true
},
"automation_ops": {
"enabled": true
},
"action_center": {
"enabled": true
},
"aicenter": {
"enabled": true
},
"documentunderstanding": {
"enabled": true,
"datamanager": {}
},
"task_mining": {
"enabled": true
},
"apps": {
"enabled": true
},
"test_manager": {
"enabled": true
},
"insights": {
"enabled": true
},
"dataservice": {
"enabled": true
},
"asrobots": {
"enabled": true,
"packagecaching": true,
"packagecachefolder": "/uipath_asrobots_package_cache"
},
"processmining": {
"enabled": true
},
"external_object_storage": {
"enabled": false
},
"identity_certificate": {},
"profile": "ha",
"telemetry_optout": false,
"alternative_fqdn": "",
"server_certificate": {
"ca_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/rootCA.crt",
"tls_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.crt",
"tls_key_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.key"
},
"alternative_certificate": {
"ca_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-rootCA.crt",
"tls_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-server.crt",
"tls_key_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-server.key"
}
}
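After editing, it is worth validating that the file is still syntactically valid JSON before running the installer. A minimal check, assuming Python 3 is available on the machine:

```python
import json

# json.load raises an error on syntax problems such as
# trailing commas or unbalanced braces.
with open("cluster_config.json") as f:
    cfg = json.load(f)

print("profile:", cfg.get("profile"))
print("fqdn:", cfg.get("fqdn"))
```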
| Mandatory parameter | Description |
|---|---|
| `fqdn` | The load balancer (multi-node HA-ready production mode) or machine (single-node evaluation mode) domain name. |
| `fixed_rke_address` | Fixed address used to load-balance node registration and kube API requests. This should be `fqdn` if the load balancer is configured as recommended; otherwise, the FQDN of the first server node. Refer to Configuring the load balancer. Can be the IP/FQDN of the first rke2 server in your setup. |
| `multinode` | Set to `true` when choosing a multi-node HA-ready production profile. The value of this flag is set automatically by the interactive installer. It is used for internal purposes only and should not be modified manually. |
| `admin_username` | The username that you would like to set as admin (such as `admin`) for the host organization. |
| `admin_password` | The host admin password to be set. |
| `rke_token` | Use a newly generated GUID here. This is a pre-shared, cluster-specific secret. It is needed for all the nodes joining the cluster. |
| `profile` | Sets the profile of the installation. The available profiles are `default` (single-node evaluation) and `ha` (multi-node HA-ready production). The value of this flag is set automatically by the interactive installer. It is used for internal purposes only and should not be modified manually. |
| `infra.docker_registry.username` | The username that you would like to set for the Docker registry installation. |
| `infra.docker_registry.password` | The password that you would like to set for the Docker registry installation. |
| Optional parameter | Description |
|---|---|
| `telemetry_optout` | `true` or `false`; used to opt out of sending telemetry back to UiPath. It is set to `false` by default. If you wish to opt out, set it to `true`. |
| `infra.pod_log_path` | Enables you to change the default `/var/log/pods` path of the pod logs to a custom path of your choice. Note: updating the log path discards the logs of the existing containers from `/var/log/pods`. |
If no certificate is provided at the time of installation, the installer creates self-issued certificates and configures them in the cluster. The validity of the self-signed certificates is 90 days.

For details on how to obtain a certificate, see the following:

Run `pwd` to get the path of the directory where the files are placed, and append the certificate file name to the paths in `cluster_config.json`.

In multi-node HA-ready production installations, a certificate is required only on the first node.
| Parameter | Description |
|---|---|
| `server_certificate.ca_cert_file` | Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. A CA bundle must contain only the chain certificates used to sign the TLS certificate. The chain limit is nine certificates. If you use a self-signed certificate, you must specify the path to `rootCA.crt`, which you previously created. Leave blank if you want the installer to generate it. |
| `server_certificate.tls_cert_file` | Absolute path to the TLS certificate (`server.crt` is the self-signed certificate). Leave blank if you want the installer to generate it. |
| `server_certificate.tls_key_file` | Absolute path to the certificate key (`server.key` is the self-signed certificate). Leave blank if you want the installer to generate it. |
| `identity_certificate.token_signing_cert_file` | Absolute path to the identity token signing certificate used to sign tokens (`identity.pfx` is the self-signed certificate). Leave blank if you want the installer to generate an identity certificate using the server certificate. |
| `identity_certificate.token_signing_cert_pass` | Plain-text password set when exporting the identity token signing certificate. |
| `additional_ca_certs` | Absolute path to the file containing the additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file must be in valid PEM format. For example, you need to provide the file containing the SQL Server CA certificate if that certificate is not issued by a public certificate authority. |
The interactive installer automatically creates databases using the following workflow:

- The interactive installer script checks the value of the `sql.create_db` parameter in the `cluster_config.json` file.
  - If the `sql.create_db` parameter is set to `true`, the installer automatically generates all the databases on your behalf. In this case, the installer uses the default database names and default templates, and ignores any custom database names you provided. For details, see Automatically create the necessary databases.
  - If `sql.create_db` is set to `false`, you must bring your own databases. In this case, you must manually set up your databases. Note that you can use custom database names, provided that you follow the provided naming conventions. This step is critical because we use the database name in conjunction with the connection template to form the database connection string. If you do not follow the recommended naming convention, you must provide the SQL connection strings yourself. For details, see Bring your own databases.
- The interactive installer generates the connection strings as follows (see the sketch after this list):
  - If you did not define a connection string for your service, the installer uses the connection template to generate all database connection strings.
  - If you defined a connection string for your service, the installer uses the provided connection string for your database.
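The template-based generation step can be pictured with a short sketch. This is an illustration only, not the installer's actual code; the template and database names are taken from the examples on this page:

```python
# Illustration only: the template's DB_NAME_PLACEHOLDER token is
# replaced with each product's database name.
template = (
    "Server=tcp:sqlserver.example.com,1433;Initial Catalog=DB_NAME_PLACEHOLDER;"
    "User Id=testadmin;Password=***;Encrypt=True;TrustServerCertificate=False;"
)

default_db_names = {
    "orchestrator": "AutomationSuite_Orchestrator",
    "platform": "AutomationSuite_Platform",
}

connection_strings = {
    product: template.replace("DB_NAME_PLACEHOLDER", db_name)
    for product, db_name in default_db_names.items()
}
print(connection_strings["orchestrator"])
```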
If you want the installer to create the databases, fill in the following fields of the cluster_config.json
file:
| Parameter | Description |
|---|---|
| `sql.create_db` | Set to `true` to allow the installer to create the databases. Note that the installer uses the default database names and default templates, and ignores any custom database names you provided. |
| `sql.server_url` | FQDN of the SQL Server where you want the installer to configure the databases. |
| `sql.port` | Port number on which a database instance should be hosted on the SQL Server. |
| `sql.username` | Username / user ID to connect to the SQL Server. |
| `sql.password` | Password of the username provided earlier to connect to the SQL Server. |

Make sure the SQL account has the `dbcreator` role. This grants it permission to create the database in SQL Server. Otherwise, the installation fails.
ODBC connection does not support usernames that contain special characters. For database usernames for AI Center, Document Understanding, and Apps, use only uppercase and lowercase letters.
If you choose to bring your own databases for a new Automation Suite installation, we strongly recommend setting up new databases rather than using existing ones. This precaution is necessary to prevent any conflicts with the operation of Automation Suite that might occur due to leftover metadata from old databases.
If you bring your own database, you must provide the SQL connection strings for every database. Automation Suite supports the following SQL connection string formats:
| Parameter | Description | Products |
|---|---|---|
| `sql_connection_string_template` | Full ADO.NET connection string where the catalog name is set to `DB_NAME_PLACEHOLDER`. The installer replaces this placeholder with the default database names for the installed suite services. | Platform, Orchestrator, Automation Suite Robots, Test Manager, Automation Hub, Automation Ops, Insights, Task Mining, Data Service, Process Mining, Document Understanding |
| `sql_connection_string_template_jdbc` | Full JDBC connection string where the database name is set to `DB_NAME_PLACEHOLDER`. The installer replaces this placeholder with the default database names for the installed suite services. | AI Center 1 |
| `sql_connection_string_template_odbc` | Full ODBC connection string where the database name is set to `DB_NAME_PLACEHOLDER`. The installer replaces this placeholder with the default database names for the installed suite services. | Document Understanding, Apps |
| `sql_connection_string_template_sqlalchemy_pyodbc` | Full SQLAlchemy PYODBC connection string where the database name is set to `DB_NAME_PLACEHOLDER`. The installer replaces this placeholder with the default database names for the installed suite services. | Process Mining |
1 If you are installing AI Center on FIPS 140-2 enabled hosts, append the following to `sql_connection_string_template_jdbc`: `encrypt=true;trustServerCertificate=false;fips=true;`.
Make sure the SQL account has the `db_securityadmin` and `db_owner` roles for all Automation Suite databases. If security restrictions do not allow the use of `db_owner`, the SQL account must have the following roles and permissions on all databases:

- `db_securityadmin`
- `db_ddladmin`
- `db_datawriter`
- `db_datareader`
- `EXECUTE` permission on the `dbo` schema

The `db_securityadmin` and `db_ddladmin` roles are needed only during installation or when the databases are reprovisioned, so you may revoke these permissions afterwards.
If you manually set the connection strings in the configuration file, you can escape SQL, JDBC, ODBC, or PYODBC passwords as follows:

- For SQL: add `'` at the beginning and end of the password, and double any other `'`.
- For JDBC/ODBC: add `{` at the beginning of the password and `}` at the end, and double any other `}`.
- For PYODBC: `username` and `password` should be URL-encoded to account for special characters (see the sketch after this list). Document Understanding database passwords cannot start with `{`.
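For the PYODBC case, the URL encoding can be done with the Python standard library. A small sketch, using hypothetical credentials with special characters:

```python
from urllib.parse import quote_plus

# Hypothetical credentials; note the special characters.
username = "testadmin@myhost"
password = "p@ss;w'rd"

# URL-encode both values before embedding them in the PYODBC string.
encoded_user = quote_plus(username)      # "testadmin%40myhost"
encoded_password = quote_plus(password)

uri = (
    f"mssql+pyodbc://{encoded_user}:{encoded_password}"
    "@sqlserver.example.com:1433/AutomationSuite_Airflow"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
print(uri)
```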
The `AutomationSuite_ProcessMining_Airflow` database for the Process Mining product must have `READ_COMMITTED_SNAPSHOT` enabled.
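If you need to enable the setting yourself, a sketch using pyodbc follows; the server name and credentials are placeholders:

```python
import pyodbc

# Placeholder server and credentials; adjust to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com,1433;DATABASE=master;"
    "UID=testadmin;PWD=***;Encrypt=YES;TrustServerCertificate=NO;",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
conn.execute(
    "ALTER DATABASE [AutomationSuite_ProcessMining_Airflow] "
    "SET READ_COMMITTED_SNAPSHOT ON"
)
```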
If `TrustServerCertificate` is set to `False`, you must provide an additional CA certificate for the SQL Server. This is required if the SQL Server certificate is self-signed or signed by an internal CA. If you do not provide the SQL Server certificate in this scenario, the prerequisite check fails. See Certificate configuration on this page for more details.
sql_connection_string_template example
Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net:1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;
sql_connection_string_template_jdbc example
jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net:1433;database=DB_NAME_PLACEHOLDER;user=testadmin;password=***;encrypt=true;trustServerCertificate=false;Connection Timeout=30;hostNameInCertificate=sfdev1804627-c83f074b-sql.database.windows.net
sql_connection_string_template_odbc example
SERVER=sfdev1804627-c83f074b-sql.database.windows.net,1433;DATABASE=DB_NAME_PLACEHOLDER;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=***;MultipleActiveResultSets=False;Encrypt=YES;TrustServerCertificate=NO;Connection Timeout=30;
sql_connection_string_template_sqlalchemy_pyodbc example
mssql+pyodbc://testuser%40sfdev3082457-sql.database.windows.net:_-%29X07_%5E3-%28%3B%25e-T@sfdev3082457-sql.database.windows.net:1433/DB_NAME_PLACEHOLDER?driver=ODBC+Driver+17+for+SQL+Server
sql_connection_string_template and sql_connection_string_template_sqlalchemy_pyodbc example (Process Mining)
"sql_connection_string_template": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
"sql_connection_string_template_sqlalchemy_pyodbc": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/DB_NAME_PLACEHOLDER?driver=ODBC+Driver+17+for+SQL+Server"
"sql_connection_string_template": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
"sql_connection_string_template_sqlalchemy_pyodbc": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/DB_NAME_PLACEHOLDER?driver=ODBC+Driver+17+for+SQL+Server"
Default and optional DB names for Automation Suite services
{
"orchestrator": "AutomationSuite_Orchestrator",
"orchestrator_ta": "AutomationSuite_Orchestrator",
"asrobots": "AutomationSuite_Orchestrator",
"orchestrator_upd": "AutomationSuite_Platform",
"platform": "AutomationSuite_Platform",
"test_manager": "AutomationSuite_Test_Manager",
"automation_ops": "AutomationSuite_Platform",
"automation_hub": "AutomationSuite_Automation_Hub",
"insights": "AutomationSuite_Insights",
"task_mining": "AutomationSuite_Task_Mining",
"dataservice": "AutomationSuite_DataService",
"aicenter": "AutomationSuite_AICenter",
"documentunderstanding": "AutomationSuite_DU_Datamanager",
"processmining_airflow": "AutomationSuite_Airflow",
"processmining_metadata": "AutomationSuite_ProcessMining_Metadata",
"processmining_warehouse": "AutomationSuite_ProcessMining_Warehouse",
"apps": "AutomationSuite_Apps",
}
To override the default connection string for a specific service, set `sql_connection_str` for that service. You still have to manually create these databases before running the installer.
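If you bring your own databases, one way to create them ahead of the installer run is a short pyodbc loop over the default names from the list above; the server and credentials are placeholders:

```python
import pyodbc

DEFAULT_DB_NAMES = [
    "AutomationSuite_Orchestrator",
    "AutomationSuite_Platform",
    "AutomationSuite_Test_Manager",
    # ...remaining names from the default list above
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlserver.example.com,1433;"
    "DATABASE=master;UID=testadmin;PWD=***;Encrypt=YES;TrustServerCertificate=NO;",
    autocommit=True,  # CREATE DATABASE must run outside a transaction
)
for name in DEFAULT_DB_NAMES:
    conn.execute(f"CREATE DATABASE [{name}]")
```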
Overriding the default connection string for Orchestrator and the platform
{
"orchestrator": {
"sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomOrchDB;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
},
"platform": {
"sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomIDDB;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
}
}
To use a custom database for other products, set `sql_connection_str` in the corresponding product blocks. The connection string should have a format supported by the respective product.
Example for setting database connection string for AI Center
| Parameter | Description |
|---|---|
| `aicenter.sql_connection_str` | AI Center JDBC connection string (refer below for the JDBC format). |
"aicenter": {
"enabled": true,
"sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}
"aicenter": {
"enabled": true,
"sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}
Sample Document Understanding connection string
"documentUnderstanding": {
"datamanager": {
"sql_connection_str": "SERVER=sql-server.database.windows.net;DATABASE=datamanager;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=myPassword"
},
"sql_connection_str": "Server=tcp:database.example.com,1433;Initial Catalog=db;Persist Security Info=False;User Id=testadmin@example.com;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;",
"cjkOcr":
{
"enabled": true
}
}
"documentUnderstanding": {
"datamanager": {
"sql_connection_str": "SERVER=sql-server.database.windows.net;DATABASE=datamanager;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=myPassword"
},
"sql_connection_str": "Server=tcp:database.example.com,1433;Initial Catalog=db;Persist Security Info=False;User Id=testadmin@example.com;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;",
"cjkOcr":
{
"enabled": true
}
}
Handwriting is always enabled for online installations. The default value of `max_cpu_per_pod` is `2`, but you can adjust it according to your needs.
Sample Process Mining connection string
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
},
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='***';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
},
Note that the name of the template PYODBC connection string, `sql_connection_string_template_sqlalchemy_pyodbc`, differs from the name of the PYODBC connection string, `sqlalchemy_pyodbc_sql_connection_str`, used when you bring your own database. Likewise, the template SQL connection string is named `sql_connection_string_template`, whereas the one used when you bring your own database is `sql_connection_str`.

If you set the `sql_connection_str` and `sqlalchemy_pyodbc_sql_connection_str` connection strings in the `processmining` section of the `cluster_config.json` file, the template connection strings `sql_connection_string_template` and `sql_connection_string_template_sqlalchemy_pyodbc` are ignored.
`MultiSubnetFailover=True` is not supported. Make sure to remove `MultiSubnetFailover=True` from all Process Mining connection strings.
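A quick way to strip the parameter from an existing connection string; the input string is a placeholder:

```python
conn_str = (
    "Server=tcp:sqlserver.example.com,1433;Initial Catalog=DB_NAME_PLACEHOLDER;"
    "MultiSubnetFailover=True;Encrypt=True;TrustServerCertificate=False;"
)

# Drop the unsupported MultiSubnetFailover parameter, keep everything else.
cleaned = ";".join(
    part for part in conn_str.split(";")
    if part and not part.lower().startswith("multisubnetfailover")
) + ";"
print(cleaned)
```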
Depending on the `app_security_mode` setting, either a new SQL user is created for every Process Mining app by the system (`app_security_mode="system_managed"`), or a single SQL user account is created and used for all process apps (`app_security_mode="single_account"`). Note that `app_security_mode="system_managed"` is the default setting, and that it requires advanced permissions for the database user. See Configuring process app security.
Sample Process Mining connection string
- Scenario: setup with Kerberos authentication.
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes"
},
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes"
},
Sample Process Mining connection string
- Scenario: the metadata database and the data warehouse use separate SQL Servers (non-Kerberos authentication).
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:uipath-integration1.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://userid:password@uipath-integration1.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:uipath-integration1.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://userid:password@uipath-integration2.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
"sql_connection_str": "Server=tcp:uipath-integration2.database.windows.net,1433;Initial Catalog=AutomationSuite_Airflow;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
},
"processmining": {
"enabled": true,
"warehouse": {
"sql_connection_str": "Server=tcp:uipath-integration1.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://userid:password@uipath-integration1.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:uipath-integration1.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://userid:password@uipath-integration2.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
"sql_connection_str": "Server=tcp:uipath-integration2.database.windows.net,1433;Initial Catalog=AutomationSuite_Airflow;Persist Security Info=False;User Id=userid;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
},
Sample Process Mining connection string
- Scenario: using a custom `app_security_mode`.
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"warehouse": {
"sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=YES"
},
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"warehouse": {
"sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
"master_sql_connection_str": "Server=tcp:assql2019.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=YES"
},
Automation Suite allows you to bring your own external storage provider. You can choose from the following storage providers:
- Azure
- AWS
- S3-compatible
You can configure the external object storage in one of the following ways:
- during installation, using the interactive installer;
- post-installation, using the
cluster_config.json
file.
- For Automation Suite to function properly when using pre-signed URLs, you must make sure that your external objectstore is accessible from the Automation Suite cluster, browsers, and all your machines, including workstations and robot machines.
- The Server Side Encryption with Key Management Service (SSE-KMS) can only be enabled on the Automation Suite buckets deployed in any region created after January 30, 2014. SSE-KMS functionality requires pure SignV4 APIs. Regions created before January 30, 2014 do not use pure SignV4 APIs due to backward compatibility with SignV2. Therefore, SSE-KMS is only functional in regions that use SignV4 for communication. To find out when the various regions were provisioned, refer to the AWS documentation.
The following table describes the `cluster_config.json` parameters you can use to configure each provider of external object storage:
| Parameter | Description |
|---|---|
| `external_object_storage.enabled` | Specify whether you would like to bring your own object store. Possible values: `true` and `false`. |
| `external_object_storage.create_bucket` | Specify whether you would like to provision the bucket. Possible values: `true` and `false`. |
| `external_object_storage.storage_type` | Specify the storage provider you would like to configure. The value is case-sensitive. Possible values: `azure` and `s3`. Note: Many S3 objectstores require CORS to be set for all the traffic from the Automation Suite cluster. You must configure the CORS policy at the objectstore level to allow the FQDN of the cluster. |
| `external_object_storage.fqdn` | Specify the FQDN of the S3 server. Required in the case of the AWS instance and non-instance profile. |
| `external_object_storage.port` | Specify the S3 port. Required in the case of the AWS instance and non-instance profile. |
| `external_object_storage.region` | Specify the AWS region where buckets are hosted. Required in the case of the AWS instance and non-instance profile. |
| `external_object_storage.access_key` | Specify the access key for the S3 account. Only required in the case of the AWS non-instance profile. |
| `external_object_storage.secret_key` | Specify the secret key for the S3 account. Only required in the case of the AWS non-instance profile. |
| `external_object_storage.use_instance_profile` | Specify whether you want to use an instance profile. An AWS Identity and Access Management (IAM) instance profile grants secure access to AWS resources for applications or services running on Amazon Elastic Compute Cloud (EC2) instances. If you opt for AWS S3, an instance profile allows an EC2 instance to interact with S3 buckets without the need for explicit AWS credentials (such as access keys) to be stored on the instance. |
| `external_object_storage.use_managed_identity` | Use managed identity with your Azure storage account. Possible values: `true` and `false`. |
| `external_object_storage.bucket_name_prefix` 1 | Indicate the prefix for the bucket names. Optional in the case of the AWS non-instance profile. |
| `external_object_storage.bucket_name_suffix` 2 | Indicate the suffix for the bucket names. Optional in the case of the AWS non-instance profile. |
| `external_object_storage.account_key` | Specify the Azure account key. Only required when using non-managed identity. |
| `external_object_storage.account_name` | Specify the Azure account name. |
| `external_object_storage.azure_fqdn_suffix` | Specify the Azure FQDN suffix. Optional parameter. |
| `external_object_storage.client_id` | Specify your Azure client ID. Only required when using managed identity. |
1, 2 When configuring `bucket_name_prefix` and `bucket_name_suffix`, the suffix and prefix must have a combined length of no more than 25 characters, and you must not end the prefix or start the suffix with a hyphen (`-`), as we already add the character for you automatically.
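A small sketch that checks these constraints before you commit the values to `cluster_config.json` (the helper name is hypothetical):

```python
def validate_bucket_affixes(prefix: str, suffix: str) -> None:
    """Check the documented constraints on bucket_name_prefix/suffix."""
    if len(prefix) + len(suffix) > 25:
        raise ValueError("combined prefix + suffix length exceeds 25 characters")
    if prefix.endswith("-") or suffix.startswith("-"):
        raise ValueError("the hyphen between prefix and suffix is added automatically")

validate_bucket_affixes("uipath", "prod")  # passes silently
```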
You can use the parameters described in the General configuration section to update the general Automation Suite configuration. This means that all installed products would share the same configuration. If you want to configure one or more products differently, you can override the general configuration. You just need to specify the product(s) you want to set up external object storage for differently, and use the same parameters to define your configuration. Note that all the other installed products would continue to inherit the general configuration.
The following example shows how you can override the general configuration for Orchestrator:
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure,aws>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"use_managed_identity": false, // <true/false>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "", // <needed only when using non managed identity>
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
"client_id": "" // <optional field in case of managed identity>
},
"orchestrator": {
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"use_managed_identity": false, // <true/false>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "", // <needed only when using non managed identity>
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
"client_id": "" // <optional field in case of managed identity>
}
}
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure,aws>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"use_managed_identity": false, // <true/false>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "", // <needed only when using non managed identity>
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
"client_id": "" // <optional field in case of managed identity>
},
"orchestrator": {
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"use_managed_identity": false, // <true/false>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "", // <needed only when using non managed identity>
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
"client_id": "" // <optional field in case of managed identity>
}
}
To rotate the blob storage credentials for Process Mining in Automation Suite, the stored secrets must be updated with the new credentials. See Rotating blob storage credentials.
- Some Automation Suite products are not supported in the Disaster Recovery - Active/Passive setup. You can install these products only while installing the primary cluster.
- Once you have provided all the parameters related to your setup, the installer prompts you to continue or quit. You must quit the installer to modify the generated `cluster_config.json` with the advanced multi-site parameters.
You must install the two Automation Suite clusters separately.
The following table describes the `cluster_config.json` parameters you must use to install both the primary and secondary clusters:
| Parameter | Description |
|---|---|
| `fabric.redis.ha` | Indicates that HAA is enabled. This is required to allow for Redis in HA mode. It is mandatory and must be set to `true`. |
| `fabric.redis.license` | Indicates the Redis (HAA) license. It is a mandatory value and must be a valid base64-encoded string. |
| `cluster_fqdn` | Indicates the FQDN of the load balancer of the cluster. It is a mandatory value. |
| `multisite.enabled` | Indicates that Automation Suite must be configured to work multi-site. It must be set to `true`. |
| `multisite.primary` | Indicates that this cluster is a primary cluster and must be set to `true`. It defaults to `false` to denote the secondary cluster. |
| `multisite.other_kube_config` | Indicates the base64-encoded kubeconfig file of the other cluster. While installing the primary Automation Suite cluster, this value is unavailable and can be left as is. However, you must provide the value when rebuilding the primary Automation Suite cluster later during recovery. |
| `fixed_rke_address` | Indicates the address where all the node-joining requests must be made. This is usually the same as the `cluster_fqdn` value (the FQDN of the load balancer). |
{
"fabric": {
"redis": {
"ha": true,
"license": "xyz" //base64 encoded redis license
}
},
"cluster_fqdn": "automationsuite-primary.mycompany.com",
"multisite": {
"enabled": true,
"primary": true,
"other_kube_config": xxx, //another cluster kubeconfig
},
"fixed_rke_address": "automationsuite-primary.mycompany.com"
}
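The `multisite.other_kube_config` value is a base64-encoded kubeconfig. One way to produce it, with the file path as a placeholder:

```python
import base64

# Path to the other cluster's kubeconfig (placeholder).
with open("/path/to/other-cluster-kubeconfig", "rb") as f:
    print(base64.b64encode(f.read()).decode())
```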
To configure external registries, update the following parameters in the `cluster_config.json` file:
| Parameter | Description |
|---|---|
| `registries` | Specify the external Docker registries you want to use. |
| `registries.docker.url` | The registry URL. |
| `registries.docker.username` | The registry username used to log in. |
| `registries.docker.password` | The registry password used to log in. |
| `registries.docker.pull_secret_value` | The registry pull secret. |
There are multiple ways to generate the `pull_secret_value`, including the one using Docker. For details, see How to generate the encoded pull_secret_value for external registries.
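As one illustration, a pull secret in the standard Kubernetes `.dockerconfigjson` format can be produced as follows; whether this exact layout matches what your setup expects should be confirmed against the page linked above, and the credentials are placeholders:

```python
import base64
import json

# Placeholder registry credentials.
registry, username, password = "registry.domain.io", "username", "password"

auth = base64.b64encode(f"{username}:{password}".encode()).decode()
docker_config = {
    "auths": {registry: {"username": username,
                         "password": password,
                         "auth": auth}}
}
# Base64-encode the .dockerconfigjson payload.
print(base64.b64encode(json.dumps(docker_config).encode()).decode())
```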
Configuration sample:
{
"registries": {
"docker": {
"url": "registry.domain.io",
"username": "username",
"password": "password",
"pull_secret_value": "pull-secret-value"
},
"helm": {
"url": "registry.domain.io",
"username": "username",
"password": "password"
}
}
}
When upgrading, you must set the `orchestrator.block_classic_executions` flag to `true` in the `cluster_config.json` file. Using the flag shows that you agree with blocking classic folders executions. Not using the flag causes the upgrade operation to fail. This parameter is not required in new installations.
To save robot logs to an external Elasticsearch server, configure the `orchestrator.orchestrator_robot_logs_elastic` section. If not provided, robot logs are saved to Orchestrator's database.

The following table describes the `orchestrator.orchestrator_robot_logs_elastic` parameters:
| Parameter | Description |
|---|---|
| `orchestrator_robot_logs_elastic` | Elasticsearch configuration. |
| `elastic_uri` | The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required. |
| `elastic_auth_username` | The Elasticsearch username, used for authentication. |
| `elastic_auth_password` | The Elasticsearch password, used for authentication. |
Example
"orchestrator": {
"enabled": true,
"block_classic_executions": true,
"orchestrator_robot_logs_elastic": {
"elastic_uri": "https://elastic.example.com:9200",
"elastic_auth_username": "elastic-user",
"elastic_auth_password": "elastic-password"
}
}
"orchestrator": {
"enabled": true,
"block_classic_executions": true,
"orchestrator_robot_logs_elastic": {
"elastic_uri": "https://elastic.example.com:9200",
"elastic_auth_username": "elastic-user",
"elastic_auth_password": "elastic-password"
}
}
If you enable Insights, you can include an SMTP server configuration that is used to send scheduled emails and alert emails. If not provided, scheduled emails and alert emails do not function.

The following table describes the `insights.smtp_configuration` fields:
| Parameter | Description |
|---|---|
| `tls_version` | Valid values are `TLSv1_2`, `TLSv1_1`, `SSLv23`. Omit the key altogether if not using TLS. |
| `from_email` | Address that alert/scheduled emails are sent from. |
| `host` | Hostname of the SMTP server. |
| `port` | Port of the SMTP server. |
| `username` | Username for SMTP server authentication. |
| `password` | Password for SMTP server authentication. |
| `enable_realtime_monitoring` | Flag to enable Insights real-time monitoring. Valid values are `true` and `false`. Default value is `false`. |
Example
"insights": {
"enabled": true,
"enable_realtime_monitoring": true,
"smtp_configuration": {
"tls_version": "TLSv1_2",
"from_email": "test@test.com",
"host": "smtp.sendgrid.com",
"port": 587,
"username": "login",
"password": "password123"
}
}
"insights": {
"enabled": true,
"enable_realtime_monitoring": true,
"smtp_configuration": {
"tls_version": "TLSv1_2",
"from_email": "test@test.com",
"host": "smtp.sendgrid.com",
"port": 587,
"username": "login",
"password": "password123"
}
}
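Before committing the values to `cluster_config.json`, you can sanity-check the SMTP settings with a short script; the values mirror the sample above and are placeholders:

```python
import smtplib

# Placeholder values mirroring the smtp_configuration sample above.
with smtplib.SMTP("smtp.sendgrid.com", 587, timeout=10) as smtp:
    smtp.starttls()                     # matches tls_version TLSv1_2
    smtp.login("login", "password123")  # username/password fields
    print("SMTP credentials accepted")
```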
The following table describes the fields in the `processmining` section:
| Parameter | Description |
|---|---|
| `processmining.sql_connection_str` | DotNet-formatted connection string with the database set as a placeholder: `Initial Catalog=DB_NAME_PLACEHOLDER`. |
| `processmining.sqlalchemy_pyodbc_sql_connection_str` | SQLAlchemy PYODBC-formatted connection string for a custom Airflow metadata database location: `sqlServer:1433/DB_NAME_PLACEHOLDER`. Note: if there is an `@` in the user name, it must be URL-encoded to `%40`; for example, user `testadmin@myhost` becomes `testadmin%40myhost`. See also the SQL Server setup with Kerberos authentication example. |
| `processmining.warehouse.sql_connection_str` | DotNet-formatted SQL connection string to the Process Mining data warehouse SQL Server, with a placeholder for the database name: `Initial Catalog=DB_NAME_PLACEHOLDER`. |
| `processmining.warehouse.sqlalchemy_pyodbc_sql_connection_str` | SQLAlchemy PYODBC-formatted SQL connection string to the Process Mining data warehouse SQL Server, with a placeholder for the database name: `sqlServer:1433/DB_NAME_PLACEHOLDER`. |
| `processmining.warehouse.master_sql_connection_str` | If the installer is creating databases through the `sql.create_db: true` setting, a DotNet-formatted master SQL connection string must be provided for the Process Mining data warehouse SQL Server. The database in the connection string must be set as `master`. |
Sample Process Mining connection string
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"sql_connection_str": "Server=tcp:shared_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://username:password@shared_sqlserver_fqdn:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"warehouse": {
"sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://useername:password@dedicated_sqlserver_fqdn:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=master;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"blob_storage_account_use_presigned_uri": true
},
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"sql_connection_str": "Server=tcp:shared_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://username:password@shared_sqlserver_fqdn:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"warehouse": {
"sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://useername:password@dedicated_sqlserver_fqdn:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=master;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"blob_storage_account_use_presigned_uri": true
},
When setting up Microsoft SQL Server, make sure that the timezone of the SQL Server machine where the Airflow database and a dedicated Process Mining database are installed is set to UTC.
Note that the name of the template PYODBC connection string, `sql_connection_string_template_sqlalchemy_pyodbc`, differs from the name of the PYODBC connection string, `sqlalchemy_pyodbc_sql_connection_str`, used when you bring your own database. Likewise, the template SQL connection string is named `sql_connection_string_template`, whereas the one used when you bring your own database is `sql_connection_str`.

If you set the `sql_connection_str` and `sqlalchemy_pyodbc_sql_connection_str` connection strings in the `processmining` section of the `cluster_config.json` file, the template connection strings `sql_connection_string_template` and `sql_connection_string_template_sqlalchemy_pyodbc` are ignored.
Make sure to use the default SQL Server port 1433 in the following connection strings:
- warehouse.sql_connection_str
- warehouse.sqlalchemy_pyodbc_sql_connection_str
- warehouse.master_sql_connection_str
Non-standard SQL Server ports are not supported.
Automation Suite Robots can use package caching to optimize your process runs and allow them to run faster. NuGet packages are fetched from the filesystem instead of being downloaded from the Internet/network. Package caching requires a minimum of 10 GiB of additional disk space, allocated to a folder on the host machine filesystem of the dedicated nodes.
To enable package caching, set the following cluster_config.json parameters:
Parameter | Default value | Description |
---|---|---|
packagecaching | true | When set to true, robots use a local cache for package resolution. |
packagecachefolder | /uipath_asrobots_package_cache | The disk location on the serverless agent node where the packages are stored. |
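A minimal sketch of the corresponding cluster_config.json section, assuming the asrobots section name and the parameter names and defaults listed above:
"asrobots": {
  "packagecaching": true,
  "packagecachefolder": "/uipath_asrobots_package_cache"
}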
For AI Center to work properly, you must configure the aicenter.external_object_storage.port and aicenter.external_object_storage.fqdn parameters in the cluster_config.json file.
Note: You must configure these parameters in the aicenter section of the cluster_config.json file even if you have configured the external_object_storage section of the file.
The following sample shows a valid cluster_config.json configuration for AI Center:
"aicenter": {
  "external_object_storage": {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3",
  "region": "us-west-2",
  "use_instance_profile": true
}
...
"aicenter": {
"external_object_storage" {
"port": 443,
"fqdn": "s3.us-west-2.amazonaws.com"
}
},
"external_object_storage": {
"enabled": true,
"create_bucket": false,
"storage_type": "s3",
"region": "us-west-2",
"use_instance_profile": true
}
...
To provision enough resources for monitoring (see Using the monitoring stack), you should consider the number of vCPUs in the cluster and the amount of desired metric retention. See below for how to set the following monitoring resource configurations.
The following table describes the monitoring field details:
Parameter | Description |
---|---|
prometheus_retention | In days. The number of days that metrics are retained for visualization in Grafana and manual querying via the Prometheus console. Default value is 7. |
prometheus_storage_size | In GiB. Amount of storage space to reserve per Prometheus replica. A good rule of thumb is to set this value to 0.65 * vCPU cores * (prometheus_retention / 7). Example: if you set prometheus_retention to 14 days, and your cluster has 80 cores spread across 5 machines, this becomes 0.65 * 80 * (14 / 7) = 104. Default value is 45 and should not be set lower. If Prometheus starts to run out of storage space, an alert is triggered with specific remediation instructions. See here. |
prometheus_memory_limit | In MiB. Amount of memory to limit each Prometheus replica to. A good rule of thumb is to set this value to 100 * vCPU cores * (prometheus_retention / 7). Example: if you set prometheus_retention to 14 days, and your cluster has 80 cores spread across 5 machines, this becomes 100 * 80 * (14 / 7) = 16000. Default value is 3200 for the single-node evaluation mode and 6000 for the multi-node HA-ready production mode, and should not be set lower. If Prometheus starts to run out of memory, an alert is triggered with specific remediation instructions. See here. |
Example
"monitoring": {
"prometheus_retention": 14,
"prometheus_memory_limit": 16000,
"prometheus_storage_size": 104
}
"monitoring": {
"prometheus_retention": 14,
"prometheus_memory_limit": 16000,
"prometheus_storage_size": 104
}
Make sure you meet the proxy server requirements before configuring the proxy server during installation.
For details, see Step 2: Adding proxy configuration to each node.
You can configure the proxy settings in cluster_config.json during the advanced configuration step. Add the following to the configuration file using vim or your favorite editor:
"proxy": {
"enabled": true,
"http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"no_proxy": "alm.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<named server address>,<metadata server address>,<k8s address range>,<private_subnet_ip>,<sql server host>,<Comma separated list of ips that should not go through proxy server>"
}
"proxy": {
"enabled": true,
"http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"no_proxy": "alm.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<named server address>,<metadata server address>,<k8s address range>,<private_subnet_ip>,<sql server host>,<Comma separated list of ips that should not go through proxy server>"
}
Mandatory parameters | Description |
---|---|
enabled | Use true or false to enable or disable proxy settings. |
http_proxy | Used to route HTTP outbound requests from the cluster. This should be the proxy server FQDN and port. |
https_proxy | Used to route HTTPS outbound requests from the cluster. This should be the proxy server FQDN and port. |
no_proxy | Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format that you do not want to route via the proxy server. This should include the private subnet range, SQL Server host, named server address, and metadata server address: *.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443. Important: If you use AI Center with an external Orchestrator, you must add the external Orchestrator domain to the no_proxy list. |
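For illustration, a filled-in proxy section might look as follows; all addresses here are placeholder values, not defaults:
"proxy": {
  "enabled": true,
  "http_proxy": "http://proxy.example.com:8080",
  "https_proxy": "http://proxy.example.com:8080",
  "no_proxy": "alm.cluster.example.com,10.0.1.10:9345,10.0.1.10:6443,10.244.0.0/16,10.0.0.0/24,sql.example.com,169.254.169.254"
}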
To enable resilience to zonal failures in a multi-node cluster, take the following steps:
1. Make sure the nodes are spread evenly across three availability zones.
2. While generating cluster_config.json using the interactive installation wizard, exit at the advanced configuration step and add the following to the configuration file using vim or your favorite editor:
"zone_resilience": true
Mandatory parameters | Description |
---|---|
zone_resilience | Use true or false to enable or disable resilience to zonal failure. |
Passing the --zone and --region arguments is:
- recommended if you provision your machines on AWS, Azure, or GCP and metadata services are enabled, as the installer populates the zone and region details;
- mandatory if you provision your machines on AWS, Azure, or GCP and metadata services are disabled, or if you opt for a different cloud provider (an example invocation is sketched after this list).
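For illustration only, a sketch of passing these arguments alongside your usual installation command; the zone and region values are placeholders, and the elided flags depend on your installation:
./install-uipath.sh ... --zone us-east-1a --region us-east-1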
Kubernetes uses the name servers configured in /etc/resolv.conf to resolve DNS queries. Kubernetes does not work with local DNS resolvers (127.0.0.1 or 127.0.0.0/8), so if you have such name servers configured in the /etc/resolv.conf file, you need to pass a reference to a file with the correct name server entries, accessible from anywhere on the VM, in the .infra.custom_dns_resolver parameter in cluster_config.json.
For details on a known limitation, see the Kubernetes documentation.
Optional parameters | Description |
---|---|
.infra.custom_dns_resolver | Path to the file with correct name server entries that can be accessed from anywhere on the VM. These name server entries must not be from 127.0.0.0/8. |
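For illustration, assuming a hypothetical name server file at /opt/custom-resolv.conf, the corresponding cluster_config.json entry might look like this:
"infra": {
  "custom_dns_resolver": "/opt/custom-resolv.conf"
}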
You can increase the fault tolerance of the in-cluster storage by setting the fault_tolerance parameter in the cluster_config.json file. The parameter modifies the replication factor of in-cluster storage components such as Ceph and Longhorn.
Hardware requirements
To increase the fault tolerance beyond 1, make sure your environment meets the following requirements:
- Your cluster consists of a minimum of 2x+1 server nodes, where x is the required server node fault tolerance;
- Each server node has a raw device configured.
How to increase fault tolerance
To increase the fault tolerance beyond 1, take the following steps:
1. Set fault_tolerance to the required value in the cluster_config.json file (see the sketch after these steps). If you set it before starting the installation or upgrade operation, you do not need to take any additional steps.
2. Run the uipathctl.sh installer to modify the in-cluster Ceph objectstore replication factor. Wait until the operation completes successfully.
3. Run the install-uipath.sh installer to modify the Longhorn volumes replication factor. Wait until the operation completes successfully.
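For illustration, setting a fault tolerance of 2, which per the 2x+1 rule above requires at least five server nodes, is a single entry in cluster_config.json:
"fault_tolerance": 2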
- Cluster_config.json Sample
- General configuration
- Certificate configuration
- Database configuration
- Database creation workflow
- Automatically create the necessary databases
- Bring your own database
- External Objectstore configuration
- General configuration
- Product-specific configuration
- Rotating the blob storage credentials for Process Mining
- Disaster recovery: Active/Passive configuration
- External Docker registry configuration
- Orchestrator-specific configuration
- Insights-specific configuration
- Process Mining-specific configuration
- Automation Suite Robots-specific configuration
- AI Center-specific configuration
- Monitoring configuration
- Optional: Configuring the proxy server
- Optional: Enabling resilience to zonal failures in a multi-node HA-ready production cluster
- Optional: Passing custom resolv.conf
- Optional: Increasing fault tolerance
- Hardware requirements
- How to increase fault tolerance