The `cluster_config.json` file defines the parameters, settings, and preferences applied to the UiPath services deployed via Automation Suite. You need to update this file if you want to change defaults or use any advanced configuration for your cluster.

To edit `cluster_config.json`, you can use either:
- a Linux text editor, such as vi or GNU nano, directly on the Linux machine via SSH (for example, `vi cluster_config.json`);
- your favorite text editor, after which you copy and paste the file onto the machine.
The `cluster_config.json` file allows you to configure the UiPath services you want to deploy. There are two types of UiPath services:
- mandatory services: these are installed by default and do not have the `enabled` flag available on them;
- optional services: these are not required to complete the install. However, be aware that products may have dependencies. See Product dependencies for more details.

To enable or disable a service via the `cluster_config.json` file, set its `enabled` flag to `true` or `false`.
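For example, an optional service can be switched off with a small fragment like the following sketch (the `apps` key comes from the sample below; setting it to `false` skips the Apps installation):

```json
{
  "apps": {
    "enabled": false
  }
}
```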
cluster_config.json sample

{
"fqdn": "PLACEHOLDER",
"fixed_rke_address": "PLACEHOLDER",
"multinode": "false",
"admin_username": "PLACEHOLDER",
"admin_password": "PLACEHOLDER",
"profile": "ha",
"telemetry_optout": "true",
"rke_token": "PLACEHOLDER",
"server_certificate": {
"ca_cert_file": "/absolute/path/to/rootCA.crt",
"tls_cert_file": "/absolute/path/to/server.crt",
"tls_key_file": "/absolute/path/to/server.key"
},
"infra": {
"docker_registry": {
"username": "PLACEHOLDER",
"password": "PLACEHOLDER"
},
"custom_dns_resolver": "/path/to/custom-resolv.conf"
},
"identity_certificate": {
"token_signing_cert_file": "/absolute/path/to/identity.pfx",
"token_signing_cert_pass": ""
},
"sql": {
"server_url": "PLACEHOLDER",
"username": "PLACEHOLDER",
"password": "PLACEHOLDER",
"port": "PLACEHOLDER",
"create_db": "PLACEHOLDER"
},
"sql_connection_string_template": "PLACEHOLDER",
"sql_connection_string_template_jdbc": "PLACEHOLDER",
"sql_connection_string_template_odbc": "PLACEHOLDER",
"orchestrator": {
"testautomation": {
"enabled": true
},
"updateserver": {
"enabled": true
}
},
"aicenter": {
"enabled": true,
"sql_connection_str": "PLACEHOLDER"
},
"documentunderstanding": {
"enabled": true,
"datamanager": {
"sql_connection_str": "PLACEHOLDER"
},
"handwriting": {
"enabled": true,
"max_cpu_per_pod": 2
}
},
"insights": {
"enabled": true
},
"test_manager": {
"enabled": true
},
"automation_ops": {
"enabled": true
},
"automation_hub": {
"enabled": true
},
"apps": {
"enabled": true
},
"action_center": {
"enabled": true
},
"task_mining": {
"enabled": true
},
"dataservice": {
"enabled": true
}
}
General configuration
Mandatory parameters | Description
---|---
`fqdn` | The load balancer (multi-node HA-ready production mode) or machine (single-node evaluation mode) domain name.
`fixed_rke_address` | Fixed address used to load balance node registration and kube API requests. This should be `fqdn` if the load balancer is configured as recommended; otherwise, the FQDN of the first server node. Refer to Configuring the load balancer. Can be the IP/FQDN of the first rke2 server in your setup.
`multinode` | Set to `true` for a multi-node HA-ready production installation; otherwise, set to `false`.
`admin_username` | The username that you would like to set as admin (such as `admin`) for the host organization.
`admin_password` | The host tenant admin password to be set.
`rke_token` | Use a newly generated GUID here. This is a pre-shared, cluster-specific secret. It is needed for all the nodes joining the cluster.
`profile` | Sets the profile of the installation. The available profiles are `default` (single-node evaluation) and `ha` (multi-node HA-ready production).
`infra.docker_registry.username` | The username that you would like to set for the docker registry installation.
`infra.docker_registry.password` | The password that you would like to set for the docker registry installation.
Optional parameters | Description
---|---
`telemetry_optout` | `true` or `false` — used to opt out of sending telemetry back to UiPath. If you wish to opt out, set to `true`.
Setting certificates
Refer to the prerequisite documents to obtain a certificate:
- Single-node evaluation mode: Configuring the certificates
- Multi-node HA-ready production: Configuring the certificates
If no certificate is provided at the time of installation, the installer creates a self-issued certificate and configures it in the cluster. The validity of the certificate is 90 days.
Note:
Make sure to specify the absolute path for the certificate files. Run `pwd` to get the path of the directory where the files are placed and append the certificate file name in `cluster_config.json`.
In multi-node HA-ready production installations, a certificate is required only on the first node.
Parameter | Description
---|---
`server_certificate.ca_cert_file` | Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. If you are using a self-signed certificate, you need to specify the path to the self-signed `rootCA.crt`.
`server_certificate.tls_cert_file` | Absolute path to the TLS certificate (`server.crt`).
`server_certificate.tls_key_file` | Absolute path to the certificate key (`server.key`).
`identity_certificate.token_signing_cert_file` | Absolute path to the Identity Service certificate used to sign tokens (`identity.pfx`).
`identity_certificate.token_signing_cert_pass` | Plain text password set when the certificate was exported.
`additional_ca_certs` | Absolute path to the file containing additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file should be valid. For example, you need to provide the file containing the SQL server CA certificate if the certificate is not issued by a public certificate authority.
Database configuration
Automatically create the necessary databases
If you want the installer to create the databases, fill in the following fields:
Parameter | Description
---|---
`sql.create_db` | Set to `true` if you want the installer to create the databases.
`sql.server_url` | FQDN of the SQL server where you want the installer to configure the databases.
`sql.port` | Port number on which a database instance should be hosted on the SQL server.
`sql.username` | Username / user ID to connect to the SQL server.
`sql.password` | Password of the username provided earlier to connect to the SQL server.
Important:
Ensure the user has the `dbcreator` role. This grants them permission to create the database in SQL Server. Otherwise the installation fails.
ODBC connections do not support usernames that contain special characters. For the database usernames for AI Center and Document Understanding, use only uppercase and lowercase letters.
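Putting the fields above together, a `sql` block that asks the installer to create the databases might look like this sketch (the server name and credentials are placeholders, not real values):

```json
"sql": {
  "server_url": "sqlserver.example.com",
  "username": "installeradmin",
  "password": "***",
  "port": "1433",
  "create_db": "true"
}
```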
Bring your own database
If you bring your own database, you must provide the SQL connection strings for every database. Automation Suite supports the following SQL connection string formats:
Parameter | Description | Products
---|---|---
`sql_connection_string_template` | Full ADO.NET connection string where the catalog name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. | Platform, Orchestrator, Test Manager, Automation Hub, Automation Ops, Insights, Task Mining, Data Service
`sql_connection_string_template_jdbc` | Full JDBC connection string where the database name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. | AI Center
`sql_connection_string_template_odbc` | Full ODBC connection string where the database name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. | Document Understanding
Important!
If you manually set the connection strings in the configuration file, you can escape SQL, JDBC, or ODBC passwords as follows:
- for SQL: add `'` at the beginning and end of the password, and double any other `'`;
- for JDBC/ODBC: add `{` at the beginning of the password and `}` at the end, and double any other `}`.
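To illustrate these escaping rules, assume a hypothetical password `my'pa}ss`. In a SQL (ADO.NET) connection string it is wrapped in single quotes with the inner `'` doubled; in a JDBC or ODBC connection string it is wrapped in braces with the inner `}` doubled (the server and user values below are placeholders):

```json
{
  "sql_connection_string_template": "Server=tcp:sqlserver.example.com,1433;Initial Catalog=DB_NAME_PLACEHOLDER;User ID=testadmin;Password='my''pa}ss';",
  "sql_connection_string_template_jdbc": "jdbc:sqlserver://sqlserver.example.com:1433;database=DB_NAME_PLACEHOLDER;user=testadmin;password={my'pa}}ss};encrypt=true;"
}
```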
Note:
If you set `TrustServerCertificate=False`, you may have to provide an additional CA certificate for the SQL Server. This is required if the SQL Server certificate is self-signed or signed by an internal CA. See Setting certificates for more details.
sql_connection_string_template example
Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net:1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User [email protected];Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;
sql_connection_string_template_jdbc example
jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net:1433;database=DB_NAME_PLACEHOLDER;user=testadmin;password=***;encrypt=true;trustServerCertificate=false;Connection Timeout=30;hostNameInCertificate=sfdev1804627-c83f074b-sql.database.windows.net
sql_connection_string_template_odbc example
SERVER=sfdev1804627-c83f074b-sql.database.windows.net,1433;DATABASE=DB_NAME_PLACEHOLDER;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=***;MultipleActiveResultSets=False;Encrypt=YES;TrustServerCertificate=NO;Connection Timeout=30;
Default and optional DB names for Automation Suite services
{
"orchestrator": "AutomationSuite_Orchestrator",
"orchestrator_ta": "AutomationSuite_Orchestrator",
"orchestrator_upd": "AutomationSuite_Platform",
"platform": "AutomationSuite_Platform",
"test_manager": "AutomationSuite_Test_Manager",
"automation_ops": "AutomationSuite_Platform",
"automation_hub": "AutomationSuite_Automation_Hub",
"insights": "AutomationSuite_Insights",
"task_mining": "AutomationSuite_Task_Mining",
"dataservice": "AutomationSuite_DataService",
"aicenter": "AutomationSuite_AICenter",
"documentunderstanding": "AutomationSuite_DU_Datamanager"
}
Note:
If you want to override the connection string for any of the services above, set the `sql_connection_str` for that specific service.
You still have to manually create these databases before running the installer.
Overriding the default connection string for Orchestrator and the platform
{
"orchestrator": {
"sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomOrchDB;Persist Security Info=False;User [email protected];Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
},
"platform": {
"sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomIDDB;Persist Security Info=False;User [email protected];Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
}
}
To override the database connection strings for other products, set the `sql_connection_str` in the corresponding product blocks. The connection string should have a format supported by the respective product.
Example for setting database connection string for AI Center
Parameter | Description
---|---
`aicenter.sql_connection_str` | AI Center JDBC connection string (refer to the JDBC format below).
Note:
Make sure the JDBC connection string has the format in the sample below.
"aicenter": {
"enabled": true,
"sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;database=aicenter;[email protected];[email protected]_euHQZJ"
}
Sample DU connection string
"documentunderstanding": {
"enabled": true,
"datamanager": {
"sql_connection_str": "mssql+pyodbc://testadmin:[email protected]:1433/datamanager?driver=ODBC+Driver+17+for+SQL+Server"
}
}
Note: Providing the data manager SQL connection string is optional; set it only if you want to overwrite the default with your own.
Handwriting is always enabled for online installation.
The default value for `max_cpu_per_pod` is `2`, but you can adjust it according to your needs. For more information, check the Document Understanding configuration file.
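For instance, to allow handwriting pods to use more CPU than the default, you could raise the limit in the `documentunderstanding` block (the value `4` is only an illustrative choice):

```json
"documentunderstanding": {
  "enabled": true,
  "handwriting": {
    "enabled": true,
    "max_cpu_per_pod": 4
  }
}
```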
Important!
Make sure the SQL account specified in the connection strings is granted the `db_owner` role for all Automation Suite databases. If security restrictions do not allow the use of `db_owner`, the SQL account should have the following roles and permissions on all databases:
- `db_ddladmin`
- `db_datawriter`
- `db_datareader`
- `EXECUTE` permission on the dbo schema
Orchestrator-specific configuration
Orchestrator can save robot logs to an Elasticsearch server. You can configure this functionality in the `orchestrator.orchestrator_robot_logs_elastic` section. If not provided, robot logs are saved to Orchestrator's database.

The following table lists the `orchestrator.orchestrator_robot_logs_elastic` fields:
Parameter | Description
---|---
`elastic_uri` | The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required.
`elastic_auth_username` | The Elasticsearch username, used for authentication.
`elastic_auth_password` | The Elasticsearch password, used for authentication.
Example:
"orchestrator": {
"orchestrator_robot_logs_elastic": {
"elastic_uri": "https://elastic.example.com:9200",
"elastic_auth_username": "elastic-user",
"elastic_auth_password": "elastic-password"
}
}
Insights-specific configuration
If you enable Insights, you can include an SMTP server configuration that will be used to send scheduled emails and alert emails. If not provided, scheduled emails and alert emails do not function.

The following table describes the `insights.smtp_configuration` fields:
Parameter | Description
---|---
`tls_version` | TLS version used for the SMTP connection; valid values include `TLSv1_2` (see the sample below).
`from_email` | Address that alert/scheduled emails will be sent from.
`host` | Hostname of the SMTP server.
`port` | Port of the SMTP server.
`username` | Username for SMTP server authentication.
`password` | Password for SMTP server authentication.
Example:
"insights": {
"enabled": true,
"smtp_configuration": {
"tls_version": "TLSv1_2",
"from_email": "[email protected]",
"host": "smtp.sendgrid.com",
"port": 587,
"username": "login",
"password": "password123"
}
}
Monitoring configuration
To provision enough resources for monitoring (see Using the monitoring stack), consider the number of vCPUs in the cluster and the desired metric retention period.

The following table describes the monitoring field details:
Parameter | Description
---|---
`prometheus_retention` | Metric retention period, in days (for example, `14`).
`prometheus_storage_size` | Prometheus storage size, in GB (for example, `104`). If Prometheus starts to run out of storage space, an alert is triggered with specific remediation instructions. See here.
`prometheus_memory_limit` | Prometheus memory limit, in MB (for example, `16000`). If Prometheus starts to run out of memory, an alert is triggered with specific remediation instructions. See here.
Example:
"monitoring": {
"prometheus_retention": 14,
"prometheus_memory_limit": 16000,
"prometheus_storage_size": 104
}
Optional: Configuring the proxy server
Note:
Make sure you meet the proxy server requirements before configuring the proxy server during installation.
While running the interactive installer wizard, you need to exit it and update `cluster_config.json` during the advanced configuration step.

Add the following to the configuration file using vim or your favorite editor:
"proxy": {
"enabled": "true",
"http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"no_proxy": "alm.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<named server address>,<metadata server address>,10.0.0.0/8,<private_subnet_ip>,<sql server host>,<comma-separated list of IPs that should not go through the proxy server>"
}
Mandatory parameters | Description
---|---
`enabled` | Use `true` to enable the proxy settings.
`http_proxy` | Used to route HTTP outbound requests from the cluster. This should be the proxy server FQDN and port.
`https_proxy` | Used to route HTTPS outbound requests from the cluster. This should be the proxy server FQDN and port.
`no_proxy` | Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format that you do not want to route via the proxy server. This should include the private subnet range, the SQL server host, the named server address, and the metadata server address.
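With the placeholders filled in, a complete proxy block could look like the following sketch (all addresses are hypothetical):

```json
"proxy": {
  "enabled": "true",
  "http_proxy": "http://10.0.0.5:3128",
  "https_proxy": "http://10.0.0.5:3128",
  "no_proxy": "alm.automationsuite.example.com,10.0.0.4:9345,10.0.0.4:6443,10.0.0.0/8,sqlserver.example.com"
}
```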
Optional: Enabling resilience to zonal failures in a multi-node HA-ready production cluster
To enable resilience to zonal failures in a multi-node cluster, take the following steps:
- Make sure nodes are spread evenly across three availability zones. For a bare-metal server or VM provided by any vendor except for AWS, Azure, or GCP, zone metadata has to be provided via the configuration file at `/etc/default/k8s-node-labels` on every machine, in the following format:

NODE_REGION_LABEL=<REGION_NAME>
NODE_ZONE_LABEL=<ZONE_NAME>
cat > /etc/default/k8s-node-labels <<EOF
EXTRA_K8S_NODE_LABELS="topology.kubernetes.io/region=${NODE_REGION_LABEL},topology.kubernetes.io/zone=${NODE_ZONE_LABEL}"
EOF
- Update the `cluster_config.json` file during the advanced configuration step.
To update `cluster_config.json` using the interactive installation wizard, exit it at the advanced configuration step and add the following to the configuration file using vim or your favorite editor:
"zone_resilience": true
Mandatory parameters | Description
---|---
`zone_resilience` | Use `true` to enable resilience to zonal failures.
Optional: Passing custom resolv.conf

The Kubernetes cluster that Automation Suite deploys uses the name servers configured in `/etc/resolv.conf`. Kubernetes does not work with local DNS resolvers (127.0.0.1 or 127.0.0.0/8), so if you have such name servers configured in the `/etc/resolv.conf` file, you need to pass a reference to a file with the correct name server entries, accessible from anywhere on the VM, in the `.infra.custom_dns_resolver` parameter in `cluster_config.json`.
For details on a known limitation, see Kubernetes documentation.
Optional parameters | Description
---|---
`.infra.custom_dns_resolver` | Path to the file with correct name server entries that can be accessed from anywhere on the VM. These name server entries must not be from `127.0.0.0/8`.
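For example, if you placed a cleaned-up resolver file at `/opt/custom-resolv.conf` (a hypothetical path pointing to a file that contains only non-local name server entries), the corresponding fragment would be:

```json
"infra": {
  "custom_dns_resolver": "/opt/custom-resolv.conf"
}
```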