About the configuration file
The `cluster_config.json` file defines the parameters, settings, and preferences applied to the UiPath services. You need to update this file if you want to change defaults and use any advanced configuration for your cluster.

To edit `cluster_config.json`, you can use either:

- a Linux text editor, such as vi or GNU nano, directly on the Linux machine via SSH (e.g., the command `vi cluster_config.json`);
- your favorite text editor, copying and pasting the file onto the machine afterwards.
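Because the installer parses this file as JSON, a syntax error (such as a missing comma or quote) is likely to make the installation fail early. As a quick sanity check before running the installer, you can validate the file; a minimal sketch, assuming Python 3 is available on the machine:

```bash
# Validate that cluster_config.json is well-formed JSON; prints a confirmation on success
python3 -m json.tool cluster_config.json > /dev/null && echo "cluster_config.json is valid JSON"
```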
General Configuration

To enable or disable a service in the `cluster_config.json` file, use true or false for its enabled flag.
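For example, a service block with its enabled flag might look like the following minimal sketch, mirroring the aicenter sample shown later on this page (other service names follow the same pattern):

```json
{
  "aicenter": {
    "enabled": true
  }
}
```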
Mandatory parameters | Description |
---|---|
fqdn | The load balancer (multi-node) or machine (single-node) domain name. |
fixed_rke_address | Fixed address used to load balance node registration and kube API requests. This should be the fqdn if the load balancer is configured as recommended; otherwise, the FQDN of the first server node. Can be the IP/FQDN of the first rke2 server in your setup. |
multinode | Set to true if going for a multi-node installation. |
admin_username | The username that you would like to set as admin (such as: admin) for the host organization. |
admin_password | The host tenant admin password to be set. |
rke_token | Use a newly generated GUID here. This is a pre-shared, cluster-specific secret. It is needed for all the nodes joining the cluster. |
profile | Sets the profile of the installation. The available profiles are default (single-node evaluation) and ha (multi-node HA-ready production). |
gpu_support | true or false - enable or disable GPU support for the cluster. Set to true if you have agent nodes with GPUs. By default, it is set to false. |
infra.docker_registry.username | The username that you would like to set for the docker registry installation. |
infra.docker_registry.password | The password that you would like to set for the docker registry installation. |
Optional parameters | Description |
---|---|
telemetry_optout | true or false - used to opt out of sending telemetry back to UiPath. It is set to false by default. If you wish to opt out, set it to true. |
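The sketch below shows how these general parameters might look together in `cluster_config.json`. All values are placeholders to replace with your own; the GUID, domain names, and credentials are illustrative, and the exact quoting of boolean values should match the file generated for your installation:

```json
{
  "fqdn": "automationsuite.example.com",
  "fixed_rke_address": "automationsuite.example.com",
  "multinode": "true",
  "admin_username": "admin",
  "admin_password": "<host admin password>",
  "rke_token": "6e0b3a52-4d2f-4a3e-9f5b-2c8d1e7a9b10",
  "profile": "ha",
  "gpu_support": "false",
  "telemetry_optout": "false",
  "infra": {
    "docker_registry": {
      "username": "<registry username>",
      "password": "<registry password>"
    }
  }
}
```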
AI Center parameters | Description |
---|---|
sql_connection_str | The SQL connection string. |
orchestrator_url | The Orchestrator URL address. |
identity_server_url | The Identity Server URL address. |
orchestrator_cert_file_path | The absolute path to the Orchestrator certificate file. |
identity_cert_file_path | The absolute path to the Identity Server certificate file. |
identity_access_token | The access token taken from Identity Server. |
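A sketch of how these parameters might be grouped under the aicenter key, mirroring the sample connection string shown later on this page; the URLs, file paths, and token below are placeholders:

```json
{
  "aicenter": {
    "enabled": true,
    "sql_connection_str": "<AI Center JDBC connection string>",
    "orchestrator_url": "https://orchestrator.example.com",
    "identity_server_url": "https://orchestrator.example.com/identity",
    "orchestrator_cert_file_path": "/opt/installer/orchestrator.cer",
    "identity_cert_file_path": "/opt/installer/identity.cer",
    "identity_access_token": "<access token obtained from Identity Server>"
  }
}
```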
Setting Certificates

To obtain a certificate, refer to the prerequisite documentation.

If no certificate is provided at the time of installation, the installer creates a self-issued certificate and configures it in the cluster. The validity of the certificate is 90 days.

In multi-node installations, a certificate is required only on the first node.

Run `pwd` to get the path of the directory where the certificate files are placed, and append the certificate file name to that path when filling in `cluster_config.json`.
Parameter | Description |
---|---|
server_certificate.ca_cert_file | Absolute path to the Certificate Authority certificate. This certificate is the authority that signs the TLS certificate. In case of a self-signed certificate, this is the rootCA.crt created in earlier steps. Leave blank if you want the installer to generate it. |
server_certificate.tls_cert_file | Absolute path to the TLS certificate (server.crt for the self-signed certificate created in earlier steps). Leave blank if you want the installer to generate it. |
server_certificate.tls_key_file | Absolute path to the certificate key (server.key for the self-signed certificate created in earlier steps). Leave blank if you want the installer to generate it. |
additional_ca_certs | Absolute path to the file containing additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file should be in valid PEM format. For example, you need to provide the file containing the SQL server CA certificate if the certificate is not issued by a public certificate authority. |
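A sketch of the corresponding certificate block in `cluster_config.json`, assuming the certificate files were copied to a directory on the machine (run `pwd` in that directory to obtain the absolute path); the paths below are illustrative, and the values can be left empty if you want the installer to generate a self-signed certificate:

```json
{
  "server_certificate": {
    "ca_cert_file": "/home/admin/certs/rootCA.crt",
    "tls_cert_file": "/home/admin/certs/server.crt",
    "tls_key_file": "/home/admin/certs/server.key"
  },
  "additional_ca_certs": "/home/admin/certs/additional-ca-certs.pem"
}
```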
Database configuration

Automatically create the necessary databases

If you want the installer to create the databases, fill in the following fields:

Parameter | Description |
---|---|
sql.create_db | Set to true. |
sql.server_url | FQDN of the SQL server where you want the installer to configure the database. |
sql.port | Port number on which a database instance should be hosted on the SQL server. |
sql.username | Username / user ID to connect to the SQL server. |
sql.password | Password of the username provided earlier to connect to the SQL server. |

Make sure the SQL user has the dbcreator role. This grants the user permission to create the database in SQL Server. Otherwise, the installation fails.

ODBC connections do not support usernames that contain special characters. For database usernames for AI Center and Document Understanding, use only uppercase and lowercase letters.
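A sketch of the sql block for this scenario; the server name and credentials are placeholders (testadmin is reused from the connection string examples below), and the user must hold the dbcreator role as noted above:

```json
{
  "sql": {
    "create_db": true,
    "server_url": "sfdev1804627-c83f074b-sql.database.windows.net",
    "port": "1433",
    "username": "testadmin",
    "password": "<SQL password>"
  }
}
```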
Bring your own database

If you bring your own databases, you need to provide the SQL connection strings for every database. The following SQL connection string formats are supported:

Parameter | Description |
---|---|
sql_connection_string_template | Full ADO.NET connection string where the catalog name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. |
sql_connection_string_template_jdbc | Full JDBC connection string where the database name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. |
sql_connection_string_template_odbc | Full ODBC connection string where the database name is set to DB_NAME_PLACEHOLDER. The installer replaces this placeholder with the default database names for the installed suite services. This parameter is used by Document Understanding. |
If you manually set the connection strings in the configuration file, you can escape SQL, JDBC, or ODBC passwords as follows:

- for SQL: add `'` at the beginning and end of the password, and double any other `'`.
- for JDBC/ODBC: add `{` at the beginning of the password and `}` at the end, and double any other `}`.
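As a worked illustration of these escaping rules, a hypothetical password ab'c}d becomes 'ab''c}d' for SQL and {ab'c}}d} for JDBC/ODBC. The Python sketch below applies the same two rules; the function names are illustrative and not part of the installer:

```python
def escape_sql_password(password: str) -> str:
    # SQL rule: wrap in single quotes and double any embedded single quote.
    return "'" + password.replace("'", "''") + "'"

def escape_jdbc_odbc_password(password: str) -> str:
    # JDBC/ODBC rule: wrap in curly braces and double any embedded closing brace.
    return "{" + password.replace("}", "}}") + "}"

print(escape_sql_password("ab'c}d"))        # 'ab''c}d'
print(escape_jdbc_odbc_password("ab'c}d"))  # {ab'c}}d}
```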
sql_connection_string_template example:

Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net:1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;

sql_connection_string_template_jdbc example:

jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net:1433;database=DB_NAME_PLACEHOLDER;user=testadmin;password=***;encrypt=true;trustServerCertificate=false;Connection Timeout=30;hostNameInCertificate=sfdev1804627-c83f074b-sql.database.windows.net

sql_connection_string_template_odbc example:

SERVER=sfdev1804627-c83f074b-sql.database.windows.net,1433;DATABASE=DB_NAME_PLACEHOLDER;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=***;MultipleActiveResultSets=False;Encrypt=YES;TrustServerCertificate=NO;Connection Timeout=30;
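How these templates sit in `cluster_config.json` is sketched below, assuming they are top-level keys (their names carry no parent prefix, unlike infra.docker_registry.username, for example); each value would be the corresponding full connection string from the examples above:

```json
{
  "sql_connection_string_template": "<full ADO.NET connection string, as in the example above>",
  "sql_connection_string_template_jdbc": "<full JDBC connection string, as in the example above>",
  "sql_connection_string_template_odbc": "<full ODBC connection string, as in the example above>"
}
```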
AI Center also requires the aicenter SQL configuration parameter:

Parameter | Description |
---|---|
aicenter.ai_appmanager.sql_connection_str | AI Center app manager JDBC connection string (see the sample JDBC connection string below). |
Sample AI Center connection string
"aicenter": {
"enabled": true,
"sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;instanceName=instance;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}
"aicenter": {
"enabled": true,
"sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;instanceName=instance;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}
Sample DU connection string
"documentunderstanding":
{
"datamanager": {
"sql_connection_str": "mssql+pyodbc://testadmin:myPassword@mydev-sql.database.windows.net:1433/datamanager?driver=ODBC+Driver+17+for+SQL+Server",
}
"documentunderstanding":
{
"datamanager": {
"sql_connection_str": "mssql+pyodbc://testadmin:myPassword@mydev-sql.database.windows.net:1433/datamanager?driver=ODBC+Driver+17+for+SQL+Server",
}
Make sure the SQL account used in the connection strings has the db_owner role for all databases. If security restrictions do not allow the use of db_owner, then the SQL account should have the following roles and permissions on all databases:

- db_ddladmin
- db_datawriter
- db_datareader
- EXECUTE permission on the dbo schema
Monitoring configuration

To provision enough resources for monitoring (see Using the Monitoring Stack), consider the number of vCPUs in the cluster and the desired amount of metric retention. The following table describes how to set the monitoring resource configurations:

Parameter | Description |
---|---|
prometheus_retention | In days. The number of days that metrics are retained for the purpose of visualization in Grafana and manual querying via the Prometheus console. Default value is 7. |
prometheus_storage_size | In GB. Amount of storage space to reserve per Prometheus replica. A good rule of thumb is to scale this value with the number of vCPU cores in the cluster and the configured retention; for example, a cluster with 80 cores spread across 5 machines and prometheus_retention set to 14 days needs noticeably more than the default. Default value is 45 and should not be set lower. If Prometheus starts to run out of storage space, an alert is triggered with specific remediation instructions. |
prometheus_memory_limit | In MB. Amount of memory to limit each Prometheus replica to. A good rule of thumb is to scale this value with the number of vCPU cores in the cluster and the configured retention; for example, a cluster with 80 cores spread across 5 machines and prometheus_retention set to 14 days needs noticeably more than the default. Default value is 3200 for the single-node evaluation mode and 6000 for the multi-node HA-ready production mode, and should not be set lower. If Prometheus starts to run out of memory, an alert is triggered with specific remediation instructions. |
Example:
"monitoring": {
"prometheus_retention": 14,
"prometheus_memory_limit": 16000,
"prometheus_storage_size": 104
}
"monitoring": {
"prometheus_retention": 14,
"prometheus_memory_limit": 16000,
"prometheus_storage_size": 104
}
Optional: Adding proxy configuration

To add proxy configuration, update `cluster_config.json` during the advanced configuration step. You need to add the following to the configuration file using vim or your favorite editor:
"proxy": {
"enabled": "true",
"http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"no_proxy": "<Comma separated list of ips that should not got though proxy server>"
}
"proxy": {
"enabled": "true",
"http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
"no_proxy": "<Comma separated list of ips that should not got though proxy server>"
}
- Allow inbound and outbound calls on port 30070 on the VM.

Mandatory parameters | Description |
---|---|
enabled | Use true or false to enable or disable proxy settings. |
http_proxy | Used to route HTTP outbound requests from the cluster. This should be the proxy server FQDN and port. |
https_proxy | Used to route HTTPS outbound requests from the cluster. This should be the proxy server FQDN and port. |
no_proxy | Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format that you do not want to route via the proxy server. This should include the private subnet range, the SQL server host, the named server address, and the metadata server address: *.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<private_subnet_ip>,<sql server host>,<named server address>,<metadata server address>. |

Where:

- fqdn - the cluster FQDN defined in cluster_config.json
- fixed_rke_address - the fixed_rke_address defined in cluster_config.json
- named server address - the IP address from /etc/resolv.conf
- private_subnet_ip - the cluster VNet
- sql server host - the SQL server host
- metadata server address - the IP address 169.254.169.254 used to fetch machine metadata by cloud services such as Azure and AWS
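Putting the list above together, a filled-in proxy block might look like the following sketch; the proxy address, subnet, and server names are placeholders for your own environment:

```json
{
  "proxy": {
    "enabled": "true",
    "http_proxy": "http://proxy.example.com:3128",
    "https_proxy": "http://proxy.example.com:3128",
    "no_proxy": "*.automationsuite.example.com,automationsuite.example.com:9345,automationsuite.example.com:6443,10.0.0.0/16,sqlserver.example.com,10.0.0.2,169.254.169.254"
  }
}
```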
Optional: Enabling resilience to zonal failures in a multi-node HA-ready production cluster

To enable resilience to zonal failures in a multi-node cluster, take the following steps:

- Make sure nodes are spread evenly across three availability zones. For a bare-metal server or a VM provided by any vendor except AWS, Azure, or GCP, zone metadata has to be provided via the configuration file at /etc/default/k8s-node-labels on every machine, in the following format:

  NODE_REGION_LABEL=<REGION_NAME>
  NODE_ZONE_LABEL=<ZONE_NAME>
  cat > /etc/default/k8s-node-labels <<EOF
  EXTRA_K8S_NODE_LABELS="topology.kubernetes.io/region=${NODE_REGION_LABEL},topology.kubernetes.io/zone=${NODE_ZONE_LABEL}"
  EOF

- Update the cluster_config.json file during the advanced configuration step.
If you generate `cluster_config.json` using the interactive installation wizard, exit at the advanced configuration step and add the following to the configuration file using vim or your favorite editor:

"zone_resilience": true
Mandatory parameters | Description |
---|---|
zone_resilience | Use true or false to enable or disable resilience to zonal failure. |
Optional: Passing custom Resolv.conf

Kubernetes uses the name servers configured in /etc/resolv.conf. Kubernetes does not work with local DNS resolvers (127.0.0.1 or 127.0.0.0/8), so if you have such name servers configured in the /etc/resolv.conf file, you need to pass a file reference with the correct nameserver entries accessible from anywhere on the VM in the .infra.custom_dns_resolver parameter in `cluster_config.json`.
For details on a known limitation, see Kubernetes documentation.
Optional parameter | Description |
---|---|
.infra.custom_dns_resolver | Path to the file with correct name server entries that can be accessed from anywhere on the VM. These name server entries must not be from 127.0.0.0/8. |
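A sketch of how this could look, assuming a resolver file with reachable (non-loopback) nameserver entries, for example a line such as nameserver 10.0.0.2, has been placed on the machine; the path and address are placeholders:

```json
{
  "infra": {
    "custom_dns_resolver": "/opt/custom-resolv.conf"
  }
}
```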