
Automation Suite Installation Guide

Last updated Dec 16, 2024

Manual: Advanced installation experience

The cluster_config.json file defines the parameters, settings, and preferences applied to the UiPath products deployed via Automation Suite. You must update this file to change defaults and use any advanced configuration for your cluster.
To edit cluster_config.json, you can use either:
  • a Linux text editor, such as vi or GNU nano, directly on the Linux machine via SSH (for example, vi cluster_config.json);
  • your favorite text editor, then copy-pasting the file onto the machine.
The cluster_config.json file allows you to configure the UiPath products you want to deploy. Be aware that products may have dependencies. For details, see Cross-product dependencies.
To enable or disable a product via the cluster_config.json file, set its enabled flag to true or false.
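For example, the following fragment keeps Orchestrator enabled and disables Apps (a minimal sketch; the surrounding fields are omitted):

"orchestrator": {
  "enabled": true
},
"apps": {
  "enabled": false
}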

cluster_config.json sample

{
  "fqdn": "PLACEHOLDER",
  "cluster_fqdn": "PLACEHOLDER",
  "fixed_rke_address": "PLACEHOLDER",
  "admin_username": "PLACEHOLDER",
  "admin_password": "PLACEHOLDER",
  "rke_token": "PLACEHOLDER ",
  "zone_resilience": false,
  "registries": {
    "docker": {
      "url": "registry.uipath.com"
    },
    "helm": {
      "url": "registry.uipath.com"
    }
  },
  "sql_connection_string_template": "PLACEHOLDER",
  "sql_connection_string_template_jdbc": "PLACEHOLDER",
  "sql_connection_string_template_odbc": "PLACEHOLDER",
  "sql_connection_string_template_sqlalchemy_pyodbc": "PLACEHOLDER",
  "orchestrator": {
    "testautomation": {
      "enabled": true
    },
    "updateserver": {
      "enabled": true
    },
    "enabled": true
  },
  "infra": {
    "docker_registry": {
      "username": " PLACEHOLDER ",
      "password": " PLACEHOLDER "
   },  
     "pod_log_path": ""
  },
  "platform": {
    "enabled": true
  },
  "automation_hub": {
    "enabled": true
  },
  "automation_ops": {
    "enabled": true
  },
  "action_center": {
    "enabled": true
  },
  "aicenter": {
    "enabled": true
  },
  "documentunderstanding": {
    "enabled": true,
    "datamanager": {}
  },
  "task_mining": {
    "enabled": true
  },
  "apps": {
    "enabled": true
  },
  "test_manager": {
    "enabled": true
  },
  "insights": {
    "enabled": true
  },
  "dataservice": {
    "enabled": true
  },
  "asrobots": {
    "enabled": true,
    "packagecaching": true,
    "packagecachefolder": "/uipath_asrobots_package_cache"
  },
  "processmining": {
    "enabled": true
  },
  "external_object_storage": {
    "enabled": false
  },
  "identity_certificate": {},
  "profile": "ha",
  "telemetry_optout": false,
  "alternative_fqdn": "",
  "server_certificate": {
    "ca_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/rootCA.crt",
    "tls_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.crt",
    "tls_key_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.key"
  },
  "alternative_certificate": {
    "ca_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-rootCA.crt",
    "tls_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-server.crt",
    "tls_key_file": "/opt/UiPathAutomationSuite/UiPath_Installer/alt-server.key"
  }
}

General configuration

Mandatory parameters

Description

fqdn

The load balancer (multi-node HA-ready production mode) or machine (single-node evaluation mode) domain name.

fixed_rke_address

Fixed address used to load balance node registration and kube API requests. This should be the same as fqdn if the load balancer is configured as recommended; otherwise, use the FQDN of the first server node. Refer to Configuring the load balancer.

Can be the IP/FQDN of the first rke2 server in your setup.

multinode

Set to true when choosing a multi-node HA-ready production profile. The value of this flag is set automatically by the interactive installer. It is used for internal purposes only and should not be modified manually.

admin_username

The username that you would like to set as admin (such as admin) for the host organization.

admin_password

The host admin password to be set.

rke_token

Use a newly generated GUID here. This is a pre-shared, cluster-specific secret. It is needed for all the nodes joining the cluster.

profile

Sets the profile of the installation. The available profiles are:

  • default: single-node evaluation profile.
  • ha: multi-node HA-ready production profile.

The value of this flag is set automatically by the interactive installer. It is used for internal purposes only and should not be modified manually.

infra.docker_registry.username

The username that you would like to set for the docker registry installation.

infra.docker_registry.password

The password that you would like to set for the docker registry installation.

Optional parameters

Description

telemetry_optout

true or false - used to opt out of sending telemetry back to UiPath. It is set to false by default.
If you wish to opt out, set it to true.

infra.pod_log_path

Enables you to change the /var/log/pods default path of the pod logs to a custom path of your choice.
Note:
Updating the log path discards the logs of existing containers from /var/log/pods.
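For example, the following fragment opts out of telemetry and moves the pod logs to a custom path (the /uipath/pod-logs path is a hypothetical example):

"telemetry_optout": true,
"infra": {
  "pod_log_path": "/uipath/pod-logs"
}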

Certificate configuration

If no certificate is provided at the time of installation, the installer creates self-issued certificates and configures them in the cluster.

The validity of the self-signed certificates is 90 days.

For details on how to obtain a certificate, see the following:

Note:
Make sure to specify the absolute path for the certificate files. Run pwd to get the path of the directory where files are placed and append the certificate file name to the cluster_config.json.

In multi-node HA-ready production installations, a certificate is required only on the first node.

Parameter

Description

server_certificate.ca_cert_file

Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. A CA bundle must contain only the chain certificates used to sign the TLS certificate. The chain limit is nine certificates.

If you use a self-signed certificate, you must specify the path to rootCA.crt, which you previously created. Leave blank if you want the installer to generate it.

server_certificate.tls_cert_file

Absolute path to the TLS certificate (server.crt is the self-signed certificate). Leave blank if you want the installer to generate it.

server_certificate.tls_key_file

Absolute path to the certificate key (server.key is the self-signed certificate). Leave blank if you want the installer to generate it.

identity_certificate.token_signing_cert_file

Absolute path to the identity token signing certificate used to sign tokens (identity.pfx is the self-signed certificate). Leave blank if you want the installer to generate an identity certificate using the server certificate.

identity_certificate.token_signing_cert_pass

Plain text password set when exporting the identity token signing certificate.

additional_ca_certs

Absolute path to the file containing the additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file must be in valid PEM format.

For example, you need to provide the file containing the SQL server CA certificate if the certificate is not issued by a public certificate authority.
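For example, assuming additional_ca_certs is set at the top level of cluster_config.json and an internal SQL Server CA exported to a hypothetical additional-ca.pem file, the certificate configuration might look as follows:

"server_certificate": {
  "ca_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/rootCA.crt",
  "tls_cert_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.crt",
  "tls_key_file": "/opt/UiPathAutomationSuite/UiPath_Installer/server.key"
},
"additional_ca_certs": "/opt/UiPathAutomationSuite/UiPath_Installer/additional-ca.pem"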

Database configuration

Database creation workflow

The interactive installer automatically creates databases using the following workflow:

  1. The interactive installer script checks the value of the sql.create_db parameter in the cluster_config.json file.
    • If the sql.create_db parameter is set to true, the installer automatically generates all the databases on your behalf. In this case, the installer uses the default database names and default templates, and ignores any custom database names you provided.
    • If sql.create_db is set to false, you must bring your own databases. In this case, you must manually set up your databases. Note that you can use custom database names, provided that you follow the recommended naming conventions. This step is critical because we use the database name in conjunction with the connection template to form the database connection string. If you do not follow the recommended naming convention, you must provide the SQL connection strings yourself.

      For details, see Bring your own databases.

  2. The interactive installer generates the connection strings as follows:

    • If you did not define a connection string for your service, the installer uses the connection template to generate all database connection strings.

    • If you defined a connection string for your service, the installer uses the provided connection string for your database.

Automatically Create the Necessary Databases

If you want the installer to create the databases, fill in the following fields:

Parameter

Description

sql.create_db

Set to true to allow the installer to create the databases. Note that the installer uses the default database names and default templates, and ignores any custom database names you provided.

sql.server_url

FQDN of the SQL server where you want the installer to configure the database.

sql.port

Port number on which a database instance should be hosted in the SQL server.

sql.username

Username / userid to connect to the SQL server.

sql.password

Password of the username provided earlier to connect to the SQL server.

Note:
Ensure the user has the dbcreator role. This grants them permission to create the database in SQL Server. Otherwise, the installation fails.

ODBC connection does not support usernames that contain special characters. For database usernames for AI Center and Document Understanding, use only uppercase and lowercase letters.
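Assuming these parameters map to an sql block in cluster_config.json, a minimal sketch might look like this (the server FQDN is a hypothetical example; verify the exact field names and types against your installer version):

"sql": {
  "create_db": true,
  "server_url": "sql.example.com",
  "port": "1433",
  "username": "PLACEHOLDER",
  "password": "PLACEHOLDER"
}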

Bring Your Own Database

If you choose to bring your own databases for a new Automation Suite installation, we strongly recommend setting up new databases rather than using existing ones. This precaution is necessary to prevent any conflicts with the operation of Automation Suite that might occur due to leftover metadata from old databases.

If you bring your own database, you must provide the SQL connection strings for every database. Automation Suite supports the following SQL connection string formats:

Parameter

Description

Products

sql_connection_string_template

Full ADO.NET connection string where Catalog name is set to DB_NAME_PLACEHOLDER. The installer will replace this placeholder with the default database names for the installed suite services.

Platform, Orchestrator, Automation Suite Robots, Test Manager, Automation Hub, Automation Ops, Insights, Task Mining, Data Service, Process Mining, Document Understanding

sql_connection_string_template_jdbc

Full JDBC connection string where database name is set to DB_NAME_PLACEHOLDER. The installer will replace this placeholder with the default database names for the installed suite services.

AI Center

sql_connection_string_template_odbc

Full ODBC connection string where database name is set to DB_NAME_PLACEHOLDER. The installer will replace this placeholder with the default database names for the installed suite services.

Document Understanding

sql_connection_string_template_sqlalchemy_pyodbc

Full SQL alchemy PYODBC connection string where database name is set to DB_NAME_PLACEHOLDER. The installer will replace this placeholder with the default database names for the installed suite services.

Process Mining

Important:
Make sure the SQL account specified in the connection strings is granted the db_securityadmin and db_owner roles for all Automation Suite databases. If security restrictions do not allow the use of db_owner, then the SQL account should have the following roles and permissions on all databases:
  • db_ddladmin
  • db_datawriter
  • db_datareader
  • EXECUTE permission on dbo schema
Important:

If you manually set the connection strings in the configuration file, you can escape SQL, JDBC, ODBC, or PYODBC passwords as follows:

  • for SQL: add ' at the beginning and end of the password, and double any other '.
  • for JDBC/ODBC: add { at the beginning of the password and } at the end, and double any other }.
  • for PYODBC: username and password should be url encoded to account for special characters.
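For example, assuming a hypothetical password my}pass': for SQL, wrap it in single quotes and double the embedded ', giving 'my}pass'''; for JDBC/ODBC, wrap it in braces and double the embedded }, giving {my}}pass'}.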
Important: The AutomationSuite_ProcessMining_Airflow database for Process Mining product must have READ_COMMITTED_SNAPSHOT enabled.
Note:
By default, TrustServerCertificate is set to False, and you must provide an additional CA certificate for the SQL Server. This is required if the SQL Server certificate is self-signed or signed by an internal CA.

See Certificate configuration on this page for more details.

sql_connection_string_template example

Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=DB_NAME_PLACEHOLDER;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;

sql_connection_string_template_jdbc example

jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net:1433;database=DB_NAME_PLACEHOLDER;user=testadmin;password=***;encrypt=true;trustServerCertificate=false;Connection Timeout=30;hostNameInCertificate=sfdev1804627-c83f074b-sql.database.windows.net

sql_connection_string_template_odbc example

SERVER=sfdev1804627-c83f074b-sql.database.windows.net,1433;DATABASE=DB_NAME_PLACEHOLDER;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=***;MultipleActiveResultSets=False;Encrypt=YES;TrustServerCertificate=NO;Connection Timeout=30;

sql_connection_string_template_sqlalchemy_pyodbc example

mssql+pyodbc://testuser%40sfdev3082457-sql.database.windows.net:_-%29X07_%5E3-%28%3B%25e-T@sfdev3082457-sql.database.windows.net:1433/DB_NAME_PLACEHOLDER?driver=ODBC+Driver+17+for+SQL+Server

Default and optional DB names for Automation Suite services

{
  "orchestrator": "AutomationSuite_Orchestrator",
  "orchestrator_ta": "AutomationSuite_Orchestrator",
  "asrobots": "AutomationSuite_Orchestrator",
  "orchestrator_upd": "AutomationSuite_Platform",
  "platform": "AutomationSuite_Platform",
  "test_manager": "AutomationSuite_Test_Manager",
  "automation_ops": "AutomationSuite_Platform",
  "automation_hub": "AutomationSuite_Automation_Hub",
  "insights": "AutomationSuite_Insights",
  "task_mining": "AutomationSuite_Task_Mining",
  "dataservice": "AutomationSuite_DataService", 
  "aicenter": "AutomationSuite_AICenter",
  "documentunderstanding": "AutomationSuite_DU_Datamanager",
  "processmining_airflow": "AutomationSuite_Airflow",
  "processmining_metadata": "AutomationSuite_ProcessMining_Metadata",
  "processmining_warehouse": "AutomationSuite_ProcessMining_Warehouse",
    
}{
  "orchestrator": "AutomationSuite_Orchestrator",
  "orchestrator_ta": "AutomationSuite_Orchestrator",
  "asrobots": "AutomationSuite_Orchestrator",
  "orchestrator_upd": "AutomationSuite_Platform",
  "platform": "AutomationSuite_Platform",
  "test_manager": "AutomationSuite_Test_Manager",
  "automation_ops": "AutomationSuite_Platform",
  "automation_hub": "AutomationSuite_Automation_Hub",
  "insights": "AutomationSuite_Insights",
  "task_mining": "AutomationSuite_Task_Mining",
  "dataservice": "AutomationSuite_DataService", 
  "aicenter": "AutomationSuite_AICenter",
  "documentunderstanding": "AutomationSuite_DU_Datamanager",
  "processmining_airflow": "AutomationSuite_Airflow",
  "processmining_metadata": "AutomationSuite_ProcessMining_Metadata",
  "processmining_warehouse": "AutomationSuite_ProcessMining_Warehouse",
    
}
Note:
If you want to override the connection string for any of the services above, set the sql_connection_str for that specific service.

You still have to manually create these databases before running the installer.

Overriding the default connection string for Orchestrator and the platform

{
  "orchestrator": {
    "sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomOrchDB;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
  },
  "platform": {
    "sql_connection_str": "Server=tcp:sfdev1804627-c83f074b-sql.database.windows.net,1433;Initial Catalog=CustomIDDB;Persist Security Info=False;User Id=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;"
  }
}
To override the database connection strings for other products, set the sql_connection_str in the corresponding product blocks. The connection string should have a format supported by the respective product.

Example for setting database connection string for AI Center

Parameter

Description

aicenter.sql_connection_str

AI Center JDBC connection string (refer to the sample below for the JDBC format).

Note: Make sure the JDBC connection string has the format in the sample below.
"aicenter": {
    "enabled": true,
    "sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}"aicenter": {
    "enabled": true,
    "sql_connection_str": "jdbc:sqlserver://sfdev1804627-c83f074b-sql.database.windows.net;database=aicenter;user=testadmin@sfdev1804627-c83f074b-sql.database.windows.net;password=TFgID_9GsE7_P@srCQp0WemXX_euHQZJ"
}

Sample Document Understanding connection string

"documentUnderstanding": {
  "datamanager": {
    "sql_connection_str": "SERVER=sql-server.database.windows.net;DATABASE=datamanager;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=myPassword"
  },
  "sql_connection_str": "Server=tcp:database.example.com,1433;Initial Catalog=db;Persist Security Info=False;User Id=testadmin@example.com;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;",
  "cjkOcr": 
  {
    "enabled": true
  }
}"documentUnderstanding": {
  "datamanager": {
    "sql_connection_str": "SERVER=sql-server.database.windows.net;DATABASE=datamanager;DRIVER={ODBC Driver 17 for SQL Server};UID=testadmin;PWD=myPassword"
  },
  "sql_connection_str": "Server=tcp:database.example.com,1433;Initial Catalog=db;Persist Security Info=False;User Id=testadmin@example.com;Password=***;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Max Pool Size=100;",
  "cjkOcr": 
  {
    "enabled": true
  }
}
Note: The Data Manager SQL connection string is optional; provide it only if you want to overwrite the default with your own.

Handwriting is always enabled for online installation.

The default value for max_cpu_per_pod is 2, but you can adjust it according to your needs. For more information, check the Document Understanding configuration file.

External Objectstore configuration

General configuration

Automation Suite allows you to bring your own external storage provider. You can choose from the following storage providers:

  • Azure
  • AWS
  • S3-compatible

You can configure the external object storage in one of the following ways:

  • during installation, using the interactive installer;
  • post-installation, using the cluster_config.json file.
Note:
  • For Automation Suite to function properly when using pre-signed URLs, you must make sure that your external objectstore is accessible from the Automation Suite cluster, browsers, and all your machines, including workstations and robot machines.
  • The Server Side Encryption with Key Management Service (SSE-KMS) can only be enabled on the Automation Suite buckets deployed in any region created after January 30, 2014.

    SSE-KMS functionality requires pure SignV4 APIs. Regions created before January 30, 2014 do not use pure SignV4 APIs due to backward compatibility with SignV2. Therefore, SSE-KMS is only functional in regions that use SignV4 for communication. To find out when the various regions were provisioned, refer to the AWS documentation.

The following table lists out the cluster_config.json parameters you can use to configure each provider of external object storage:

Parameter

Description

external_object_storage.enabled

Applies to: Azure, AWS, S3-compatible.
Specify whether you would like to bring your own object store. Possible values: true and false.

external_object_storage.create_bucket

Applies to: Azure, AWS, S3-compatible.
Specify whether you would like to provision the bucket. Possible values: true and false.

external_object_storage.storage_type

Applies to: Azure, AWS, S3-compatible.
Specify the storage provider you would like to configure. The value is case-sensitive. Possible values: azure and s3.
Note: Many S3 objectstores require CORS to be set to allow all the traffic from the Automation Suite cluster. You must configure the CORS policy at the objectstore level to allow the FQDN of the cluster.

external_object_storage.fqdn

Applies to: AWS, S3-compatible.
Specify the FQDN of the S3 server. Required in the case of the AWS instance and non-instance profile.

external_object_storage.port

Applies to: AWS, S3-compatible.
Specify the S3 port. Required in the case of the AWS instance and non-instance profile.

external_object_storage.region

Applies to: AWS, S3-compatible.
Specify the AWS region where buckets are hosted. Required in the case of the AWS instance and non-instance profile.

external_object_storage.access_key

Applies to: AWS, S3-compatible.
Specify the access key for the S3 account. Only required in the case of the AWS non-instance profile.

external_object_storage.secret_key

Applies to: AWS, S3-compatible.
Specify the secret key for the S3 account. Only required in the case of the AWS non-instance profile.

external_object_storage.use_instance_profile

Applies to: AWS, S3-compatible.
Specify whether you want to use an instance profile. An AWS Identity and Access Management (IAM) instance profile grants secure access to AWS resources for applications or services running on Amazon Elastic Compute Cloud (EC2) instances. If you opt for AWS S3, an instance profile allows an EC2 instance to interact with S3 buckets without the need for explicit AWS credentials (such as access keys) to be stored on the instance.

external_object_storage.use_managed_identity

Applies to: Azure.
Use managed identity with your Azure storage account. Possible values: true and false.

external_object_storage.bucket_name_prefix 1

Applies to: AWS, S3-compatible.
Indicate the prefix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.bucket_name_suffix 2

Applies to: AWS, S3-compatible.
Indicate the suffix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.account_key

Applies to: Azure.
Specify the Azure account key. Only required when using non-managed identity.

external_object_storage.account_name

Applies to: Azure.
Specify the Azure account name.

external_object_storage.azure_fqdn_suffix

Applies to: Azure.
Specify the Azure FQDN suffix. Optional parameter.

external_object_storage.client_id

Applies to: Azure.
Specify your Azure client ID. Only required when using managed identity.

1, 2 When configuring the external object storage, you must follow the naming rules and conventions from your provider for both bucket_name_prefix and bucket_name_suffix. In addition, the suffix and prefix must have a combined length of no more than 25 characters, and you must not end the prefix or start the suffix with a hyphen (-), as the character is already added automatically.

Product-specific configuration

You can use the parameters described in the General configuration section to update the general Automation Suite configuration. In this case, all installed products share the same configuration. If you want to configure one or more products differently, you can override the general configuration: specify the product(s) for which you want to set up external object storage differently, and use the same parameters to define your configuration. All other installed products continue to inherit the general configuration.

The following example shows how you can override the general configuration for Orchestrator:

"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in case of aws non instance profile>
  "secret_key": "", // <needed in case of aws non instance profile>
  "use_managed_identity": false, // <true/false>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "", // <needed only when using non managed identity>
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
  "client_id": "" // <optional field in case of managed identity>
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "use_managed_identity": false, // <true/false>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "", // <needed only when using non managed identity>
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
    "client_id": "" // <optional field in case of managed identity>
  }
}"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in case of aws non instance profile>
  "secret_key": "", // <needed in case of aws non instance profile>
  "use_managed_identity": false, // <true/false>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "", // <needed only when using non managed identity>
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
  "client_id": "" // <optional field in case of managed identity>
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "use_managed_identity": false, // <true/false>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "", // <needed only when using non managed identity>
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
    "client_id": "" // <optional field in case of managed identity>
  }
}

Rotating the blob storage credentials for Process Mining

To rotate the blob storage credentials for Process Mining in Automation Suite, you must update the stored secrets with the new credentials. See Rotating blob storage credentials.

Orchestrator-specific Configuration

Orchestrator can save robot logs to an Elasticsearch server. You can configure this functionality in the orchestrator.orchestrator_robot_logs_elastic section. If not provided, robot logs are saved to Orchestrator's database.
The following table lists out the orchestrator.orchestrator_robot_logs_elastic parameters:

Parameter

Description

elastic_uri

The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required.

elastic_auth_username

The Elasticsearch username, used for authentication.

elastic_auth_password

The Elasticsearch password, used for authentication.

Example

"orchestrator": {
    "orchestrator_robot_logs_elastic": {
        "elastic_uri": "https://elastic.example.com:9200",
        "elastic_auth_username": "elastic-user",
        "elastic_auth_password": "elastic-password"
    }
}"orchestrator": {
    "orchestrator_robot_logs_elastic": {
        "elastic_uri": "https://elastic.example.com:9200",
        "elastic_auth_username": "elastic-user",
        "elastic_auth_password": "elastic-password"
    }
}

Insights-specific Configuration

If you enable Insights, you can include an SMTP server configuration that is used to send scheduled and alert emails. If not provided, scheduled and alert emails do not function.

The following table describes the insights.smtp_configuration fields:

Parameter

Description

tls_version

Valid values are TLSv1_2, TLSv1_1, SSLv23. Omit key altogether if not using TLS.

from_email

Address that alert/scheduled emails will be sent from.

host

Hostname of the SMTP server.

port

Port of the SMTP server.

username

Username for SMTP server authentication.

password

Password for SMTP server authentication.

Example

"insights": {
    "enabled": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "test@test.com",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }"insights": {
    "enabled": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "test@test.com",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }

Process Mining-specific Configuration

If you enable Process Mining, we recommend specifying a secondary SQL Server to act as a data warehouse, separate from the primary Automation Suite SQL Server. The data warehouse SQL Server is under heavy load and can be configured in the processmining section:

Parameter

Description

sql_connection_str

DotNet formatted connection string with database set as a placeholder: Initial Catalog=DB_NAME_PLACEHOLDER.

sqlalchemy_pyodbc_sql_connection_str

Sqlalchemy PYODBC formatted connection string for custom metadata location:

sqlServer:1433/DB_NAME_PLACEHOLDER.

warehouse.sql_connection_str

DotNet formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

Initial Catalog=DB_NAME_PLACEHOLDER.

warehouse.sqlalchemy_pyodbc_sql_connection_str

Sqlalchemy PYODBC formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

sqlServer:1433/DB_NAME_PLACEHOLDER.

warehouse.master_sql_connection_str

If the installer is creating databases through the sql.create_db: true setting, a DotNet formatted master SQL connection string must be provided for the processmining data warehouse SQL Server. The database in the connection string must be set to master.

Sample Process Mining connection string

"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "sql_connection_str": "Server=tcp:shared_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://username:password@shared_sqlserver_fqdn:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
    "warehouse": {
      "sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
      "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://useername:password@dedicated_sqlserver_fqdn:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
      "master_sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=master;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "blob_storage_account_use_presigned_uri": true
  }, "processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "sql_connection_str": "Server=tcp:shared_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://username:password@shared_sqlserver_fqdn:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
    "warehouse": {
      "sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
      "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://useername:password@dedicated_sqlserver_fqdn:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
      "master_sql_connection_str": "Server=tcp:dedicated_sqlserver_fqdn,1433;Initial Catalog=master;Persist Security Info=False;User Id=username;Password='password';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "blob_storage_account_use_presigned_uri": true
  },
Important:
You must use the default server port 1433 for the following databases:
  • warehouse.sql_connection_str
  • warehouse.sqlalchemy_pyodbc_sql_connection_str
  • warehouse.master_sql_connection_str

Non-standard SQL server ports are not supported.

Attention:

When configuring the connection strings for the processmining data warehouse SQL Server, the named instance of the SQL Server should be omitted.

Named instances of SQL Server cannot operate on the same TCP port. Therefore, the port number alone is sufficient to distinguish between instances.

For example, use tcp:server,1433 instead of tcp:server\namedinstance,1433.

Automation Suite Robots-specific configuration

Automation Suite Robots can use package caching to optimize your process runs and allow them to run faster. NuGet packages are fetched from the filesystem instead of being downloaded from the Internet/network. This requires a minimum of 10 GiB of additional disk space, allocated to a folder on the host machine filesystem of the dedicated nodes.

To enable package caching, you need to update the following cluster_config.json parameters:

Parameter

Default value

Description

packagecaching

true

When set to true, robots use a local cache for package resolution.

packagecachefolder

/uipath_asrobots_package_cache

The disk location on the serverless agent node where the packages are stored.
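A minimal sketch mirroring the cluster_config.json sample at the top of this page:

"asrobots": {
  "enabled": true,
  "packagecaching": true,
  "packagecachefolder": "/uipath_asrobots_package_cache"
}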

AI Center-specific configuration

For AI Center to function properly, you must configure the aicenter.external_object_storage.port and aicenter.external_object_storage.fqdn parameters in the cluster_config.json file.
Note: You must configure the parameters in the aicenter section of the cluster_config.json file even if you have configured the external_object_storage section of the file.
The following sample shows a valid cluster_config.json configuration for AI Center:
"aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
..."aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
...

Monitoring configuration

To provision enough resources for monitoring (see Using the monitoring stack), consider the number of vCPUs in the cluster and the desired amount of metric retention. You can set the following monitoring resource configurations as described below.

The following table describes the monitoring field details:

Parameter

Description

prometheus_retention

In days.

This is the number of days that metrics are retained for the purpose of visualization in Grafana and manual querying via the Prometheus console.

Default value is 7.

prometheus_storage_size

In GiB.

Amount of storage space to reserve per Prometheus replica.

A good rule of thumb is to set this value to:

0.65 * vCPU cores * (prometheus_retention / 7)

Example:

If you set prometheus_retention to 14 days, and your cluster has 80 cores spread across 5 machines, this becomes:

0.65 * 80 * (14 / 7) = 52 * 2 = 104

Default value is 45 and should not be set lower.

If Prometheus starts to run out of storage space, an alert is triggered with specific remediation instructions.

prometheus_memory_limit

In MiB.

Amount of memory to limit each Prometheus replica to.

A good rule of thumb is to set this value to:

100 * vCPU cores * (prometheus_retention / 7)

Example:

If you set prometheus_retention to 14 days, and your cluster has 80 cores spread across 5 machines, this becomes:

100 * 80 * (14 / 7) = 8000 * 2 = 16000

Default value is 3200 for the single-node evaluation mode, and 6000 for the multi-node HA-ready production mode, and should not be set lower.

If Prometheus starts to run out of memory, an alert is triggered with specific remediation instructions.

Example

"monitoring": {
  "prometheus_retention": 14,
  "prometheus_memory_limit": 16000,
  "prometheus_storage_size": 104
}"monitoring": {
  "prometheus_retention": 14,
  "prometheus_memory_limit": 16000,
  "prometheus_storage_size": 104
}

Optional: Configuring the proxy server

Note:

Make sure you meet the proxy server requirements before configuring the proxy server during installation.

While running the interactive installer wizard, you need to exit it and update the cluster_config.json during the advanced configuration step.

Add the following to the configuration file using vim or your favorite editor:

"proxy": {
  "enabled": true,
  "http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
  "https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
  "no_proxy": "alm.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<named server address>,<metadata server address>,<k8s address range>,<private_subnet_ip>,<sql server host>,<Comma separated list of ips that should not go through proxy server>"
}"proxy": {
  "enabled": true,
  "http_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
  "https_proxy": "http://<PROXY-SERVER-IP>:<PROXY-PORT>",
  "no_proxy": "alm.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443,<named server address>,<metadata server address>,<k8s address range>,<private_subnet_ip>,<sql server host>,<Comma separated list of ips that should not go through proxy server>"
}

Mandatory parameters

Description

enabled

Use true or false to enable or disable proxy settings.

http_proxy

Used to route HTTP outbound requests from the cluster. This should be the proxy server FQDN and port.

https_proxy

Used to route HTTPS outbound requests from the cluster. This should be the proxy server FQDN and port.

no_proxy

Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format that you do not want to route via the proxy server. This should include the private subnet range, SQL Server host, named server address, and metadata server address: *.<fqdn>,<fixed_rke_address>:9345,<fixed_rke_address>:6443.
  • fqdn - the cluster FQDN defined in cluster_config.json
  • fixed_rke_address - the fixed_rke_address defined in cluster_config.json
  • named server address - the IP address from /etc/resolv.conf
  • private_subnet_ip - the cluster VNet
  • sql server host - sql server host
  • metadata server address - the IP address 169.254.169.254 used to fetch machine metadata by cloud services such as Azure and AWS
  • k8s address range - the IP address ranges used by the Kubernetes nodes, i.e. 10.0.0.0/8
Important:
If you use AI Center with an external Orchestrator, you must add the external Orchestrator domain to the no_proxy list.
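A sketch of a filled-in no_proxy value, assuming hypothetical addresses (10.0.0.5 as the fixed RKE address, sql.example.com as the SQL Server host, 8.8.8.8 as the named server address, and 10.0.0.0/8 as the Kubernetes address range):

"no_proxy": "alm.automationsuite.example.com,10.0.0.5:9345,10.0.0.5:6443,8.8.8.8,169.254.169.254,10.0.0.0/8,10.0.1.0/24,sql.example.com"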

Optional: Enabling resilience to zonal failures in a multi-node HA-ready production cluster

To enable resilience to zonal failures in a multi-node cluster, take the following steps:

  1. Make sure nodes are spread evenly across three availability zones. For a bare-metal server or VM provided by any vendor other than AWS, Azure, or GCP, zone metadata must be provided via the configuration file at /etc/default/k8s-node-labels on every machine, in the following format:
    NODE_REGION_LABEL=<REGION_NAME>
    NODE_ZONE_LABEL=<ZONE_NAME>
    cat > /etc/default/k8s-node-labels <<EOF
    EXTRA_K8S_NODE_LABELS="topology.kubernetes.io/region=$NODE_REGION_LABEL,topology.kubernetes.io/zone=${NODE_ZONE_LABEL}"
    EOF
  2. Update the cluster_config.json file during the advanced configuration step.
To update the cluster_config.json using the interactive installation wizard, exit at advanced configuration step and add the following to the configuration file using vim or your favorite editor:
"zone_resilience": true"zone_resilience": true

Mandatory parameters

Description

zone_resilience

Use true or false to enable or disable resilience to zonal failure.
Important:
If you enable resilience to zonal failure, passing the --zone and --region arguments is:
  • recommended if you provision your machines on AWS, Azure, or GCP, and metadata services are enabled as the installer populates the zone and region details.
  • mandatory if you provision your machines on AWS, Azure, or GCP, and metadata services are disabled, or if you opt for a different cloud provider.

Optional: Passing custom resolv.conf

The Kubernetes cluster that Automation Suite deploys uses the name servers configured in /etc/resolv.conf. Kubernetes does not work with local DNS resolvers (127.0.0.1 or 127.0.0.0/8). If you have such name servers configured in the /etc/resolv.conf file, you must pass a reference to a file with the correct name server entries, accessible from anywhere on the VM, in the .infra.custom_dns_resolver parameter in cluster_config.json.

For details on a known limitation, see Kubernetes documentation.

Optional Parameters

Description

.infra.custom_dns_resolver

Path to the file with correct name server entries that can be accessed from anywhere on the VM. These name server entries must not be from 127.0.0.0/8.
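For example, assuming a hypothetical file at /opt/UiPathAutomationSuite/resolv.conf with valid name server entries:

"infra": {
  "custom_dns_resolver": "/opt/UiPathAutomationSuite/resolv.conf"
}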

Optional: Increasing fault tolerance

By default, the multi-node HA-ready production profile allows for one server node going down without causing a deployment failure. You can increase the tolerance to server node failure using the fault_tolerance parameter in the cluster_config.json file. The parameter modifies the replication factor of in-cluster storage components such as Ceph and Longhorn.

Hardware requirements

To increase fault tolerance beyond 1, make sure your environment meets the following requirements:
  • Your cluster consists of a minimum of 2x+1 server nodes, where x is the required server node fault tolerance;
  • Each server node has a raw device configured.

How to increase fault tolerance

To increase fault tolerance beyond 1, take the following steps:
  1. Set fault_tolerance to the required value in the cluster_config.json file. If you set it before starting the installation or upgrade operation, you do not need to take any additional steps.
  2. Run the uipathctl.sh installer to modify the in-cluster Ceph objectstore replication factor. Wait until the operation completes successfully.
  3. Run the install-uipath.sh installer to modify the Longhorn volumes replication factor. Wait until the operation completes successfully.
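For example, to tolerate two server nodes going down, which requires at least five server nodes (2x+1 with x = 2), a minimal sketch of the cluster_config.json change is:

"fault_tolerance": 2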
