High Availability Add-on configuration
Automation Suite supports High Availability Add-on (HAA) installed either in the same cluster or on external machines.
You must configure HAA to enable true high availability in a multi-node setup. To do that, either provide the HAA license to the installer or install HAA on external machines and supply the HAA configuration to the installer.
In a multi-node HA-ready production setup, High Availability (HA) is enabled by default. However, the Redis-based in-memory cache used by cluster services is running on a single node and represents a single point of failure. To mitigate the impact of a cache node failure or restart, you can purchase the High Availability Add-on (HAA), which enables redundant, multi-node HA-ready production deployment of the cache.
All installations include the HAA software with a single-node license. This license is included at no cost; no purchase is required.
To enable HAA across multiple nodes, you must purchase an HAA license. This provides full high availability for the cluster in a multi-node HA-ready production setup.
HAA is based on Redis technology.
To enable multi-node HAA, take the following steps:
- Purchase an HAA license. Contact UiPath® for details.
- Update the following fields in the cluster_config.json file (see the sketch after these steps for one way to script this):
  - fabric.redis.license - enter the HAA license converted to a single base64 string. In Bash, you can do that using echo 'license_text_here' | base64 -w0.
  - fabric.redis.ha - use true to enable HAA and make sure to also configure the fabric.redis.license parameter. This enables HAA database replication and increases the number of HAA pods to 3. By default, fabric.redis.ha is set to false.
  Note: If redis.ha is enabled, redis.license needs to be set to a license that supports more than two shards.
"fabric": {
  "redis": {
    "ha": "true",
    "license": "Base64String"
  }
}
- Rerun the fabric installer:
./install-uipath.sh -i cluster_config.json -f -o output.json --accept-license-agreement
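The following is a minimal sketch of scripting the two fields above, assuming jq is available on the server node; license_text_here and the file names are placeholders, not values from your environment.
# Minimal sketch, assuming jq is installed; license_text_here stands in for the actual HAA license.
LICENSE_B64=$(echo 'license_text_here' | base64 -w0)
# Set fabric.redis.ha and fabric.redis.license in cluster_config.json.
jq --arg lic "$LICENSE_B64" '.fabric.redis.ha = "true" | .fabric.redis.license = $lic' cluster_config.json > cluster_config.json.tmp && mv cluster_config.json.tmp cluster_config.json
# Sanity check: the stored value should decode back to the original license text.
jq -r '.fabric.redis.license' cluster_config.json | base64 -d
Editing the file with jq keeps the JSON valid and avoids manual quoting mistakes when pasting a long base64 string.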
When opting for an Active/Active configuration of Automation Suite, an externally hosted High Availability Add-on is mandatory. In all other scenarios, it is optional.
To configure the external High Availability Add-on, you must update the following parameters in the cluster_config.json file:
Parameter | Description
---|---
fabric.redis.hostname | Provide the FQDN of the High Availability Add-on (HAA) server.
fabric.redis.password | Provide the password to connect to the HAA server.
fabric.redis.port | Provide the port for the HAA server.
fabric.redis.tls | Enable the TLS protocol. By default, TLS is enabled. Note: If a certificate is required when TLS is enabled, make sure to provide it via the additional_ca_certs flag. For details, see Certificate configuration.
"fabric": {
"redis": {
"hostname": "redis_fqdn",
"password": "credential_to_connect_redis",
"port": 6380,
"tls": true,
}
}
"fabric": {
"redis": {
"hostname": "redis_fqdn",
"password": "credential_to_connect_redis",
"port": 6380,
"tls": true,
}
}
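Before rerunning the installer, you can verify that the external HAA server is reachable from a server node. The following is a minimal sketch using redis-cli, assuming it is installed; redis_fqdn, the port, the password, and the CA certificate path are placeholders that must match your cluster_config.json and additional_ca_certs values.
# Minimal connectivity check; all values below are placeholders.
redis-cli -h redis_fqdn -p 6380 --tls --cacert /path/to/ca.crt -a 'credential_to_connect_redis' ping
# A healthy HAA endpoint replies with PONG.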