Enabling Redis High Availability Add-on for the Cluster
In a multi-node HA-ready production setup, High Availability (HA) is enabled by default. However, the Redis-based in-memory cache used by cluster services is running on a single node and represents a single point of failure. To mitigate the impact of a cache node failure or restart, you can purchase the High Availability Add-on (HAA), which enables redundant, multi-node HA-ready production deployment of the cache.
All installations include the HAA software with a single-node license. This license is free of charge; no purchase is required.
To enable HAA across multiple nodes, you must purchase an HAA license. This provides full high availability for the cluster in a multi-node HA-ready production setup.
HAA is based on Redis technology.
To enable HAA, take the following steps:
- Purchase an HAA license. Contact UiPath for details.
- Update the following fields in the cluster_config.json file (a scripted version of this edit is sketched after these steps):
  - fabric.redis.license - enter the HAA license converted to a single base64 string. In bash, you can do that using echo 'license_text_here' | base64 -w0.
  - fabric.redis.ha - use true to enable HAA, and make sure to also configure the fabric.redis.license parameter. This enables HAA database replication and increases the number of HAA pods to 3. By default, fabric.redis.ha is set to false.
  Note: If fabric.redis.ha is enabled, fabric.redis.license needs to be set to a license that supports more than two shards.
  "fabric": {
    "redis": {
      "ha": "true",
      "license": Base64String
    }
  }
- Rerun the fabric installer (a verification sketch follows these steps):
  - Online installation:
    ./install-uipath.sh -i cluster_config.json -f -o output.json --accept-license-agreement
  - Offline installation:
    ./install-uipath.sh -i cluster_config.json -f --install-type offline -o output.json --accept-license-agreement
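The cluster_config.json edit from the second step can also be scripted, which avoids copy-paste mistakes with the long base64 string. The following is a minimal sketch, assuming the HAA license text was saved to a file named haa.lic (a hypothetical path) and that jq is available on the machine where cluster_config.json resides:

#!/usr/bin/env bash
# Sketch: populate the fabric.redis fields in cluster_config.json.
# Assumes the HAA license text is in ./haa.lic and that jq is installed;
# adjust both to match your environment.
set -euo pipefail

LICENSE_B64=$(base64 -w0 < haa.lic)   # single-line base64, as required

jq --arg lic "$LICENSE_B64" \
   '.fabric.redis.ha = "true" | .fabric.redis.license = $lic' \
   cluster_config.json > cluster_config.json.tmp \
  && mv cluster_config.json.tmp cluster_config.json

Either way, review the resulting JSON before rerunning the installer.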
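After the installer completes, you can check that the cache is now redundant. This is a sketch only; the redis-system namespace is an assumption about where the HAA pods run, so adjust it if your cluster uses a different namespace:

# Sketch: confirm the HAA deployment scaled out after the rerun.
# The namespace (redis-system) is an assumption -- if the pods are not
# found there, locate them with: kubectl get ns | grep -i redis
kubectl -n redis-system get pods

# With fabric.redis.ha set to "true", you should see three HAA pods
# in Running state instead of one.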