Automated, Offline: Restoring the Cluster
- Make sure you have followed the Prerequisites.
- Make sure the backup is disabled on the backup cluster. Inconsistencies will occur if a new backup is created while restoring the cluster. For details, see Automated: Disabling the backup on the cluster.
- Make sure the wget, unzip, and jq packages are available on all the restore nodes. You can verify this with the quick check shown after this list.
- All external data sources, such as the SQL Server, must be the same.
- You must restart the NFS server before restoring the cluster. To do that, run the following command on the NFS server node:

  ```
  systemctl restart nfs-server
  ```

- The restore cluster must have the same FQDN as the backup cluster.
- Make sure you have prepared your environment for using the `uipathctl.sh` command. For details, see Using uipathctl.sh.
- At the time of the cluster restore, you must provide the `inventory.ini` file containing the IP address of the Ansible host node and information on how to SSH to all the nodes. The `uipathctl.sh` script cannot generate this file automatically, as the cluster is not present at the time of the restore process.
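As a quick sanity check for the package prerequisite above, you can run a short loop on each restore node. This is a minimal sketch that only verifies the three required binaries are on the PATH:

```bash
# Minimal sketch: report any of the required tools missing on this node.
for cmd in wget unzip jq; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
```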
| Installation type | Requirements |
|---|---|
| Offline single-node evaluation mode | Download the following files:<br>`sf-installer.zip` – Mandatory. See sf-installer.zip for download instructions.<br>`sf-infra.tar.gz` – Mandatory. See sf-infra.tar.gz for download instructions. |
| Offline multi-node HA-ready production mode | Download the following files:<br>`sf-installer.zip` – Mandatory. See sf-installer.zip for download instructions.<br>`sf-infra.tar.gz` – Mandatory. See sf-infra.tar.gz for download instructions. |
Server 1, which is the Ansible host node, uses Ansible to orchestrate the restore of all the nodes in the cluster. You must provide the following files to the `uipathctl.sh` script:

- `inventory.ini` – This file contains information on all the nodes in the cluster and SSH details. The `uipathctl.sh` script does not generate the `inventory.ini` file automatically, as the cluster is not present at the time of the cluster restore. A minimal sketch of this file follows this list.
- `restore.json` – This file contains the restore configuration. For details, see the Preparing the restore configuration section.
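Because the exact layout of `inventory.ini` is described in Generating the Ansible inventory.ini file, the following is only a minimal sketch: the group name, IP addresses, and variable values are placeholder assumptions, not the required schema.

```ini
# Placeholder sketch of an Ansible inventory; adapt it to the documented format.
# The first host is the Ansible host node (Server 1).
[sfcluster]
10.0.1.10
10.0.1.11
10.0.1.12

[sfcluster:vars]
ansible_connection=ssh
ansible_user=admin
ansible_ssh_private_key_file=/home/admin/.ssh/id_rsa
```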
To restore the cluster, run the following command with the basic parameters. Make sure to use the parameter values applicable to you.

```
./uipathctl.sh restore --install-type offline --restore-config ./restore.json --inventory ./inventory.ini --offline-infra-bundle ../sf-infra.tar.gz
```
| Parameter | Description |
|---|---|
| `--install-type` | Possible values: `online` and `offline`. Since this page provides restore instructions for the offline scenario, choose the `offline` value. |
| `--restore-config` | The file contains the restore configuration. For details, see Preparing the restore configuration. |
| `--inventory` | The file contains information on all the nodes in the cluster and SSH details. For details, see Generating the Ansible inventory.ini file. |
| Parameter | Description |
|---|---|
| | Specify the username that you will use for the SSH connections to all the nodes. Defaults to the current user. If you use different usernames for different nodes, set the username for each node in `inventory.ini` and pass it to the script instead of using this parameter.<br>Example: While running the script, you are logged in as the `myadminuser` username. However, if you want to use the `testadmin` username to connect via SSH, you must provide `testadmin` as the value for this parameter. |
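As an alternative to the parameter above, the username can live in the inventory itself. This is a hedged sketch using Ansible's standard `[all:vars]` group; the username is illustrative:

```ini
# Illustrative only: set the SSH username once for every node in inventory.ini
# instead of passing it on the uipathctl.sh command line.
[all:vars]
ansible_user=testadmin
```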
After the restore, make sure to save the `cluster_config.json` file for future reference. You may need the file when adding new nodes to the cluster, when upgrading, and so on.

To retrieve `cluster_config.json`, run the following command from any of the server nodes:

```
./configureUiPathAS.sh config get -o ./cluster_config.json
```
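If you want a quick look at the retrieved configuration, jq (already required on the restore nodes) can pull individual values. The `fqdn` key below is an assumption about the file's layout, used here because the restore cluster must keep the backup cluster's FQDN:

```bash
# Assumes cluster_config.json exposes the cluster FQDN under a top-level
# "fqdn" key; prints it so you can compare it with the backup cluster's FQDN.
jq -r '.fqdn' ./cluster_config.json
```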
The backup is not enabled on the restored cluster. If you wish to enable it, refer to Enabling the backup on the cluster.
Restoring the cluster is not an idempotent operation. In case of a failed restore, take one of the following steps before retrying the operation:
- Uninstall Automation Suite on all the nodes. For instructions, see Troubleshooting.
- Follow the manual restore steps. For instructions, see Manual, offline: Restoring the cluster.
After restoring the cluster, make sure to add your CA certificates to the trust store of the restored VMs. For details, see: