- Overview
- Requirements
- Recommended: Deployment templates
- Manual: Preparing the installation
- Step 1: Configuring the OCI-compliant registry for offline installations
- Step 2: Configuring the external objectstore
- Step 3: Configuring High Availability Add-on
- Step 4: Configuring Microsoft SQL Server
- Step 5: Configuring the load balancer
- Step 6: Configuring the DNS
- Step 7: Configuring kernel and OS level settings
- Step 8: Configuring the disks
- Step 9: Configuring the node ports
- Step 10: Applying miscellaneous settings
- Step 12: Validating and installing the required RPM packages
- Step 13: Generating cluster_config.json
- Certificate configuration
- Database configuration
- External Objectstore configuration
- Pre-signed URL configuration
- External OCI-compliant registry configuration
- Disaster recovery: Active/Passive and Active/Active configurations
- High Availability Add-on configuration
- Orchestrator-specific configuration
- Insights-specific configuration
- Process Mining-specific configuration
- Document Understanding-specific configuration
- Automation Suite Robots-specific configuration
- Monitoring configuration
- Optional: Configuring the proxy server
- Optional: Enabling resilience to zonal failures in a multi-node HA-ready production cluster
- Optional: Passing custom resolv.conf
- Optional: Increasing fault tolerance
- install-uipath.sh parameters
- Adding a dedicated agent node with GPU support
- Adding a dedicated agent node for Task Mining
- Connecting the Task Mining application
- Adding a dedicated agent node for Automation Suite Robots
- Step 15: Configuring the temporary Docker registry for offline installations
- Step 16: Validating the prerequisites for the installation
- Manual: Performing the installation
- Post-installation
- Cluster administration
- Managing products
- Getting Started with the Cluster Administration portal
- Migrating objectstore from persistent volume to raw disks
- Migrating from in-cluster to external High Availability Add-on
- Migrating data between objectstores
- Migrating in-cluster objectstore to external objectstore
- Switching to the secondary cluster manually in an Active/Passive setup
- Disaster Recovery: Performing post-installation operations
- Converting an existing installation to multi-site setup
- Guidelines on upgrading an Active/Passive or Active/Active deployment
- Guidelines on backing up and restoring an Active/Passive or Active/Active deployment
- Redirecting traffic for the unsupported services to the primary cluster
- Monitoring and alerting
- Migration and upgrade
- Step 1: Moving the Identity organization data from standalone to Automation Suite
- Step 2: Restoring the standalone product database
- Step 3: Backing up the platform database in Automation Suite
- Step 4: Merging organizations in Automation Suite
- Step 5: Updating the migrated product connection strings
- Step 6: Migrating standalone Orchestrator
- Step 7: Migrating standalone Insights
- Step 8: Deleting the default tenant
- B) Single tenant migration
- Migrating from Automation Suite on Linux to Automation Suite on EKS/AKS
- Upgrading Automation Suite
- Downloading the installation packages and getting all the files on the first server node
- Retrieving the latest applied configuration from the cluster
- Updating the cluster configuration
- Configuring the OCI-compliant registry for offline installations
- Migrating to an external OCI-compliant registry
- Executing the upgrade
- Performing post-upgrade operations
- Product-specific configuration
- Best practices and maintenance
- Troubleshooting
- How to troubleshoot services during installation
- How to uninstall the cluster
- How to clean up offline artifacts to improve disk space
- How to clear Redis data
- How to enable Istio logging
- How to manually clean up logs
- How to clean up old logs stored in the sf-logs bundle
- How to disable streaming logs for AI Center
- How to debug failed Automation Suite installations
- How to delete images from the old installer after upgrade
- How to disable NIC checksum offloading
- How to upgrade from Automation Suite 2022.10.10 and 2022.4.11 to 2023.10.2
- How to manually set the ArgoCD log level to Info
- Unable to run an offline installation on RHEL 8.4 OS
- Error in downloading the bundle
- Offline installation fails because of missing binary
- Certificate issue in offline installation
- First installation fails during Longhorn setup
- SQL connection string validation error
- Prerequisite check for selinux iscsid module fails
- Azure disk not marked as SSD
- Failure after certificate update
- Antivirus causes installation issues
- Automation Suite not working after OS upgrade
- Automation Suite requires backlog_wait_time to be set to 0
- Volume unable to mount due to not being ready for workloads
- Cluster unhealthy after automated upgrade from 2021.10
- Upgrade fails due to unhealthy Ceph
- RKE2 not getting started due to space issue
- Volume unable to mount and remains in attach/detach loop state
- Upgrade fails due to classic objects in the Orchestrator database
- Ceph cluster found in a degraded state after side-by-side upgrade
- Unhealthy Insights component causes the migration to fail
- Service upgrade fails for Apps
- In-place upgrade timeouts
- Docker registry migration stuck in PVC deletion stage
- AI Center provisioning failure after upgrading to 2023.10
- Upgrade fails in offline environments
- Setting a timeout interval for the management portals
- Authentication not working after migration
- Kinit: Cannot find KDC for realm <AD Domain> while getting initial credentials
- Kinit: Keytab contains no suitable keys for *** while getting initial credentials
- GSSAPI operation failed due to invalid status code
- Alarm received for failed Kerberos-tgt-update job
- SSPI provider: Server not found in Kerberos database
- Login failed for AD user due to disabled account
- ArgoCD login failed
- Update the underlying directory connections
- Failure to get the sandbox image
- Pods not showing in ArgoCD UI
- Redis probe failure
- RKE2 server fails to start
- Secret not found in UiPath namespace
- ArgoCD goes into progressing state after first installation
- MongoDB pods in CrashLoopBackOff or pending PVC provisioning after deletion
- Unhealthy services after cluster restore or rollback
- Pods stuck in Init:0/X
- Missing Ceph-rook metrics from monitoring dashboards
- Running the diagnostics tool
- Using the Automation Suite Support Bundle Tool
- Exploring Logs
Restoring the backup
Once a cluster is restored, the snapshot backup is not enabled. To enable it after restore, see Enabling the backup snapshot.
Restoring the cluster does not restore external data sources such as SQL Server, the objectstore, or the OCI-compliant registry. Make sure to restore these data sources to the relevant snapshot.
To restore the cluster, take the following steps:
- Install the cluster infrastructure on all the server nodes. The hardware you provide for the restore cluster must be similar to the backup cluster hardware. For details, see Hardware and software requirements.
- Configure the snapshot on the restored cluster.
- Select the snapshot to restore.
- Restore the data and settings.
- Download the restore installer. You can find it inside the as-installer.zip package. For download instructions, see Downloading the installation packages.
- In offline environments, you must provide an external OCI-compliant registry or a temporary registry. The registry configuration must remain the same as that of the original cluster. To configure the registry, see one of the following instructions:
  - Configuring the external OCI-compliant registry
  - Configuring the temporary Docker registry. Choose this option only if you did not use an external OCI-compliant registry before the disaster occurred.
- Prepare the configuration file and make it available on all the cluster nodes. To prepare the configuration file, take one of the following steps:
  - Option A: Reuse the cluster_config.json file that you applied to the cluster before the disaster occurred. Make sure to provide the same parameter values as the ones used in the original cluster. You can change the parameter values post-restore.
  - Option B: Create a minimal cluster_config.json file with the required parameters, as shown in the following example:
{
  "fixed_rke_address": "fqdn",
  "fqdn": "fqdn",
  "rke_token": "guid",
  "profile": "cluster_profile",
  "external_object_storage": {
    "enabled": false
  },
  "install_type": "offline or online"
}
Note: If you use an external OCI-compliant registry, then in addition to the cluster_config.json parameters listed in the following table, you must also provide the external OCI-compliant registry configuration. For details, see External OCI-compliant registry configuration.
Parameter | Value
---|---
fqdn | FQDN of the Automation Suite cluster. The value must be the same as the old FQDN. Providing a different FQDN value may cause the restoration to fail.
fixed_rke_address | The fixed address used to load balance node registration and Kube API requests. If the load balancer is configured as recommended, the value should be the same as the one for fqdn. Otherwise, use the fqdn value of the first server node. Refer to Configuring the load balancer for more details.
rke_token | Use a newly generated GUID here. This is a pre-shared, cluster-specific secret. It is needed for all the nodes joining the cluster.
profile | Sets the profile of the installation. The available profiles are default (single-node evaluation) and ha (multi-node HA-ready production).
install_type | Indicates the type of installation you plan to perform. Your options are online and offline.
For more details about the cluster_config.json parameters, see Manual: Advanced installation experience.
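If you choose Option B, a small shell sketch such as the following can generate the minimal file. The FQDN, profile, and install type values below are placeholders you must replace, and uuidgen and jq are assumed to be available:
#!/bin/bash
# Minimal sketch for generating cluster_config.json (Option B).
# automationsuite.example.com, the "ha" profile, and "offline" are
# illustrative assumptions; replace them with your own values.
set -euo pipefail
FQDN="automationsuite.example.com"   # must match the old cluster FQDN
RKE_TOKEN="$(uuidgen)"               # newly generated, cluster-specific GUID
cat > cluster_config.json <<EOF
{
  "fixed_rke_address": "${FQDN}",
  "fqdn": "${FQDN}",
  "rke_token": "${RKE_TOKEN}",
  "profile": "ha",
  "external_object_storage": {
    "enabled": false
  },
  "install_type": "offline"
}
EOF
# Catch JSON syntax errors (for example, a missing comma) before the installer does.
jq empty cluster_config.json && echo "cluster_config.json is valid JSON"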
Installing the cluster infrastructure on the primary restore cluster node
To install the infrastructure on the primary restore cluster node, run the following commands:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore --accept-license-agreement
Note: Make sure to copy the updated cluster_config.json file from the primary server node to the remaining server and agent nodes. The infrastructure installation step on the primary server node adds extra values that the remaining nodes need.
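For example, assuming passwordless SSH access, the copy step could look like the following sketch; the host names node2 and node3 and the target path are illustrative assumptions:
# Hypothetical host names and target path; adjust to your environment.
for node in node2 node3; do
  scp ./cluster_config.json "${node}:/opt/UiPathAutomationSuite/cluster_config.json"
done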
Installing the cluster infrastructure on secondary servers
To install the infrastructure on the secondary servers:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore -j server --accept-license-agreement
Installing the cluster infrastructure on agent nodes
To install the infrastructure on the agent nodes:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore -j agent --accept-license-agreement
Installing the cluster infrastructure on Task Mining nodes
To install the cluster infrastructure on Task Mining nodes:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore -j task-mining --accept-license-agreement
Installing the cluster infrastructure on Automation Suite Robots nodes
To install the cluster infrastructure on Automation Suite Robots nodes:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore -j asrobots --accept-license-agreement
Installing the cluster infrastructure on GPU nodes
To install the cluster infrastructure on GPU nodes:
cd <installer directory>
./install-uipath.sh -i ../../cluster_config.json -o output.json --restore -j gpu --accept-license-agreement
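The only difference across these commands is the -j flag, which sets the node role. If you drive the restore from a machine with SSH access to all nodes, a small wrapper such as the following sketch can apply the right role per node; the host names, installer directory, and cluster_config.json location are assumptions for illustration:
# Sketch: run the infrastructure restore with the correct -j role per node.
# Host names, paths, and the role mapping below are illustrative assumptions.
declare -A roles=(
  [server2]=server
  [agent1]=agent
  [taskmining1]=task-mining
  [asrobots1]=asrobots
  [gpu1]=gpu
)
for host in "${!roles[@]}"; do
  ssh "${host}" "cd /opt/UiPathAutomationSuite/installer && \
    ./install-uipath.sh -i ../cluster_config.json -o output.json \
    --restore -j ${roles[$host]} --accept-license-agreement"
done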
Once the infrastructure is installed, configure the snapshot while providing the minimum data, such as target, endpoint, and location. These values are used at the time of restoration.
To configure the backup of the restored cluster, follow the steps in the Configure the cluster snapshot section.
After configuring the snapshot, list the existing snapshots and decide on the one you want to use as a restore point.
To restore the cluster from a specific snapshot, run the following command with the --from-snapshot <snapshot-name> flag:
./configureUiPathAS.sh snapshot restore create --from-snapshot <snapshot name>
If you do not specify the snapshot name, the cluster restores the latest successful snapshot. See the snapshot list for available snapshots.
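For clarity, the two invocations behave as follows; the snapshot name is a placeholder:
# Restore from a specific snapshot taken before the disaster:
./configureUiPathAS.sh snapshot restore create --from-snapshot <snapshot-name>
# Restore the latest successful snapshot by omitting the flag:
./configureUiPathAS.sh snapshot restore create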
We recommend restoring the cluster_config.json file for future use, such as adding new nodes to the cluster, upgrading, and so on. To restore cluster_config.json, run the following command:
uipathctl manifest get-revision >> ./cluster_config.json
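Note that >> appends to the file, so run the command against a fresh file. A minimal sketch, assuming jq is available, that does this and then sanity-checks a few key fields:
# Start from a fresh file so the retrieved configuration is the only content.
rm -f ./cluster_config.json
uipathctl manifest get-revision >> ./cluster_config.json
# Hypothetical sanity check: confirm key fields survived the round trip.
jq -r '.fqdn, .fixed_rke_address, .install_type' ./cluster_config.json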
After restoring the cluster, make sure to add your CA certificates to the trust store of the restored VMs. For details, see Adding CA certificates to the trust store.
After restoring an Automation Suite cluster, you need to retrieve the new monitoring password. To do so, follow the steps in Accessing the monitoring tools.
After restoring an Automation Suite cluster with AI Center™ enabled, follow the steps in the Enabling AI Center on the Restored Cluster procedure.
- Step 1: Installing the cluster infrastructure
- Preparation
- Execution
- Step 2: Configuring the snapshot on the restored cluster
- Step 3: Selecting the snapshot to restore
- Step 4: Restoring data and settings
- Restoring cluster_config.json
- Adding CA certificates to the trust store
- Retrieving new monitoring password
- Enabling AI Center on the Restored Cluster