
Automation Suite on Linux Installation Guide

Last updated Jan 8, 2025

Restoring the backup

Note:

After a cluster is restored, the snapshot backup is not enabled. To enable it after the restore, see Enabling the backup snapshot.

Restoring the cluster does not restore external data sources such as SQL Server, the objectstore, or the OCI-compliant registry. Make sure to restore these data sources to the relevant snapshot.

To restore the cluster, take the following steps:

  1. Install the cluster infrastructure on all the server nodes.
    Note:

    The hardware you provide for the restored cluster must be similar to the hardware of the backed-up cluster. For details, see Hardware and software requirements.

  2. Configure the snapshot on the restored cluster.
  3. Select the snapshot to restore.
  4. Restore the data and settings.

Step 1: Installing the cluster infrastructure

Preparation

  1. Download the restore installer. You can find it inside the as-installer.zip package. For download instructions, see Downloading the installation packages.
  2. In offline environments, you must provide an external OCI-compliant registry or a temporary registry. Note that the registry configuration must remain the same as that of the original cluster. To configure the registry, see External OCI-compliant registry configuration.
  3. Prepare the configuration file and make it available on all the cluster nodes. To prepare the configuration file, take one of the following steps:
    • Option A: Reuse the cluster_config.json file that you applied to the cluster before the disaster occurred;
    • Option B: Create a minimal cluster_config.json file with the required parameters, as shown in the following example:
      {
        "fixed_rke_address": "fqdn",
        "fqdn": "fqdn",
        "rke_token": "guid",
        "profile": "cluster_profile",
        "external_object_storage": {
          "enabled": false
        },
        "install_type": "offline or online",
        "snapshot": {
          "enabled": true,
          "nfs": {
            "server": "nfs_server_endpoint",
            "location": "nfs_server_mountpoint"
          }
        },
        "proxy": { "enabled": false }
      }
      
The following table describes all the parameters that you must include in the minimal cluster_config.json file. Make sure to provide the same parameter values as the ones used in the original cluster. You can change the parameter values post-restore.
Important:
In offline environments, aside from setting the cluster_config.json parameters listed in the following table, you must also provide the external OCI-compliant registry configuration. For details, see External OCI-compliant registry configuration.

  • fqdn – The FQDN of the Automation Suite cluster. The value must be the same as the old FQDN. Providing a different FQDN value may cause the restore to fail.
  • fixed_rke_address – The fixed address used to load balance node registration and Kube API requests. If the load balancer is configured as recommended, use the same value as for fqdn. Otherwise, use the fqdn value of the first server node. For more details, see Configuring the load balancer.
  • rke_token – Use a newly generated GUID here. This is a pre-shared, cluster-specific secret needed by all the nodes joining the cluster.
  • profile – Sets the profile of the installation. The available profiles are:
      • default – single-node evaluation profile
      • ha – multi-node HA-ready production profile
  • install_type – Indicates the type of installation you plan to perform. Your options are:
      • online – an installation with access to the Internet
      • offline – an installation with no access to the Internet
  • server – The FQDN or the IP address of the snapshot storage location (such as mynfs.mycompany.com or 192.23.222.81).
  • location – The path to the snapshot storage location.
  • infra.pod_log_path – The path to the custom directory used for pod logs. Required only if the cluster was configured with a custom pod log path.
  • proxy – Mandatory only if the proxy is enabled. For details, see Optional: Configuring the proxy server.
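To generate a fresh GUID for the rke_token value, you can use the standard Linux uuidgen utility, assuming it is installed on your node:

uuidgen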

For more details on how to configure cluster_config.json, see Manual: Advanced installation experience.
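
For illustration only, the following is a hypothetical filled-in minimal cluster_config.json for an offline, HA-ready cluster. The FQDN, GUID, and NFS values below are placeholders and must match the values used in the original cluster:

{
  "fixed_rke_address": "automationsuite.mycompany.com",
  "fqdn": "automationsuite.mycompany.com",
  "rke_token": "9a8b7c6d-5e4f-3a2b-1c0d-1a2b3c4d5e6f",
  "profile": "ha",
  "external_object_storage": { "enabled": false },
  "install_type": "offline",
  "snapshot": {
    "enabled": true,
    "nfs": {
      "server": "mynfs.mycompany.com",
      "location": "/asbackup"
    }
  },
  "proxy": { "enabled": false }
}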

Execution

Installing the cluster infrastructure on the primary server node

To install the infrastructure on the primary restore cluster node, run the following commands:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json --accept-license-agreement --restore
Important: Copy cluster_config.json from the primary server node to the remaining server and agent nodes. The infrastructure installation step on the primary server node adds extra values that the remaining nodes need.
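
As a hedged illustration, assuming SSH access between nodes and a hypothetical host name server2.mycompany.com, you could copy the file with scp:

scp /opt/UiPathAutomationSuite/cluster_config.json admin@server2.mycompany.com:/opt/UiPathAutomationSuite/

Repeat the copy for each remaining server and agent node.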

Installing the cluster infrastructure on secondary servers

To install the infrastructure on the secondary servers:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json -j server --accept-license-agreement --restore

Installing the infrastructure on all the agent machines

To install the infrastructure on the agent nodes:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json -j agent --accept-license-agreement --restore

Hydrating the in-cluster registry for offline installations

This step is required only if you use an in-cluster registry for offline installations. You must hydrate the registry before you trigger the restore, by using the following command:

./bin/uipathctl rke2 registry hydrate-registry /opt/UiPathAutomationSuite/cluster_config.json

Installing the cluster infrastructure on service nodes

Installing the cluster infrastructure on Task Mining nodes

To install the cluster infrastructure on Task Mining nodes:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json -j task-mining --accept-license-agreement --restore

Installing the cluster infrastructure on Automation Suite Robots nodes

To install the cluster infrastructure on Automation Suite Robots nodes:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json -j asrobots --accept-license-agreement --restore

Installing the cluster infrastructure on GPU nodes

To install the cluster infrastructure on GPU nodes:

cd <installer directory>
./bin/uipathctl rke2 install -i ../../cluster_config.json -o output.json -j gpu --accept-license-agreement --restore

Step 2: Preparing the cluster for restore

Once the infrastructure is installed, you need to prepare the cluster snapshot for restore. Depending on your scenario, run one of the following commands:

  • If you use an external objectstore:

    ./bin/uipathctl manifest apply /opt/UiPathAutomationSuite/cluster_config.json --only velero --versions versions/helm-charts.json
  • If you use an in-cluster ceph-objectstore:

    ./bin/uipathctl manifest apply /opt/UiPathAutomationSuite/cluster_config.json --only base,rook-ceph-operator,rook-ceph-object-store,velero --versions versions/helm-charts.json

To configure the backup of the restored cluster, follow the steps in the Configure the cluster snapshot section.

Step 3: Selecting the snapshot to restore

After configuring the snapshot, list the existing snapshots and decide on the one you want to use as a restore point.
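
For example, assuming the uipathctl snapshot list subcommand described in the backup documentation is available in your installer version:

./bin/uipathctl snapshot list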

Step 4: Restoring data and settings

To restore to a previous cluster state, provide the name of the snapshot you want to restore from using the --from-snapshot <snapshot-name> flag.
./bin/uipathctl snapshot restore create <restore_name> --from-snapshot <snapshot_name>
The command triggers the restore process in the background. To make the command wait until the restore process completes, pass the --wait option. To check the status of the restore process, run the following command:
./bin/uipathctl snapshot restore history

If you do not specify the snapshot name, the cluster restores the latest successful snapshot. See the snapshot list for available snapshots.
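
For example, to restore from a hypothetical snapshot named backup-20241107 and block until the process completes (both names below are placeholders):

./bin/uipathctl snapshot restore create restore-20241107 --from-snapshot backup-20241107 --wait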

Restoring cluster_config.json

After the Automation Suite cluster recovery, you may want to recover the cluster_config.json file for future use, such as adding new nodes to the cluster or upgrading.
To restore cluster_config.json, take the following steps:
  1. Find the last applied configuration by running the following command:

    ./bin/uipathctl manifest list-revisions

    The following is an example of the command output:

    VERSION  UPDATED                        STATUS
    1        2024-11-07 00:46:41 +0000 UTC  successful
    2        2024-11-07 01:14:20 +0000 UTC  successful
    3        2024-11-07 01:23:23 +0000 UTC  successful
  2. Select the VERSION number that was deployed before the backup was created, and run the following command to retrieve the cluster_config.json file:

    ./bin/uipathctl manifest get-revision --version <VERSION>

    For example, to save revision 1 to a file:

    ./bin/uipathctl manifest get-revision --version 1 > ./cluster_config.json

Adding CA certificates to the trust store

After restoring the cluster, make sure to add your CA certificates to the trust store of the restored VMs.
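
As a hedged sketch, on the RHEL-based machines that Automation Suite runs on, a common way to add a certificate to the OS trust store is the following (my-root-ca.pem is a placeholder for your certificate file):

sudo cp my-root-ca.pem /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Refer to the certificate documentation for the full, supported procedure.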

Retrieving new monitoring password

After restoring an Automation Suite cluster, you need to retrieve the new monitoring password. For this, follow the steps from Accessing the monitoring tools.
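
The documented procedure typically involves reading a Kubernetes secret. Purely as a hedged illustration, with the secret name and namespace left as placeholders rather than the actual values used by Automation Suite:

kubectl get secret <monitoring-secret> -n <monitoring-namespace> -o "jsonpath={.data.password}" | base64 -d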

Enabling AI Center on the Restored Cluster

After restoring an Automation Suite cluster with AI Center™ enabled, follow the steps from the Enabling AI Center on the Restored Cluster procedure.
