Warning!
Upgrading Automation Suite enables maintenance mode on the cluster, which causes downtime for the entire upgrade duration.
If your Automation Suite cluster was ever on 21.10.3 or 21.10.4, you must take additional steps to downgrade the Ceph objectstore version before upgrading to a newer Automation Suite version. For instructions, see Downgrading Ceph from 16.2.6 to 15.2.9.
Known issue
If Automation Suite is configured with AD integration, the Kerberos configuration in the ArgoCD settings is lost when upgrading to either 21.10.1 or 21.10.2, and the values must be set again after the upgrade. Make sure to save the values before starting the upgrade, or follow the Kerberos setup instructions again afterward.
Deployment mode | Upgrade instructions |
---|---|
Online single-node evaluation | Online single-node evaluation mode: Preparation, Execution, Rollback on error |
Offline single-node evaluation | Offline single-node evaluation mode: Preparation, Execution, Rollback on error |
Online multi-node HA-ready production | Online multi-node HA-ready production mode: Preparation, Execution, Rollback on error |
Offline multi-node HA-ready production | Offline multi-node HA-ready production mode: Preparation, Execution, Rollback on error |
All | (Optional) Migrating a Longhorn physical disk to LVM. This step is optional but highly recommended when upgrading Automation Suite. |
Important!
For instructions on how to download the installation packages required during upgrade, see Downloading installation packages.
Online single-node evaluation mode
Preparation
- Make sure that there is enough disk space on the node. For more details, see Hardware requirements.
- Download and unzip the new installer (installer.zip) on the server.
Detailed instructions:
- Connect to the machine using SSH.
If you set a password, the command is as follows:
ssh <user>@<dns_of_vm>
If you used an SSH key, the command is as follows:
ssh -i <path/to/Key.pem> <user>@<dns_of_vm>
- Become root:
sudo su -
- Move to the home directory:
cd ~
- Download the installation package. Make sure to keep the single quotes (') around the download URL.
wget 'https://download.uipath.com/automation-suite/installer.zip' -O installer.zip
- Create an installation folder and unzip the installation package:
mkdir -p /opt/UiPathAutomationSuite/<installer-folder>
unzip ./installer.zip -d /opt/UiPathAutomationSuite/<installer-folder>
- Give proper permissions to the folder by running the following command:
sudo chmod 755 -R /opt/UiPathAutomationSuite/<installer-folder>
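To confirm that the package unpacked correctly before continuing, you can list the main installer scripts; the path below assumes the installer folder created in the previous steps:
ls -l /opt/UiPathAutomationSuite/<installer-folder>/install-uipath.sh /opt/UiPathAutomationSuite/<installer-folder>/configureUiPathAS.sh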
- Make the original cluster_config.json file available on the server.
- Generate the new cluster_config.json file as follows:
  - If you have the old cluster_config.json file, use the following command to generate the configuration file from the cluster:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -i /path/to/old/cluster_config.json -o /path/to/store/generated/cluster_config.json
  - If you do not have the old cluster_config.json file, run the following command:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -o /path/to/store/generated/cluster_config.json
  For details on how to configure the cluster_config.json parameters, see Advanced installation experience.
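Before moving on to the execution steps, you can optionally confirm that the generated file is syntactically valid JSON. This is only a convenience check and assumes the jq utility is available on the node:
jq empty /path/to/store/generated/cluster_config.json && echo "cluster_config.json is valid JSON"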
Execution
Maintenance and backup
- Make sure you enabled the backup on the cluster. For details, see Backing up and restoring the cluster.
- Connect to the server node via SSH.
- Verify that all desired volumes have backups in the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh verify-volumes-backup
Note:
The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again.
- To verify if Automation Suite is healthy, run:
kubectl get applications -n argocd
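The command above lists the ArgoCD applications; all of them should report a healthy, synced state. As an optional shortcut, the following command prints only the name, health, and sync status of each application (it assumes the standard ArgoCD Application status fields and is a convenience, not part of the official procedure):
kubectl get applications -n argocd -o custom-columns=NAME:.metadata.name,HEALTH:.status.health.status,SYNC:.status.sync.status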
- Put the cluster in maintenance mode as follows:
  a. Execute the following command:
  /path/to/new-installer/configureUiPathAS.sh enable-maintenance-mode
  b. Verify that the cluster is in maintenance mode by running the following command:
  /path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Make an SQL database backup.
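How you take the SQL backup depends on where the databases are hosted. If sqlcmd can reach the SQL Server instance, a backup can look like the sketch below; the server name, credentials, database name, and backup path are placeholders you must replace with your own values, and the backup must be repeated for every database used by Automation Suite:
sqlcmd -S <sql-server-host> -U <user> -P <password> -Q "BACKUP DATABASE [<AutomationSuiteDatabase>] TO DISK = N'<backup-folder>/<AutomationSuiteDatabase>.bak' WITH INIT"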
Upgrade infrastructure and services on servers
- Connect to the server via SSH.
- Become root by running sudo su -.
- Upgrade the infrastructure and services by running the following command:
/path/to/new-installer/install-uipath.sh --upgrade -k -f -s -i /path/to/cluster_config.json --accept-license-agreement -o /path/to/output.json
Note:
This command disables the maintenance mode that you enabled before the upgrade because all services are required to be up during the upgrade. The command also creates a backup of the cluster state and pauses all other scheduled backups.
- After the successful upgrade and verification, resume the backup scheduling on the node by running the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Rollback on error
Preparation
- Create a separate folder to store older bundles, and perform the following operations inside that folder.
- Download and unzip the installer of the older version (installer.zip) on the node.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Create a restore.json file and copy it to all the nodes. For details, see Backing up and restoring the cluster.
- Verify that the etcd backup data is present on the primary server at the following location: /mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots, where:
  - etcdBackupPath - the same as the one specified in backup.json when enabling the backup;
  - node-name - the hostname of the primary server VM.
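A quick way to confirm that the snapshots are in place is to list the directory, replacing the placeholders as described above; the listing should contain at least one recent snapshot file:
ls -lh /mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots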
Cluster cleanup
- Copy and run the dedicated script to uninstall everything from that node. Do this for all the nodes. For details, see Troubleshooting.
- Restore all UiPath databases to the backup created before the upgrade.
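One way to restore the databases is with sqlcmd, as in the hedged example below; the server name, credentials, database name, and backup file path are placeholders for your own values, and every Automation Suite database must be restored the same way:
sqlcmd -S <sql-server-host> -U <user> -P <password> -Q "RESTORE DATABASE [<AutomationSuiteDatabase>] FROM DISK = N'<backup-folder>/<AutomationSuiteDatabase>.bak' WITH REPLACE"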
Restore infra on server nodes
- Connect to the server via SSH.
- Restore infra by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r --accept-license-agreement --install-type online
Restore volumes data
- Connect to the server via SSH.
- Go to the new installer folder.
Note:
The previous infra restore commands were executed using the old installer, and the following commands are executed using the new installer bundle.
- Disable the maintenance mode on the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh disable-maintenance-mode
- Verify that maintenance mode is disabled by running the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Copy the restore.json file that was used in the infra restore stage to the new installer bundle folder.
- Restore volumes by running the following command from the newer installer bundle:
/path/to/new-installer/install-uipath.sh -i /path/to/new-installer/restore.json -o /path/to/new-installer/output.json -r --volume-restore --accept-license-agreement --install-type online
- Once the restore is completed, verify that everything is restored and working properly.
- During the upgrade, scheduled backups were disabled on the primary node. To enable them again, run the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Offline single-node evaluation mode
Preparation
- Make sure that there is enough disk space on the node.
- Download the full offline bundle (sf.tar.gz) on the chosen server.
- Download and unzip the new installer (installer.zip) on the primary server.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Make the original cluster_config.json file available on the server.
- Generate the new cluster_config.json file as follows:
  - If you have the old cluster_config.json file, use the following command to generate the configuration file from the cluster:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -i /path/to/old/cluster_config.json -o /path/to/store/generated/cluster_config.json
  - If you do not have the old configuration file, run the following command:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -o /path/to/store/generated/cluster_config.json
  See Advanced installation experience to fill in the remaining parameters.
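Because the offline bundle is large, a transfer error can corrupt it. Before starting the upgrade, you can optionally verify that the archive is readable; the check below only lists the archive contents and assumes the bundle was downloaded to the path shown:
tar -tzf /path/to/sf.tar.gz > /dev/null && echo "sf.tar.gz is readable"
If a checksum was published for the bundle, comparing it with sha256sum /path/to/sf.tar.gz is a stronger check.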
Execution
Maintenance and backup
- Make sure you have enabled the backup on the cluster. For details, see Backing up and restoring the cluster.
- Connect to the server node via SSH.
- Verify that all volumes have backups in the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh verify-volumes-backup
Note:
The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again.
- To verify if Automation Suite is healthy, run:
kubectl get applications -n argocd
- Put the cluster in maintenance mode as follows:
  a. Execute the following command:
  /path/to/new-installer/configureUiPathAS.sh enable-maintenance-mode
  b. Verify that the cluster is in maintenance mode by running the following command:
  /path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Create the SQL database backup.
Upgrade infrastructure and services on servers
- Connect to the server via SSH.
- Become root by running sudo su -.
- Run the following commands in the order listed. The first command upgrades the infrastructure; the second upgrades the remaining fabric and services:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle "/path/to/sf.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
/path/to/new-installer/install-uipath.sh --upgrade -f -s -i /path/to/cluster_config.json --offline-bundle "/path/to/sf.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Note:
The upgrade disables the maintenance mode that you enabled earlier because all services are required to be up during the upgrade. It also creates a backup of the cluster state and pauses all other scheduled backups.
- After the successful upgrade and verification, resume the backup scheduling on the node by running the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Rollback on error
Preparation
- Create a separate folder to store old bundles, and perform the following operations inside that folder.
- Download the infra-only offline bundle (sf-infra-bundle.tar.gz) corresponding to the old version on all the nodes.
- Download and unzip the installer of the old version (installer.zip) on all the nodes.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Create a restore.json file and copy it to the node. For details, see Backing up and restoring the cluster.
- Verify that the etcd backup data is present on the primary server at the following location: /mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots, where:
  - etcdBackupPath - the same as the one specified in backup.json when enabling the backup;
  - node-name - the hostname of the primary server VM.
Cluster cleanup
- Copy and run the dedicated script to uninstall everything from that node. For details, see Troubleshooting.
- Restore all UiPath databases to the older backup created before the upgrade.
Restore infra on server nodes
- Connect to the server node.
- Restore infra by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r --offline-bundle "/path/to/older-version/sf-infra-bundle.tar.gz" --offline-tmp-folder /uipath --install-offline-prereqs --accept-license-agreement --install-type offline
Restore volumes data
- Connect to the server via SSH.
- Go to the new installer folder.
Note:
The previous infra restore commands were executed using the old installer, and the following commands are executed using the new installer bundle.
- Disable the maintenance mode on the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh disable-maintenance-mode
- Verify that maintenance mode is disabled by executing the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Copy the restore.json file that was used in the infra restore stage to the new installer bundle folder.
- Restore volumes from the new installer bundle by executing the following command:
/path/to/new-installer/install-uipath.sh -i /path/to/new-installer/restore.json -o /path/to/new-installer/output.json -r --volume-restore --accept-license-agreement --install-type offline
- Once the restore is complete, verify that everything is restored and working properly.
- During the upgrade, scheduled backups were disabled on this node. To enable them again, run the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Online multi-node HA-ready production mode
Preparation
- Identify any server (not agent) that meets the disk requirements for an online installation. It is referred to as the primary server throughout this document.
If you are using a self-signed certificate, run the following commands:
### Replace /path/to/cert with the location where you want to store the certificates.
sudo ./configureUiPathAS.sh tls-cert get --outpath /path/to/cert
### Copy the ca.crt file generated at the above location to the trust store location
sudo cp --remove-destination /path/to/cert/ca.crt /etc/pki/ca-trust/source/anchors/
### Update the trust store
sudo update-ca-trust
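Before adding the certificate to the trust store, you can optionally inspect the retrieved CA certificate to confirm it is the one you expect; the command below only prints the subject and expiry date and assumes openssl is installed:
openssl x509 -in /path/to/cert/ca.crt -noout -subject -enddate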
- Download and unzip the new installer (installer.zip) on all the nodes.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Make the original cluster_config.json file available on the primary server.
- Generate the new cluster_config.json file as follows:
  - If you have the old cluster_config.json file, use the following command to generate the configuration file from the cluster:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -i /path/to/old/cluster_config.json -o /path/to/store/generated/cluster_config.json
  - If you do not have the old configuration file, run the following command:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -o /path/to/store/generated/cluster_config.json
  See Advanced installation experience to fill in the remaining parameters.
- Copy this cluster_config.json file to the installer folder on all nodes.
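One way to distribute the file is scp from the primary server; the example below is only a sketch, and the user, node address, and installer folder are placeholders for your environment. Repeat the copy for every server and agent node:
scp /path/to/store/generated/cluster_config.json <user>@<node-dns-or-ip>:/opt/UiPathAutomationSuite/<installer-folder>/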
Execution
Maintenance and backup
- Make sure you have enabled the backup on the cluster. For details, see Enabling the cluster backup.
- Connect to one of the server nodes via SSH.
- Verify that all desired volumes have backups in the cluster by running the following command:
  /path/to/new-installer/configureUiPathAS.sh verify-volumes-backup
Note:
The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again.
- To verify if Automation Suite is healthy, run:
kubectl get applications -n argocd
- Put the cluster in maintenance mode as follows:
  a. Execute the following command:
  /path/to/new-installer/configureUiPathAS.sh enable-maintenance-mode
  b. Verify that the cluster is in maintenance mode by running the following command:
  /path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Create the SQL database backup.
Upgrade infrastructure on servers
- Connect to each server via SSH.
- Become root by running sudo su -.
- Execute the following command on all servers:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --accept-license-agreement -o /path/to/output.json
Note:
This command also creates a backup of the cluster state and pauses all other scheduled backups.
Upgrade infrastructure on agents
- Connect to each agent node via SSH.
- Become root by running sudo su -.
- Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --accept-license-agreement -o /path/to/output.json
Execute the rest of the upgrade on the primary server
- Connect to the primary server via SSH.
- Become root by running sudo su -.
- Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -f -s -i /path/to/cluster_config.json --accept-license-agreement -o /path/to/output.json
Note:
This command disables the maintenance mode that you enabled before the upgrade because all services are required to be up during the upgrade.
- After the successful upgrade and verification, resume the backup scheduling on the node by running the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Rollback on error
Preparation
- Create a separate folder to store the old bundles, and perform the following operations inside that folder.
- Download and unzip the older version of the installer (installer.zip) on all the nodes.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Create the restore.json file and copy it to all the nodes. For details, see Backing up and restoring the cluster.
- Verify that the etcd backup data is present on the primary node at the following location: /mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots, where:
  - etcdBackupPath - the same as the one specified in backup.json when enabling the backup;
  - node-name - the hostname of the primary server VM.
Cluster cleanup
- Copy and run the dedicated script to uninstall everything from that node. Do this for all the nodes. For details, see Troubleshooting.
- Restore all UiPath databases to the older backup that was created before the upgrade.
Restore infra on server nodes
- Connect to the primary server. This should be the same server node you selected during the upgrade.
- Restore infra by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r --accept-license-agreement --install-type online
- Connect to the rest of the server nodes one by one via SSH.
- Restore infra on these nodes by running the following command. Run it on the server nodes one by one; executing it on several nodes in parallel is not supported.
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j server --accept-license-agreement --install-type online
Restore infra on agent nodes
- Connect to each agent VM via SSH.
- Restore infra on these nodes by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j agent --accept-license-agreement --install-type online
Restore volumes data
- Connect to the primary server via SSH.
- Go to the newer installer folder.
Note:
The previous infra restore commands were executed using the old installer, and the following commands are executed using the new installer bundle.
- Disable the maintenance mode on the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh disable-maintenance-mode
- Verify that maintenance mode is disabled by running the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Copy the restore.json file that was used in the infra restore stage to the new installer bundle folder.
- Run the volume restore from the new installer bundle by executing the following command:
/path/to/new-installer/install-uipath.sh -i /path/to/new-installer/restore.json -o /path/to/new-installer/output.json -r --volume-restore --accept-license-agreement --install-type online
- Once the restore is completed, verify that everything is restored and working properly.
- During the upgrade, scheduled backups were disabled on the primary node. To enable them again, run the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Offline multi-node HA-ready production mode
Preparation
- Identify any server (not agent) that meets the disk requirements for an offline installation. It is referred to as the primary server throughout this document.
If you are using a self-signed certificate, run the following commands:
### Replace /path/to/cert with the location where you want to store the certificates.
sudo ./configureUiPathAS.sh tls-cert get --outpath /path/to/cert
### Copy the ca.crt file generated at the above location to the trust store location
sudo cp --remove-destination /path/to/cert/ca.crt /etc/pki/ca-trust/source/anchors/
### Update the trust store
sudo update-ca-trust
- Download the full offline bundle (sf.tar.gz) on the selected server.
- Download the infra-only offline bundle (sf-infra.tar.gz) on all the other nodes.
- Download and unzip the new installer (installer.zip) on all the nodes.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Make the original cluster_config.json file available on the primary server.
- Generate the new cluster_config.json file as follows:
  - If you have the old cluster_config.json file, use the following command to generate the configuration file from the cluster:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -i /path/to/old/cluster_config.json -o /path/to/store/generated/cluster_config.json
  - If you do not have the old cluster_config.json file, run the following command:
    cd /path/to/new-installer
    ./configureUiPathAS.sh config get -o /path/to/store/generated/cluster_config.json
  See Advanced installation experience to fill in the remaining parameters.
- Copy this cluster_config.json file to the installer folder on all nodes.
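The full bundle, the infra-only bundle, and the temporary extraction folder used later (--offline-tmp-folder /uipath/tmp) all need free disk space. A quick check before starting is shown below; the mount points are typical for Automation Suite nodes and may differ in your environment:
df -h /opt/UiPathAutomationSuite /var/lib/rancher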
Execution
Maintenance and backup
- Make sure you have enabled the backup on the cluster. For details, see Backing up and restoring the cluster.
- Connect to one of the server nodes via SSH.
- Verify that all desired volumes have backups in the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh verify-volumes-backup
Note:
The backup might take some time, so wait for approximately 15-20 minutes, and then verify the volumes backup again.
- To verify if Automation Suite is healthy, run:
kubectl get applications -n argocd
- Put the cluster in maintenance mode as follows:
  a. Execute the following command:
  /path/to/new-installer/configureUiPathAS.sh enable-maintenance-mode
  b. Verify that the cluster is in maintenance mode by running the following command:
  /path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Create the SQL database backup.
Upgrade infrastructure on servers
Note:
Upgrading the infrastructure on servers and agents simultaneously is not supported and will result in an error. Make sure to carry out these steps successively.
- Connect to each server via SSH.
- Become root by running sudo su -.
- Execute the following command on all server nodes:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle "/path/to/sf-infra.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Note:
This command also creates a backup of the cluster state and pauses all other scheduled backups.
Upgrade infrastructure on agents
Note:
Upgrading the infrastructure on servers and agents simultaneously is not supported and will result in an error. Make sure to carry out these steps successively.
- Connect to each agent node via SSH.
- Become root by running sudo su -.
- Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -k -i /path/to/cluster_config.json --offline-bundle "/path/to/sf-infra.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Execute the rest of the upgrade on the primary server
- Connect to the primary server via SSH.
- Become root by running sudo su -.
- Execute the following command:
/path/to/new-installer/install-uipath.sh --upgrade -f -s -i /path/to/cluster_config.json --offline-bundle "/path/to/sf.tar.gz" --offline-tmp-folder /uipath/tmp --install-offline-prereqs --accept-license-agreement -o /path/to/output.json
Note:
This command disables the maintenance mode that you enabled before the upgrade because all services are required to be up during the upgrade.
- After the successful upgrade and verification, resume the backup scheduling on the node by running the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
Rollback on error
Preparation
- Create a separate folder to store the old bundles, and perform the following operations inside that folder.
- Download and unzip the older version of the installer (installer.zip) on all the nodes.
  Note: Give proper permissions to the folder by running sudo chmod 755 -R <installer-folder>.
- Create a restore.json file and copy it to all the nodes. For details, see Backing up and restoring the cluster.
- Verify that the etcd backup data is present on the primary server at the following location: /mnt/backup/backup/<etcdBackupPath>/<node-name>/snapshots, where:
  - etcdBackupPath - the same as the one specified in backup.json when enabling the backup;
  - node-name - the hostname of the primary server VM.
Cluster cleanup
- Copy and run the dedicated script to uninstall everything from that node. Do this for all the nodes. For details, see Troubleshooting.
- Restore all UiPath databases to the older backup that was created before the upgrade.
Restore infra on server nodes
- Connect to the primary server (the same server node chosen during the upgrade).
- Restore infra by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r --accept-license-agreement --install-type online
- Connect to the rest of the server nodes one by one via SSH.
- Restore infra on these nodes by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j server --accept-license-agreement --install-type online
Note:
Run this command on server nodes one by one. Executing them in parallel is not supported.
Restore infra on agent nodes
- Connect to each agent VM via SSH.
- Restore infra on these nodes by running the following command:
/path/to/older-installer/install-uipath.sh -i /path/to/restore.json -o /path/to/output.json -r -j agent --accept-license-agreement --install-type online
Restore volumes data
- Connect to the primary server via SSH.
- Go to the new installer folder.
Note:
The previous infra restore commands were executed using the older installer, and the following commands are executed using the newer installer bundle.
- Disable the maintenance mode on the cluster by running the following command:
/path/to/new-installer/configureUiPathAS.sh disable-maintenance-mode
- Verify that maintenance mode is disabled by executing the following command:
/path/to/new-installer/configureUiPathAS.sh is-maintenance-enabled
- Copy the restore.json file that was used in the infra restore stage to the new installer bundle folder.
- Restore volumes from the newer installer bundle by executing the following command:
/path/to/new-installer/install-uipath.sh -i /path/to/new-installer/restore.json -o /path/to/new-installer/output.json -r --volume-restore --accept-license-agreement --install-type online
- Once the restore is completed, verify that everything is restored and working properly.
- During the upgrade, scheduled backups were disabled on the primary node. To enable them again, run the following command:
/path/to/new-installer/configureUiPathAS.sh resume-scheduled-backups
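One way to spot-check the cluster after the restore, assuming kubectl is configured on the node, is to confirm that all nodes are Ready, that no pods are stuck outside the Running or Succeeded phases, and that the ArgoCD applications report a healthy state:
kubectl get nodes
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
kubectl get applications -n argocd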