Using uipathctl.sh

The uipathctl.sh script provides an automated way to upgrade a cluster to a newer version, configure a backup, restore a cluster, and more, by running it on a single node. The automation of these tasks is done via the IT automation tool Ansible. Ansible uses SSH to log into the host nodes or machines and perform the required tasks.

Requirements

Before you can use the uipathctl.sh script, you must take the following steps:
- Identify the Ansible host node.
- Add the SSH signature of your nodes to the known hosts.
- Set up the SSH authentication method.
- Install Ansible and other prerequisite tools.
For instructions, see the following sections.
Identifying the Ansible host node

The Ansible host node is the machine on which you install Ansible. This machine must be a server node so that it has the required permissions to perform all the automations in the cluster.

In online installations, the Ansible host node can be any of the server nodes.

In offline installations, the Ansible host node must be a server node with the UiPath bundle disk attached and mounted at the /uipath location. If there is no server node with the UiPath bundle disk attached, you can simply mount an additional disk to any of the existing server nodes and consider it to be the Ansible host node.
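To confirm whether a candidate server node already has the bundle disk mounted at /uipath, you can check the mount point first (a quick sketch; findmnt is part of util-linux and available on RHEL):
findmnt /uipath || echo "/uipath is not mounted on this node"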
Adding the SSH signature of your nodes to the known hosts

Add the SSH signature of each node in the cluster to the known_hosts file on the Ansible host node. To do so, run the following command, replacing <node-private-ip> with the private IP address of each node in the cluster, one at a time:
ssh-keyscan -H <node-private-ip> >> ~/.ssh/known_hosts
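If you have many nodes, you can run the same command in a loop instead of once per node (a minimal sketch; the IP addresses are placeholders for your own node IPs):
for ip in 10.0.1.1 10.0.1.2 10.0.1.3 10.0.1.4; do
  ssh-keyscan -H "$ip" >> ~/.ssh/known_hosts
done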
Setting up the SSH authentication method

Ansible supports two SSH mechanisms:
- Option 1: Key-based SSH authentication – recommended. It uses private and public keys.
- Option 2: Password-based SSH authentication

Option 1: Key-based SSH authentication (recommended)

Step 1: Setting up the SSH key
The SSH key authentication mechanism uses a combination of private and public keys. Make sure you grant access to the public key of the Ansible host node on all other nodes by copying it. You generate the key pair using the ssh-keygen command, as described in the following sections.

Generating SSH keys
To generate a new SSH key, take the following steps:
- Generate a new SSH key using the ssh-keygen command and follow the instructions; see the example after this list.
- Write down the location of your key. Default values are:
  - Public key: ~/.ssh/id_rsa.pub
  - Private key: ~/.ssh/id_rsa
- Grant access to the public key of the Ansible host node on all other nodes by copying it.
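For example, the following non-interactive invocation generates an RSA key pair at the default location (a sketch only; the flags and the empty passphrase are assumptions, adjust them to your security requirements):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""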
Granting access to the public key on each node

To copy the public key of the Ansible host node to each node in the cluster, use the ssh-copy-id command. If the path of the SSH public key is not ~/.ssh/id_rsa.pub, make sure to replace it accordingly.
ssh-copy-id -i ~/.ssh/id_rsa.pub username@node-private-ip
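As with ssh-keyscan, you can repeat this for every node in a single loop (a minimal sketch; the username and IP addresses are placeholders for your environment):
for ip in 10.0.1.1 10.0.1.2 10.0.1.3 10.0.1.4; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "username@$ip"
done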
Step 2: Providing SSH key access to Ansible

Ansible uses the SSH mechanism to log into host machines and perform the required installation. For this reason, you must provide SSH key access to Ansible. Choose between the following methods:

Option 1: Using ssh-agent (recommended)

Ansible uses ssh-agent to obtain access to the nodes. For more information about ssh-agent, see the ssh-agent manual. To load your key into ssh-agent, run the following commands:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
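To verify that the key was loaded and that password-less access works, you could list the loaded identities and test a connection to one node (a quick sketch; replace the username and IP address with your own):
ssh-add -l
ssh username@10.0.1.2 'hostname'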
Option 2: Using a non-protected private key

Pass the --ansible-private-key parameter to the uipathctl.sh script. The parameter takes the absolute path of the private key and uses it for authentication.
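For example, you would append the parameter to whichever uipathctl.sh command you run (an illustrative sketch; the command placeholder and the key path are assumptions to adapt to your own operation and environment):
./uipathctl.sh <command> --ansible-private-key /root/.ssh/id_rsa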
Installing Ansible and other prerequisite tools

Checking if Ansible and other prerequisite tools are installed

You must ensure that Ansible and the following other supporting tools are installed on the previously selected Ansible host node:
- ansible (v2.8+) – To check if ansible is installed, run:
  ansible --version &>/dev/null || echo "Error: Ansible is not installed"
- ansible-playbook – To check if ansible-playbook is installed, run:
  ansible-playbook --version &>/dev/null || echo "Error: Ansible Playbook is not installed"
- sshpass – Required only when using a password for authentication along with the --ansible-ask-password parameter. To check if sshpass is installed, run:
  sshpass -V &>/dev/null || echo "Error: sshpass is not installed"
- zip – To check if zip is installed, run:
  zip --version &>/dev/null || echo "Error: zip is not installed"

Important: If any of the previous commands errors out, the targeted package is not installed on your machine. To install all the required tools, take the steps in the following section. If all tools are already installed, you can skip those steps.
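If you prefer a single check, a small loop over the four tools produces the same result (a minimal sketch; it only reports missing packages and does not install anything):
for tool in ansible ansible-playbook sshpass zip; do
  command -v "$tool" >/dev/null 2>&1 || echo "Error: $tool is not installed"
done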
Installing Ansible and other prerequisite tools

To install Ansible and the other related packages, take the following steps:
- Navigate to the installer folder on the Ansible host node. The installer is usually located in the /opt/UiPathAutomationSuite/{version} folder.
- Run the following commands:
  - For online installations:
    ./uipathctl.sh install-prerequisites --install-type online --accept-license-agreement
  - For offline installations: Download ansible.tar.gz and store it anywhere outside the installer folder. For download instructions, see ansible.tar.gz. If ansible.tar.gz is located in a different location, update the absolute path passed to the --offline-prerequisites-bundle parameter.
    ./uipathctl.sh install-prerequisites --install-type offline --offline-prerequisites-bundle ../ansible.tar.gz --accept-license-agreement
Advanced Ansible configuration

Generating the Ansible inventory.ini file

In most scenarios, the uipathctl.sh script automatically builds the inventory.ini file for Ansible. However, there are a few situations where you must build and provide the inventory.ini file to Ansible yourself. For example:
- While restoring the cluster from the backup data, because at the time of restore there is no healthy cluster from which to derive inventory.ini.
- When you want to provide advanced configuration, such as a username and SSH key, that is specific to each node.
For more details, see How to build your own inventory.
The following example shows the inventory.ini template that the uipathctl.sh script understands.
[FIRST_SERVER]
'10.0.1.1'
[SECONDARY_SERVERS]
'10.0.1.2'
'10.0.1.3'
[AGENTS]
'10.0.1.4'
'10.0.1.5'
[TASKMINING]
'10.0.1.6'
[GPU]
'10.0.1.7'
[all:vars]
ansible_connection=ssh
ansible_timeout=10
ansible_user=admin
Group | Value
---|---
FIRST_SERVER | The starting point where you run the uipathctl.sh script. This is also called the Ansible host node. You must provide the private IP address of this node.
SECONDARY_SERVERS | The group of the other server nodes in the cluster. You must provide the private IP addresses of all the other server nodes.
AGENTS | The group of agent nodes in the cluster. You must provide the private IP addresses of all agent nodes.
TASKMINING | The group of Task Mining nodes in the cluster. You must provide the private IP addresses of all the Task Mining nodes.
GPU | The group of GPU nodes in the cluster. You must provide the private IP addresses of all the GPU nodes.
all:vars | The group of variables applied to all the previously defined host groups. You can provide the variables per group or per host node. For details, see Assigning a variable to many machines: group variables.
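If you build inventory.ini by hand, you can sanity-check connectivity before handing the file to the script by running an ad-hoc Ansible ping against it (a hedged example; adjust the file path and note that it assumes Ansible is already installed):
ansible -i inventory.ini all -m ping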
Generating the Ansible variable file

The following Ansible variables can be stored in a YAML file and provided to Ansible. You can pass the variables file to uipathctl.sh using the --ansible-variables-file parameter.
# Path where the installer zip is available. By default, uipathctl.sh takes the current folder and compresses it to a zip before copying it to the other nodes.
installer_path: /path/to/installer.zip
# Path where installer will be copied on nodes
target_installer_base_path: /opt/UiPathAutomationSuite/<version>/installer
# Install type - online or offline
install_type: online
# Path on nodes where offline bundles will be copied
target_bundle_base_path: /opt/UiPathAutomationSuite/{version}
# Path on nodes where offline bundles will be extracted
target_tmp_path: /opt/UiPathAutomationSuite/tmp
# Basepath and filename for the various config files and bundles on the local machine
cluster_config_filename: cluster_config.json
cluster_config_basepath: /var/tmp/uipathctl_{version}
backup_config_filename: backup_config.json
backup_config_basepath: /var/tmp/uipathctl_{version}
restore_config_filename: restore.json
restore_config_basepath: /var/tmp/uipathctl_{version}
infra_bundle_filename: sf-infra.tar.gz
infra_bundle_basepath: /var/tmp/uipath_upgrade_<version>
Variable | Value
---|---
installer_path | The path of sf-installer.zip. If you do not provide this value, uipathctl.sh zips the current directory and copies it to the other nodes as the installer zip.
target_installer_base_path | The path where the installer is copied on the nodes. The default value is /opt/UiPathAutomationSuite/{version}/installer.
install_type | The installation method. Possible values are: online and offline.
target_bundle_base_path | The path on the nodes where the offline bundles are copied. The default location is /var/tmp/uipathctl_{version}.
target_tmp_path | The path on the nodes where the bundle is extracted. The default location is /opt/UiPathAutomationSuite/tmp.
cluster_config_filename | The name of the cluster configuration file. The default value is cluster_config.json.
cluster_config_basepath | The location where cluster_config.json is stored temporarily on the nodes during orchestration. The default value is /var/tmp/uipathctl_{version}.
backup_config_filename | The name of the backup configuration file. The default value is backup.json.
backup_config_basepath | The location where backup.json is stored temporarily on the nodes during orchestration. The default value is /var/tmp/uipathctl_{version}.
restore_config_filename | The name of the restore configuration file. The default value is restore.json.
restore_config_basepath | The location where restore.json is stored temporarily on the nodes during orchestration. The default value is /var/tmp/uipathctl_{version}.
infra_bundle_filename | The name of the bundle containing the infrastructure layers. This is the same as the one you provided to the uipathctl.sh script.
infra_bundle_basepath | The location where sf-infra.tar.gz is stored temporarily on the nodes during orchestration. This is the same as the one you provided to the uipathctl.sh script.
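As a quick illustration of how the variables file is consumed, you could save it as ansible-variables.yml and pass it to the script (a sketch only; the file name and path are placeholders, and the exact uipathctl.sh command depends on the operation you are performing):
./uipathctl.sh <command> --ansible-variables-file /path/to/ansible-variables.yml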
- Requirements
- Identifying the Ansible host node
- Adding the SSH signature of your nodes to the known hosts
- Setting up the SSH authentication method
- Option 1: Key-based SSH authentication (recommended)
- Option 2: Password-based SSH authentication
- Installing Ansible and other prerequisite tools
- Checking if Ansible and other prerequisite tools are installed
- Installing Ansible and other prerequisite tools
- Advanced Ansible configuration
- Generating the Ansible inventory.ini file
- Generating the Ansible variable file