Manual: Online upgrade
Perform the following steps on the first server node and then on all the other nodes (both server and agent) in the cluster.
Make sure you have enough free space in the /opt/UiPathAutomationSuite folder on all the nodes. If you do not have enough space, you can either increase the capacity of this folder or remove all the previous installer files except for cluster_config.json. You can always download the previous installer again.
To check the available space, run:
df -h /opt/UiPathAutomationSuite
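Before deleting anything, it can help to review what actually occupies the folder so that only old installer files are removed and cluster_config.json is kept. This is a generic sketch, not a UiPath-specific command:
# List the installer folder contents with sizes, largest last, to decide what can be removed
du -ah --max-depth=1 /opt/UiPathAutomationSuite | sort -h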
To prepare for the upgrade, take the following steps:
To configure the backup, take the following steps:
Once the backup is created, continue with the following steps.
Putting the cluster in maintenance mode shuts down the ingress controller and all the UiPath services, blocking all incoming traffic to the Automation Suite cluster.
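Once maintenance mode is on, you can confirm that ingress has actually stopped serving traffic. The sketch below assumes the Istio ingress gateway runs in the istio-system namespace, which is the usual Automation Suite layout; adjust the namespace if your cluster differs:
# In maintenance mode, the ingress gateway should be scaled down or its pods terminating
kubectl -n istio-system get deployment istio-ingressgateway
kubectl -n istio-system get pods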
You must perform the infrastructure upgrade on all the nodes in the cluster.
You cannot perform this step on multiple nodes at the same time; you must wait for the upgrade to finish on each node before moving to the next one.
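One way to confirm that a node has come back cleanly before moving to the next one is to check node status from any server node; this is a generic kubectl check, not a UiPath-specific command:
# All nodes should report Ready and the expected version before you continue with the next node
kubectl get nodes -o wide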
If applicable, set the zone_resilience flag to false in the cluster_config.json file in /opt/UiPathAutomationSuite/Installer.
This step upgrades the fabric and service components running in the cluster. You must perform this step only once, from any of the server nodes.
After performing the upgrade, you can take the following additional steps:
- To verify that Automation Suite is healthy, run the following command (a slightly more detailed check is sketched after this list):
  kubectl get applications -n argocd
- When upgrading from an Automation Suite version prior to 2023.4.0, verify that Apps is running, then remove MongoDB:
  ./configureUiPathAS.sh mongodb uninstall --force
- If you face an error when removing MongoDB with the ./configureUiPathAS.sh mongodb uninstall --force command, run the following commands:
  kubectl patch application "fabric-installer" -n argocd --type=merge -p '{"spec" : {"syncPolicy" : {"automated" : {"selfHeal": false }}}}'
  ./configureUiPathAS.sh mongodb uninstall --force
  kubectl patch application "fabric-installer" -n argocd --type=merge -p '{"spec" : {"syncPolicy" : {"automated" : {"selfHeal": true }}}}'
- If you experience issues with image vulnerabilities or storage consumption after performing an upgrade, delete the images from the old installer. For details, see the Troubleshooting section.
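Note that the MongoDB workaround above temporarily disables automated self-heal on the fabric-installer ArgoCD application, removes MongoDB, and then re-enables self-heal.
For a slightly more detailed health check than kubectl get applications -n argocd, you can print the sync and health status of every ArgoCD application; this is a generic ArgoCD/kubectl sketch, not a UiPath-specific command:
# Print name, sync status, and health status for each ArgoCD application
kubectl get applications -n argocd -o custom-columns='NAME:.metadata.name,SYNC:.status.sync.status,HEALTH:.status.health.status'
All applications should report Synced and Healthy before you continue.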
You must resume the backup after upgrading and performing the cleanup and migration operations.
Make sure Automation Suite is up and running and your automations continue to run as expected before proceeding with the following steps.
To enable the backup, follow the instructions described in the Backing up and restoring the cluster documentation.
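As an optional sanity check after re-enabling the backup, you can confirm that backup schedules exist again. The sketch below assumes the cluster backup is driven by Velero running in a velero namespace; both the namespace and the resource type are assumptions here and may not match your Automation Suite version, so rely on the official backup documentation:
# List Velero backup schedules (assumed namespace: velero)
kubectl -n velero get schedules.velero.io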
After performing an Automation Suite cluster upgrade, Azure and AWS template deployments require some changes to ensure a new node joins the cluster correctly through scale-out operations. To automate the changes, we recommend using the dedicated scripts. For instructions, see the Azure deployment template docs and the AWS deployment template docs.
When performing an upgrade for a cluster deployed with Azure templates, an error similar to the one shown in the following image might occur:
The error is related to the fixed_rke_address field in the cluster_config.json file. You must change the value of this field to the IP address of the first server instance prior to running the upgrade command. The cluster_config.json file uploaded to the key vault should continue to have the IP address of the ILB, since the node is not healthy and traffic will not be balanced to it.
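For illustration only, the local copy of the field could be updated with jq; 10.0.0.4 is a placeholder for the first server instance's IP address, not a value from the original documentation:
# Point fixed_rke_address at the first server instance (placeholder IP) in the local cluster_config.json
jq '.fixed_rke_address = "10.0.0.4"' cluster_config.json > cluster_config.json.tmp && mv cluster_config.json.tmp cluster_config.json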