Automation Suite installation guide
Using the Automation Suite support bundle
You can get the Automation Suite Support Bundle tool in the following ways:
- By unzipping the sf-installer.zip installer package.
- By downloading the supportability-tools.zip archive.
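For example, a minimal sketch of extracting the tool from either package (the target directories are assumptions; use the location where you downloaded the archive):
# Extract the tool from the installer package
unzip sf-installer.zip -d sf-installer
# Or extract it from the standalone supportability tools archive
unzip supportability-tools.zip -d supportability-tools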
Before running the Automation Suite Support Bundle tool, navigate to the installer folder. You may find the installer in the following location or anywhere you downloaded it:
cd /opt/UiPathAutomationSuite/{version}/installer
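For example, if the installed version were 2023.10.1 (a placeholder; substitute the version deployed on your cluster), the command would be:
cd /opt/UiPathAutomationSuite/2023.10.1/installer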
To start using the Automation Suite Support Bundle tool, run the following command:
./Support-Tools/support-bundle/support-bundle.sh
- Running only bash support-bundle.sh collects logs for the last 2 days from Ceph, the S3-compatible in-cluster objectstore.
- To set the start date for the log collection process, use the -F argument and enter the date in the YYYY-MM-DD format. To set the number of days for which you want to collect logs, calculated from the start date, use the -D argument and enter the number of days as an integer. For example, to collect logs for the interval between July 20, 2024, and July 24, 2024, run bash support-bundle.sh -F 2024-07-20 -D 5. To collect logs for a particular date, use the -F argument to specify the date and the -D argument to set an interval of one day. For example, to collect logs for July 20, 2024, run bash support-bundle.sh -F 2024-07-20 -D 1. If you do not set the number of days for which you want to collect logs, the tool uses a default interval of 7 days, calculated from the start date. For example, running bash support-bundle.sh -F 2024-07-20 collects logs for the interval between July 20, 2024, and July 26, 2024.
- Logs are collected for almost all the namespaces used in Automation Suite. To collect logs only for particular namespaces, use the -N argument followed by a comma-separated list of namespaces, for example bash support-bundle.sh -N uipath,uipath-infra. For combined usage examples, see the commands after this list.
- To generate the support bundle for MongoDB, add -m at the end of the command (bash support-bundle.sh -m). This command generates a .tgz compressed file named mongo_support_bundle_<current_timestamp>.tgz in the same folder where the bash command is run. Attach this file and send it to the UiPath Support team.
- The RKE2, uipath, Redis, and Longhorn bundles are generated by default.
  - To disable the RKE2 bundle, run the support-bundle.sh -e command.
  - Long-running commands in the RKE2 bundle are disabled. To enable them, run the scripts/rke2-support-bundle.sh -a command.
  - To disable Redis bundle generation, run the support-bundle.sh -s command.
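For quick reference, the following commands combine the arguments described above. The dates and namespace list are placeholders, and combining -F, -D, and -N in a single run is an assumption, so adjust the command to your scenario:
# Collect logs from July 20, 2024, for 3 days, limited to two namespaces
bash support-bundle.sh -F 2024-07-20 -D 3 -N uipath,uipath-infra
# Additionally generate the MongoDB support bundle
bash support-bundle.sh -m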
The .zip archive contains the following files and folders:
| File/folder | Description |
| --- | --- |
| | Contains logs collected from the S3 store. |
| | Contains event descriptions from all the namespaces. |
| | Contains descriptions of all the nodes in the cluster. |
| | Contains descriptions of the corresponding objects from all namespaces. |
| | Contains the last 4 hours of logs from the currently running pods. These logs are collected to cover the scenario in which the Ceph S3 store is down. |
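To review what was collected before sending the data to UiPath Support, you can list the contents of the generated archives. The file names below are placeholders, as the actual names include a timestamp:
# List the contents of the main support bundle
unzip -l <support_bundle>.zip
# List the contents of the MongoDB bundle, if generated
tar -tzf mongo_support_bundle_<current_timestamp>.tgz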
If you pass a specific list of namespaces to the command, the archive structure is similar, but limited to the namespaces you specified.