Automation Suite on Linux Installation Guide
Running the diagnostics tool
The Automation Suite diagnostics tool runs a set of checks to generate a report on the cluster health, which you can analyze to identify issues and their potential root causes. The tool helps you find common issues, such as lost database connectivity or invalid or expired credentials.

The diagnostics tool is available in both uipathctl and uipathtools, which you can download to your management machine. uipathtools is a CLI tool that contains a subset of uipathctl capabilities specific to health commands. The tool is backward compatible and works with any of the supported Automation Suite versions. We recommend using uipathtools as the first step if you face any issue.
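Depending on how you obtained the binaries, you may need to mark them as executable before running the commands in the following sections. A minimal sketch, assuming both files sit in your current directory:

chmod +x ./uipathctl ./uipathtools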
Quick validation

The check and test commands provide quick insights into the state of the cluster without running a deep analysis.

- check relies on the ArgoCD health and sync status and does not modify any state in the cluster.
- test looks into the applications, deployments, or pods and temporarily mutates the state of the cluster to provide you with those insights.
Health check

To run a health check, use one of the following commands, depending on the CLI tool you use:

- If you use uipathctl, run:
  ./uipathctl health check
- If you use uipathtools, run:
  ./uipathtools health check
The uipathctl health check command checks the health of all the components. However, it also allows you to check strictly the components that you are interested in:

- If you want to exclude components from the execution, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health check --excluded SQL. The command checks the health of all components except for SQL.
- If you want to include only certain components in the execution, use the --included flag. For example, if you only want to check the health of DNS and objectstore, run uipathctl health check --included DNS,OBJECTSTORAGE.
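If you run the health check from a script or CI pipeline, you can branch on the command outcome. The following sketch assumes the CLI exits with a non-zero code when a check fails, which you should confirm in your environment:

# Run the check and surface failures to the calling pipeline.
if ! ./uipathctl health check --excluded SQL; then
  echo "Health check reported failures; review the output above." >&2
  exit 1
fi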
Analyzing the logs

- After running a health check, the logs show that the health check for the Data Service application failed:
  ❌ [DATASERVICE] ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
- After further investigation, it becomes clear that the Data Service application failed because the dataservice-runtime-8f5bb7d56-v5krg and dataservice-taskrunner-787df76c74-98h5l pods are in a failed state. If you analyze further, you can find that the dataservice-external-storage-secret is missing:
  ❌ [POD] ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
- To fix this issue, ensure that you provided the correct credentials for the objectstore in the cluster_config.json file.
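You can confirm that the secret is indeed missing before editing cluster_config.json; the namespace and secret name below are taken from the log output above:

kubectl -n uipath get secret dataservice-external-storage-secret

If the command returns an Error from server (NotFound), the secret was never created, which typically points to incorrect or missing objectstore credentials in cluster_config.json.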
Health test

To run a health test, use one of the following commands, depending on the CLI tool you use:

- If you use uipathctl, run:
  ./uipathctl health test
- If you use uipathtools, run:
  ./uipathtools health test
The uipathctl health test command executes health tests on all the components. However, it also allows you to test strictly the components that you are interested in:

- If you want to exclude components from the execution, use the --excluded flag. For example, if you do not want to test the health of SQL, run uipathctl health test --excluded SQL. The command tests the health of all components except for SQL.
- If you want to include only certain components in the execution, use the --included flag. For example, if you only want to test the health of DNS and objectstore, run uipathctl health test --included DNS,OBJECTSTORAGE.
If you compare the outputs of the check and test commands for the Data Service application, you can see that the former validates the health of the application, whereas the latter checks the routing.
Known issue

You might get an error message similar to the following sample. You can ignore it, as no action is required from your side.

E0621 23:32:56.426321 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.426392 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.444420 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.446150 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.513357 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
Deep validation

The diagnose command provides deep insights into the state of the cluster. It helps you identify issues at all levels, such as SQL, objectstore, node, secret, Istio, and networking.

- It covers both the check and test commands.
- It runs the prerequisite checks performed before the Automation Suite installation, to validate changes to the environment configuration that were made post-installation and that can be the potential cause of the issue.
- It runs on all the nodes to gather any node-specific issues, such as resource unavailability or network interference.
To run a diagnostic check, use one of the following commands, depending on the CLI tool you use:

- If you use uipathctl, run:
  ./uipathctl health diagnose cluster_config.json --versions version.json
- If you use uipathtools, run:
  ./uipathtools health diagnose cluster_config.json --versions version.json
The diagnose command runs at multiple levels, such as infrastructure, networking, storage, pods, and DNS.
Analyzing the logs

There are two potential issues that you can notice in the following logs:

- Istio has a bad configuration, which can cause issues when accessing the Document Understanding platform:
  ❌ [ISTIO] ✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
  ❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
- Data Service is unavailable. See the Ceph references in the code example:
  ❌ [DATASERVICE] ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
  ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
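To dig into Istio configuration problems such as the IST0101 error above, you can also run Istio's own analyzer against the affected namespace, assuming istioctl is available on your management machine:

istioctl analyze -n uipath

The analyzer reports the same IST01xx validation codes, which helps confirm whether a VirtualService references a host and port that do not exist.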
Known issues

You might get an error message similar to the sample shown in the earlier Known issue section. You can ignore it, as no action is required from your side.
Additional utilities

All three commands (check, test, and diagnose) support additional filtering and output formats.
Filtering

Filter | Description | Usage
---|---|---
--included | Comma-separated list of the services to include in the validation | uipathctl health diagnose --included ISTIO,INSIGHTS runs the diagnosis only against Istio and Insights.
--excluded | Comma-separated list of the services to exclude from the validation | uipathctl health test --excluded ISTIO,INSIGHTS runs the test on the entire cluster, except Istio and Insights.
Output format

The health commands support multiple output formats: json, yaml, text, and junit. You can pass any of these values to any of the commands via the --output flag. These output formats are handy when you want to leverage these tools to build your own troubleshooting framework on top of them.
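For example, you can capture the report as JSON for machine processing or as JUnit XML for a CI system. A sketch, assuming the report is written to standard output; the exact structure of the report may vary between versions:

./uipathctl health check --output json > health-report.json
./uipathctl health check --output junit > health-report.xml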
Reading diagnostics reports

INFO logs

INFO logs in green show that the required checks passed. However, you should still properly check the disk and memory usage to avoid hidden errors.

WARN messages

Even though these messages do not signal a high risk, you might have to rectify them, as they might be affecting some services in certain scenarios.

ERROR messages

You must fix the issues described by these messages, as they impact some services in the cluster.
rke2-server or rke2-agent service down

If the rke2-server or rke2-agent service is down, the node is down. Try restarting the service using systemctl restart <service-name>, as this should fix the issue.
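For instance, on a server node you can check and restart the service as follows; on agent nodes, substitute rke2-agent:

systemctl status rke2-server
systemctl restart rke2-server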
Directory size mounted at /var/lib

The tool checks the size of the directory mounted at /var/lib, as Kubernetes uses it to store its data. If the directory is full, various issues might arise. To prevent these problems, make sure to increase its size.
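You can check the current usage directly on each node:

df -h /var/lib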
Disk pressure or memory pressure

For all the nodes, we specify whether they are under disk pressure or memory pressure. If that happens, workloads on these nodes might start showing issues. Check if there are any other processes running on these nodes that are consuming resources, and remove them if that is the case.
Ceph services status

We use Ceph as S3 object storage for storing logs and files from different applications. You can view the status of its services. If they are down, you might have to restart them. Make sure to also check whether the disk used by Ceph is full.
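A quick way to see whether the Ceph services are running is to list their pods. A sketch, assuming the in-cluster objectstore runs in the default rook-ceph namespace:

kubectl -n rook-ceph get pods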
Ports 443 and 31443

Automation Suite requires ports 443 and 31443 to be open with the hostname that was provided. The report indicates if they are not accessible. Make sure to open the appropriate ports if they are flagged here.
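You can verify reachability from a machine that needs access to the cluster; automationsuite.example.com below is a placeholder for your own FQDN:

nc -zv automationsuite.example.com 443
nc -zv automationsuite.example.com 31443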
Certificate validity

The tool checks whether the uploaded certificate is valid for the given hostname and whether it has expired. If the certificate does not meet these criteria, errors occur. To prevent this, make sure to check your uploaded certificate and change it if required.
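To inspect the certificate that the cluster currently serves, you can use openssl, again with a placeholder FQDN:

echo | openssl s_client -connect automationsuite.example.com:443 -servername automationsuite.example.com 2>/dev/null | openssl x509 -noout -subject -dates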
GPU

Since some services require a GPU to be present on some of the nodes in the cluster, the Automation Suite diagnostics tool checks whether there are GPU nodes and prints the number of such nodes. If you expect GPU nodes to be present and they do not show up here, something went wrong in the GPU setup.
RabbitMQ and DockerRegistry

RabbitMQ and DockerRegistry are two important components that some services use. If either of them is down, you need to investigate the issue and perform a restart.
ArgoCD services down

ArgoCD is our application lifecycle management (ALM) tool. If any of its services are down, other applications may become outdated or show other issues. Recovering these services is important and might need further debugging.
Missing or degraded ArgoCD applications

The Automation Suite diagnostics tool shows whether ArgoCD applications are missing or degraded.

- If applications are missing, go to the ArgoCD UI and sync them.
- If applications are degraded, additional debugging is needed to investigate the errors thrown by ArgoCD, as shown in the sketch after this list.
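If you prefer the command line to the ArgoCD UI, the ArgoCD CLI offers equivalent operations. A sketch, assuming you are already logged in with argocd login; the application name is hypothetical:

argocd app list
argocd app sync dataservice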