Evaluating Your Storage Needs
An Automation Suite cluster uses the data disks attached to its server nodes as storage resources available to all the products enabled on your cluster. Each product uses these resources differently.
To understand your storage needs and plan for them accordingly, refer to the following terminology and guidelines.
- Server node disk size – the size of all individual disks attached to each server node.
  - All servers must have the same number of disks attached.
  - Disks on each server may have different sizes as long as the sum of all the disk sizes is identical on all servers.
- Total cluster disk size – the server node disk size multiplied by the number of server nodes.
- Application available storage – the amount of storage available for applications to consume.
  - Application available storage is lower than the total cluster disk size due to the way fault resiliency and high availability are implemented in the Automation Suite cluster.
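As a quick sketch of how these terms relate, total cluster disk size is simply the per-node disk size multiplied by the number of server nodes; the 512 GiB / 3-server figures below match the Basic profile in the table that follows:

```shell
# Server node disk size: the sum of all disks attached to one server
# (example: a single 512 GiB data disk per server)
NODE_DISK_GIB=512
SERVERS=3

# Total cluster disk size = server node disk size x number of server nodes
TOTAL_GIB=$(( NODE_DISK_GIB * SERVERS ))
echo "${TOTAL_GIB} GiB"   # 1536 GiB, i.e. 1.5 TiB
```

Note that the application available storage is considerably lower than this total, because replication for fault resiliency consumes part of the raw capacity.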
The following table describes the multi-node HA-ready hardware requirements for the Basic and Complete profiles in the context of the previously introduced terms.
| Preset hardware configuration | Number of server nodes | Server node disk size | Total cluster disk size | Application available storage (online) | Application available storage (offline) |
|---|---|---|---|---|---|
| Basic | 3 | 512 GiB | 1.5 TiB | 41 GiB | 37 GiB |
| Complete | 3 | 2 TiB | 6 TiB | 291 GiB | 286 GiB |
To leverage the 291 GiB of available storage, you must resize the PVC to 291 GiB instead of the preconfigured 100 GiB. Otherwise, your applications cannot take advantage of more than 100 GiB.
For instructions, see Resizing PVC.
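As an illustration of what such a resize involves, the sketch below builds the patch that raises a PVC's storage request; the PVC name and namespace are hypothetical placeholders, so substitute the values from your own cluster:

```shell
# Hypothetical example: PVC name and namespace are placeholders,
# not the actual Automation Suite resource names.
NEW_SIZE="291Gi"
PATCH=$(printf '{"spec":{"resources":{"requests":{"storage":"%s"}}}}' "$NEW_SIZE")

# Apply with (requires a StorageClass that has allowVolumeExpansion: true):
#   kubectl -n <namespace> patch pvc <pvc-name> --type merge -p "$PATCH"
echo "$PATCH"
```

Kubernetes only permits increasing a PVC's size, never shrinking it, and only when the underlying StorageClass allows volume expansion.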
As you enable and use products on the cluster, they consume storage from the application available storage. Products usually have a small enablement footprint, plus a usage-dependent footprint that varies with the use case, scale of use, and project. Storage consumption is evenly distributed across all the storage resources (data disks), and you can monitor storage utilization levels using the Automation Suite monitoring stack.
The Automation Suite cluster uses a Kubernetes concept called Persistent Volumes as an abstraction that represents the disks across all the nodes in the cluster.
To avoid instabilities, we recommend setting up monitoring and alerts that constantly check whether the free space on the Persistent Volumes drops below the application available storage value. For more details, see Monitoring Persistent Volumes.
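A minimal sketch of such an alert, assuming the standard kubelet volume metrics exposed by a Prometheus-based monitoring stack; the rule name and the 25% free-space threshold are illustrative choices, not values from Automation Suite:

```yaml
groups:
  - name: pv-storage
    rules:
      - alert: PersistentVolumeLowFreeSpace   # illustrative name
        expr: |
          kubelet_volume_stats_available_bytes
            / kubelet_volume_stats_capacity_bytes < 0.25
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Less than 25% free space left on PVC {{ $labels.persistentvolumeclaim }}"
```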
If an alert triggers, you can mitigate it by increasing the storage capacity of your cluster as described in the following section.
If your evaluated storage needs exceed what the recommended hardware requirements provide, you can add more storage capacity using either one or both of the following methods:
- Add more server nodes with disks. For instructions, see Adding a new node to the cluster.
- Add more disks to the existing nodes. For instructions, see Extending the data disk in a single-node evaluation environment and Extending the data disk in a multi-node HA-ready production environment.
Important: For each 60 GiB of product-specific storage needed, your Automation Suite cluster requires an additional 1 TiB of total storage, distributed equally across your server nodes.
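To make the sizing rule above concrete, the sketch below scales the ratio from the Important note (1 TiB of total cluster storage per 60 GiB of product-specific storage) for a three-server cluster; the 120 GiB estimate is just an example figure:

```shell
PRODUCT_GIB=120   # example product-specific storage estimate
SERVERS=3

# 1 TiB (1024 GiB) of total cluster storage per 60 GiB of product storage
EXTRA_CLUSTER_GIB=$(( PRODUCT_GIB / 60 * 1024 ))

# Distributed equally across the server nodes (integer division)
PER_NODE_GIB=$(( EXTRA_CLUSTER_GIB / SERVERS ))
echo "${EXTRA_CLUSTER_GIB} GiB total, ${PER_NODE_GIB} GiB per node"
# -> 2048 GiB total, 682 GiB per node
```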
You can estimate your storage consumption using the product-specific metric in the following tables. These tables describe how much content you can place on your cluster out of the box. For reference, they include the storage footprint of a typical usage scenario of each product.
Basic product selection
| Product | Storage-driving metric | Storage per metric | Typical use case |
|---|---|---|---|
| Orchestrator | | | Typically, a package is 5 MiB, and buckets, if any, are less than 1 MiB. A mature enterprise has 5 GiB of packages and 6 GiB of buckets deployed. |
| Action Center | | | Typically, a document takes 0.15 MiB, and the forms to fill take an additional 0.15 KiB. In a mature enterprise, this can add up to 4 GiB in total. |
| Test Manager | | | Typically, all files and attachments add up to approximately 5 GiB. |
| Insights | | | 2 GiB are required for enablement, with the storage footprint growing with the number of dashboards. A well-established, enterprise-scale deployment requires another few GiB for all the dashboards. |
| Automation Hub | N/A | N/A | 2 GiB fixed footprint |
| Automation Ops | N/A | N/A | No storage footprint |
Complete product selection
| Product | Storage-driving metric | Storage per metric | Typical use case |
|---|---|---|---|
| Apps | | | Typically, the database takes approximately 5 GiB, and a typical complex app consumes approximately 15 MiB. |
| AI Center | | | A typical, established installation consumes 8 GiB for 5 packages and an additional 1 GiB for the datasets. A pipeline may consume an additional 50 GiB, but only when actively running. |
| Document Understanding | | | In a mature deployment, 12 GiB will go to ML models, 17 GiB to OCR, and 50 GiB to all stored documents. |
| Task Mining | | | Typically, about 200 GiB of activity log data must be analyzed to suggest meaningful automations. Highly repetitive tasks, however, may require much less data. |