Deployment architecture
For more information on the core concepts used in an Automation Suite deployment, refer to Glossary.
Automation Suite supports the following two deployment modes:
| Deployment mode | Description |
|---|---|
| Single-node — evaluation | Supported for evaluation and demo scenarios. |
| Multi-node — production, HA-enabled | Supported for production use. You can perform additional configuration post-deployment to have full HA capabilities. |
See Supported use cases for single-node and multi-node installations for more details on how to choose the deployment mode that best suits your needs.
This page offers insight into the Automation Suite architecture and describes the components bundled into the installer.
A server node hosts the cluster management services (control plane) that perform important cluster operations such as workload orchestration, cluster state management, and load balancing of incoming requests. Depending on underlying resource availability, Kubernetes may also schedule a few of the UiPath products and shared components on server nodes.
An agent node is responsible for running the UiPath products and shared components only.
A specialized agent node runs special workloads, such as Task Mining analysis and Document Understanding pipelines, that require GPU capability. However, the core Task Mining and Document Understanding services still run on the server or agent nodes. Specialized agent nodes do not host any other UiPath products or shared components.
A single-node evaluation deployment here means a single server node. It does not imply that the entire Automation Suite is deployed on a single machine. You may have to add agent or specialized agent nodes if the entire product suite cannot fit on a single server node, or if you want to run special tasks such as Task Mining analysis and Document Understanding pipelines, which require GPU capabilities.
A multi-node HA-ready production deployment involves three or more server nodes behind a load balancer. This ensures that Automation Suite remains available to perform critical business workflows even when one of the server nodes goes down. Agent nodes are optional; their number depends on actual usage.
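As an illustration of the load balancer sitting in front of the server nodes, a minimal HAProxy fragment might look like the sketch below. The IP addresses are placeholders, and the port choice is an assumption based on typical RKE2 setups (443 for HTTPS traffic; the Kubernetes API on 6443 and RKE2 node registration on 9345 would need similar backends); consult the official load balancer configuration instructions for the authoritative list.

```
# Sketch only: TCP passthrough from the load balancer to three server nodes.
# The node IPs below are placeholders, not values from the product docs.
frontend https_front
    bind *:443
    mode tcp
    default_backend server_nodes_https

backend server_nodes_https
    mode tcp
    balance roundrobin
    server server1 10.0.0.11:443 check
    server server2 10.0.0.12:443 check
    server server3 10.0.0.13:443 check
```

TCP (rather than HTTP) mode is used here so that TLS terminates on the cluster nodes instead of the load balancer.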
In a multi-node setup, High Availability (HA) is enabled by default. However, the Redis-based in-memory cache used by cluster services runs on a single pod and represents a single point of failure. To mitigate the impact of a cache node failure or restart, you can purchase the High Availability Add-on (HAA), which enables a redundant, multi-pod deployment of the cache.
For more details on how to enable HAA in a multi-node setup, see Enabling High Availability Add-on for the cluster.
An online deployment means Automation Suite requires access to the internet during both installation and runtime. All the UiPath® products and supporting libraries are hosted either in the UiPath® registry or in UiPath-trusted third-party stores.
You can restrict access to the internet with either a restricted firewall or a proxy server by blocking all internet traffic other than what Automation Suite requires. This type of setup is also known as a semi-online deployment. For more details, see Configuring the firewall and Configuring the proxy server.
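To make the semi-online pattern concrete, the sketch below generates (but does not apply) egress allowlist rules for a set of endpoints. The hostnames are illustrative placeholders, not the official Automation Suite endpoint list, and real rule management belongs in your firewall tooling; see Configuring the firewall for the actual requirements.

```shell
#!/usr/bin/env bash
set -euo pipefail

# ILLUSTRATIVE placeholders -- substitute the official endpoint list
# from the Automation Suite documentation.
ALLOWED_ENDPOINTS=("registry.example.com" "download.example.com")

# Build the rules as text instead of applying them, so they can be
# reviewed before use.
RULES=""
for host in "${ALLOWED_ENDPOINTS[@]}"; do
  RULES+="iptables -A OUTPUT -p tcp --dport 443 -d ${host} -j ACCEPT"$'\n'
done
# Everything not explicitly allowed is dropped.
RULES+="iptables -A OUTPUT -p tcp --dport 443 -j DROP"

printf '%s\n' "$RULES"
```

The same allow-then-drop shape applies whether you enforce it with iptables, firewalld, or a cloud security group.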
These types of deployments are easier and faster to install and manage, and require fewer hardware resources, than offline deployments.
An offline deployment (air-gapped) is a completely isolated setup without access to the internet. This kind of setup requires the installation of an additional registry to store all the UiPath® products' container images and binaries, which are shipped in the form of a tarball.
Uploading the binaries to the registry (known as hydration) introduces additional hardware requirements and installation complexity, increasing the time required to perform an installation compared to an online deployment.
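The hydration step can be pictured as loading each image tarball and pushing it to the local registry. The sketch below only prints the commands it would run (a dry run); the registry address, bundle layout, and image names are made-up placeholders, and the real installer automates this step.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder values -- the actual bundle layout and registry address
# come from the Automation Suite offline installer, not this sketch.
LOCAL_REGISTRY="registry.local:30071"
IMAGE_TARBALLS=("images/orchestrator.tar" "images/istio-proxy.tar")

PLAN=""
for tarball in "${IMAGE_TARBALLS[@]}"; do
  name="$(basename "$tarball" .tar)"
  # Load the image from its tarball, retag it for the local registry,
  # then push it so cluster nodes can pull without internet access.
  PLAN+="podman load -i ${tarball}"$'\n'
  PLAN+="podman tag ${name}:latest ${LOCAL_REGISTRY}/${name}:latest"$'\n'
  PLAN+="podman push ${LOCAL_REGISTRY}/${name}:latest"$'\n'
done

printf '%s' "$PLAN"
```

Every node in the cluster then pulls images from `registry.local:30071` (or whatever address your local registry actually uses) instead of the internet-hosted UiPath registry.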
An offline installation increases complexity not only during installation, but also for cluster management operations such as machine maintenance, disaster recovery, upgrades to newer versions, and applying security patches.
You cannot change the deployment method post-installation: an installation performed online cannot be switched to offline, and vice versa. Choose your deployment strategy after careful consideration.
The Automation Suite installer bundles both required and optional components.
The following table lists these components:
| Component | Optional/Required | Description |
|---|---|---|
| RKE2 | Required | Rancher-provided Kubernetes distribution. It is the container orchestration platform that runs all the architectural components and services. |
| Rancher Server | Required | Rancher's Kubernetes management tool. |
| Longhorn | Required | Rancher-provided distributed block storage for Kubernetes. It exposes external storage inside Kubernetes clusters for workloads to claim and use as mounted persistent storage. |
| CEPH Object Store | Required | Open-source storage provider that exposes Amazon S3-compliant object/blob storage on top of persistent volumes created by Longhorn. It gives services blob storage-like functionality for their operations. |
| Argo CD | Required | Open-source declarative CD tool for Kubernetes. It follows the GitOps pattern of using Git repositories as the source of truth for defining the desired application state. It provides application lifecycle management (ALM) capabilities for Automation Suite components and UiPath services that run in a Kubernetes cluster. |
| Docker registry | Required | Open-source Docker registry used for pushing and pulling install-time and runtime container images on premises. |
| Istio | Required | Open-source service mesh that provides functionality such as ingress, request routing, and traffic monitoring for the microservices running inside the Kubernetes cluster. |
| Prometheus | Required | Open-source system monitoring toolkit for Kubernetes. It can scrape or accept metrics from Kubernetes components as well as workloads running in the clusters, and store them in a time-series database. |
| Grafana | Required | Open-source visualization tool used for querying and visualizing data stored in Prometheus. You can create and ship a variety of dashboards for cluster and service monitoring. |
| Alertmanager | Required | Open-source tool that handles alerts sent by client applications such as the Prometheus server. It is responsible for deduplicating, grouping, and routing them to the correct receiver integrations, such as email, PagerDuty, or OpsGenie. |
| Redis | Required | Redis Enterprise non-HA (single shard) used by some UiPath services for centralized cache functionality. |
| RabbitMQ | Required | Open-source reliable message broker used by some UiPath services to implement asynchronous execution patterns. |
| MongoDB | Optional | Source-available, cross-platform, document-oriented database program. Classified as a NoSQL database, MongoDB uses JSON-like documents with optional schemas. MongoDB is deployed only when UiPath Apps is enabled. |
| Fluentd and Fluent Bit | Required | Open-source reliable log-scraping solution. The logging operator deploys and configures a background process on every node to collect container and application logs from the node file system. |
| Gatekeeper | Required | Open-source tool that allows a Kubernetes administrator to implement policies for ensuring compliance and best practices in their cluster. |