Step 9: Configuring the node ports
Make sure to enable the following ports on your firewall for each source:
Port | Protocol | Source | Purpose | Requirements
---|---|---|---|---
22 | TCP | Jump server / client machine | For SSH (installation, cluster management debugging) | Do not open this port to the internet. Allow access to the client machine or jump server.
443 | TCP | All nodes in a cluster and the load balancer | For HTTPS (accessing Automation Suite) | This port must have inbound and outbound connectivity from all the nodes in the cluster and the load balancer.
2379 | TCP | All nodes in a cluster | etcd client port | Do not open this port to the internet. Access between nodes must be ensured over a private IP address.
2380 | TCP | All nodes in a cluster | etcd peer port | Do not open this port to the internet. Access between nodes must be ensured over a private IP address.
6443 | TCP | All nodes in a cluster | For accessing the Kube API using HTTPS; required for node joining | This port must have inbound and outbound connectivity from all the nodes in the cluster.
8472 | UDP | All nodes in a cluster | Required for Flannel (VXLAN) | Do not open this port to the internet. Access between nodes must be ensured over a private IP address.
 | TCP | All nodes in the cluster | Used by Cilium for monitoring and handling pod crashes | This port must have inbound and outbound connectivity from all the nodes in the cluster.
9345 | TCP | All nodes in a cluster and the load balancer | For accessing the Kube API using HTTPS; required for node joining | This port must have inbound and outbound connectivity from all the nodes in the cluster and the load balancer.
10250 | TCP | All nodes in a cluster | kubelet / metrics server | Do not open this port to the internet. Access between nodes must be ensured over a private IP address.
 | TCP | All nodes in a cluster | NodePort port for internal communication between nodes in a cluster | Do not open this port to the internet. Access between nodes must be ensured over a private IP address.
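On RHEL, opening these ports with firewalld might look like the following minimal sketch. It adds the ports to the default zone for brevity; in production, restrict the intra-cluster ports to your private subnet (for example, with a dedicated zone or rich rules) per the requirements above. The port numbers assume the standard values from the table.

```bash
#!/usr/bin/env bash
# Minimal firewalld sketch for a cluster node (RHEL).
# Adds ports to the default zone; scope intra-cluster ports to your
# private subnet in production.

# Intra-cluster ports: etcd client/peer (2379, 2380), Kube API (6443),
# node join (9345), kubelet/metrics server (10250), Flannel VXLAN (8472/udp)
for p in 2379/tcp 2380/tcp 6443/tcp 9345/tcp 10250/tcp 8472/udp; do
  sudo firewall-cmd --permanent --add-port="$p"
done

# HTTPS for accessing Automation Suite (nodes and load balancer)
sudo firewall-cmd --permanent --add-port=443/tcp

# Apply the permanent rules
sudo firewall-cmd --reload
```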
The following additional ports are required in offline installations:
Port | Protocol | Source | Purpose | Requirements
---|---|---|---|---
 | TCP | All nodes in the cluster | Required for sending system email notifications | Do not open this port to the internet. Access between nodes and the SMTP server must be ensured over a private IP address.
 | TCP | All nodes in the cluster | Required for sending system email notifications | Do not open this port to the internet. Access between nodes and the SMTP server must be ensured over a private IP address.
30070 ¹ | TCP | The machine on which you plan to trigger the installation or upgrade | For accessing the temporary registry during installation and upgrade using HTTP | Traffic on this port must be forwarded to the Server Pool.
¹ Port 30070 must be open on the machine on which you plan to trigger the installation or upgrade.
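To confirm that the temporary registry is reachable on port 30070 from the machine triggering the installation, a quick probe such as the following can help. This is a sketch: the server IP is a placeholder, and `/v2/` is the standard Docker/OCI registry API root, so any HTTP status (even 401) indicates the port is open and a registry is listening.

```bash
# Sketch: probe the temporary registry port from the installer machine.
SERVER_IP="10.0.0.10"  # placeholder: a server node behind the Server Pool

# Print the HTTP status; a connection failure prints the message instead.
curl -s -o /dev/null -w "%{http_code}\n" "http://${SERVER_IP}:30070/v2/" \
  || echo "Temporary registry not reachable on port 30070"
```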
Exposing port 6443 outside the cluster boundary is mandatory if there is a direct connection to the Kube API.
Port 9345 is used by nodes to discover existing nodes and to join the cluster in multi-node deployments. To keep the high availability discovery mechanism running, we recommend exposing it via the load balancer with a health check.
Additionally, make sure you have connectivity from all nodes to the SQL server. Do not expose the SQL server on one of the Istio reserved ports, as it may lead to connection failures.
If you have a firewall set up in the network, make sure that it has these ports open and allows traffic according to the aforementioned requirements.
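To verify the firewall rules end to end, you can probe the TCP ports from a peer node using bash's built-in `/dev/tcp` redirection, so no extra tooling is required. This is a sketch: every address below is a placeholder, and the load balancer checks cover only the ports the tables route through it (443 and 9345).

```bash
#!/usr/bin/env bash
# Sketch: validate TCP reachability between nodes, via the load balancer,
# and to SQL Server. All addresses are placeholders.
PEER_IP="10.0.0.11"        # another cluster node (private IP)
LB_HOST="lb.example.com"   # load balancer FQDN or VIP
SQL_HOST="sql.example.com" # SQL Server host

probe() {  # probe <host> <port>
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "OK      $1:$2"
  else
    echo "BLOCKED $1:$2"
  fi
}

# Node-to-node ports
for port in 2379 2380 6443 9345 10250; do probe "$PEER_IP" "$port"; done

# Ports that must also be reachable through the load balancer
for port in 443 9345; do probe "$LB_HOST" "$port"; done

# SQL Server (default port 1433)
probe "$SQL_HOST" 1433

# Note: UDP 8472 (Flannel VXLAN) cannot be checked with /dev/tcp;
# verify it directly in your firewall configuration.
```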