UiPath Automation Suite

The UiPath Automation Suite Guide

Configuring the machines

Configuring the disk


Disk requirements


Before the installation, you must partition and configure the disk using LVM, so that its size can be altered easily and without any data migration or data loss.

Disk partitioning


The default partitioning structure on RHEL machines is not suitable for installing Kubernetes, because the Kubernetes infrastructure is usually installed under the /var partition, which by default is allocated only 8 GB of space.

📘

Note:

The supported disk formats are ext4 and xfs.

All partitions must be created using LVM. This ensures that cluster data can reside on a different disk while still being viewed coherently. It also makes it possible to extend the partition size in the future without the risk of data migration or data loss.
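For illustration only, creating one such LVM-backed mount point might look like the following sketch. The device name /dev/sdb and the volume group name are assumptions, not part of the official procedure; check your actual devices with lsblk and use the provided script (described below) rather than running ad-hoc commands in production.

```
# Illustrative sketch: turn a raw disk into an LVM-backed /var/lib/rancher.
pvcreate /dev/sdb                          # register the disk with LVM
vgcreate cluster /dev/sdb                  # create a volume group on it
lvcreate -L 190G -n rancher cluster        # carve out a 190 GiB logical volume
mkfs.ext4 /dev/mapper/cluster-rancher      # ext4 and xfs are the supported formats
mkdir -p /var/lib/rancher
mount /dev/mapper/cluster-rancher /var/lib/rancher
```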

For the RHEL OS, ensure that the machine has the following minimum mount point sizes.

Online

| Disk | Partition | Size | Purpose |
|------|-----------|------|---------|
| Cluster disk | /var/lib/rancher | 190 GiB | Rancher folder stores container images and layers. |
| Cluster disk | /var/lib/kubelet | 56 GiB | Kubelet folder stores runtime Kubernetes manifests such as secrets, configmaps, and emptyDir. |
| Cluster disk | /opt/UiPathAutomationSuite | 10 GiB | Installer binary |
| etcd disk | /var/lib/rancher/rke2/server/db | 16 GiB | Distributed database for Kubernetes |
| Data disk | /datadisk | 512 GiB (Basic installation), 2 TiB (Complete installation) | Block storage abstraction |
Offline

📘

Note:

The requirements for offline installations are the same as for online installations, except for the first machine, on which you run the installer. That machine must meet the following requirements.

The extra space is needed to unpack the offline bundle.

| Disk | Partition | Size | Purpose |
|------|-----------|------|---------|
| Cluster disk | /var/lib/rancher | 190 GiB | Rancher folder stores container images and layers. |
| Cluster disk | /var/lib/kubelet | 56 GiB | Kubelet folder stores runtime Kubernetes manifests such as secrets, configmaps, and emptyDir. |
| Cluster disk | /opt/UiPathAutomationSuite | 10 GiB | Installer binary |
| etcd disk | /var/lib/rancher/rke2/server/db | 16 GiB | Distributed database for Kubernetes |
| Data disk | /datadisk | 512 GiB (Basic installation), 2 TiB (Complete installation) | Block storage abstraction |
| UiPath bundle disk | /uipath | 512 GiB | Air-gapped bundle |
 

📘

Note:

The data and etcd disks should be separate physical disks. This physically isolates them from other cluster workloads and activity, while also enhancing the performance and stability of the cluster.

See Configuring the disk for details on how to use the sample script to partition and configure the disk before installation.

 

Using the script to configure the disk


Downloading the script

You can use the configureUiPathDisks.sh script to configure and partition the disk. To download the script, run the following command:

wget -O ~/configureUiPathDisks.sh https://download.uipath.com/automation-suite/configureUiPathDisks.sh

Running the script

You can use the configureUiPathDisks.sh script for the following purposes:

  • configure the disks and mount points for a new Automation Suite cluster installation;
  • resize the data disk post-installation.

For more details on the script usage, run the following command:

sudo ./configureUiPathDisks.sh --help
***************************************************************************************

Utility to configure the disk for UiPath Automation Suite Installation.
Run this script to configure the disks on new machine or to extend the size of datadisk

Arguments
  -n|--node-type                  NodeType, Possible values: agent, server. Default to server
  -i|--install-type               Installation mode, Possible values: online, offline. Default to online
  -c|--cluster-disk-name          Device to host rancher and  kubelet. Ex: /dev/sdb
  -e|--etcd-disk-name             Device to host etcd, Not required for agent node. Ex: /dev/sdb
  -d|--data-disk-name             Device to host datadisk, Not required for agent node. Ex: /dev/sdc
  -b|--bundle-disk-name           Device to host the uipath bundle. 
                                    Only required for offline installation on 1st server node 
  -f|--complete-suite             Installing complete product suite or any of these products: 
                                    aicenter, apps, taskmining, documentunderstanding. 
                                    This will configure the datadisk volume to be 2TiB instead of 512Gi.
  -p|--primary-server             Is this machine is first server machine? Applicable only for airgap install.
                                    This is the machine on which UiPath AutomationSuite bundle will be installed.
                                    Default to false
  --extend-data-disk              Extend the datadisk. Either attach new disk or resize the exiting datadisk
  --resize                        Used in conjunction of with --extend-data-disk to resize the exiting volume,
                                    instead of adding new volume               
  -d|--debug                      Run in debug
  -h|--help                       Display help

ExampleUsage:
  configureUiPathDisks.sh --node-type server --install-type online \
    --cluster-disk-name /dev/sdb --etcd-disk-name /dev/sdc \
    --data-disk-name /dev/sdd

***************************************************************************************

 

Configuring the disk for a single-node setup


Online

To configure the disk in an online single-node setup, run the following command on the machine:

./configureUiPathDisks.sh --cluster-disk-name name_of_cluster_disk \
  --etcd-disk-name name_of_etcd_disk \
  --data-disk-name name_of_data_disk

Offline

In an offline installation, you must load the product images into the Docker registry. For that, additional storage in the form of a separate disk is required to host the UiPath product bundle.

To configure the disk in an offline single-node setup, run the following command on the machine:

./configureUiPathDisks.sh --cluster-disk-name name_of_cluster_disk \
  --etcd-disk-name name_of_etcd_disk \
  --data-disk-name name_of_data_disk \
  --bundle-disk-name name_of_uipath_bundle_disk \
  --primary-server \
  --install-type offline

📘

Note:

If you have an additional agent node for Task Mining or a GPU, only a cluster disk is required on that node. You can run the script by providing the cluster disk name and specifying the node type as agent.

 

Extending the data disk post-installation


To extend the data disk, you can either attach a new physical disk or resize the existing disk.

Adding a new disk

To extend the data disk using the newly attached disk, run the following command on the server machine:

./configureUiPathDisks.sh --data-disk-name name_of_data_disk \
  --extend-data-disk

Resizing the existing disk

To extend the data disk by resizing an existing disk, run the following command on the server machine:

./configureUiPathDisks.sh --extend-data-disk --resize

 

Validating disk mounts


  1. Validate that /etc/fstab is correctly configured to handle rebooting of the system.

📘

Note:

Make sure that the etcd and data disk mount points are added to the fstab file.

If you have separate disk partitions for /var/lib/rancher and /var/lib/kubelet, fstab should also contain these two folders. Make sure to include the nofail option in those fstab entries so that a disk failure does not affect the VM boot.
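For illustration, fstab entries for these mount points might look like the following sketch. The device paths are assumptions (they depend on how your LVM volume groups were named), so substitute your own device paths or UUIDs:

```
# Hypothetical /etc/fstab entries; device paths are illustrative only.
/dev/mapper/cluster-rancher  /var/lib/rancher                 ext4  defaults,nofail  0 0
/dev/mapper/cluster-kubelet  /var/lib/kubelet                 ext4  defaults,nofail  0 0
/dev/mapper/etcd-db          /var/lib/rancher/rke2/server/db  ext4  defaults,nofail  0 0
/dev/mapper/data-datadisk    /datadisk                        ext4  defaults,nofail  0 0
```

The nofail option on each entry lets the VM finish booting even if one of these devices fails to mount.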

  2. Validate that the disks are mounted correctly by running the following command:
mount -afv
  3. You should get the following response:
/datadisk                              : already mounted
/var/lib/rancher/rke2/server/db        : already mounted
/var/lib/rancher                       : already mounted
/var/lib/kubelet                       : already mounted

 

Enabling ports


Make sure that you have the following ports enabled on your firewall for each source.

| Port | Protocol | Source | Purpose | Requirements |
|------|----------|--------|---------|--------------|
| 22 | TCP | Jump server / client machine | For SSH (installation, cluster management debugging) | Do not open this port to the internet. Allow access only from the client machine or jump server. |
| 80 | TCP | | Offline installation only: required for sending system email notifications. | |
| 443 | TCP | All nodes in a cluster + load balancer | For HTTPS (accessing Automation Suite) | This port should have inbound and outbound connectivity from all the nodes in the cluster and the load balancer. |
| 587 | TCP | | Offline installation only: required for sending system email notifications. | |
If you enabled Task Mining and provisioned a dedicated Task Mining node, and/or provisioned a dedicated node with GPU support, make sure that, in addition to the above, you have the following ports enabled on your firewall:

| Port | Protocol | Source | Purpose | Requirements |
|------|----------|--------|---------|--------------|
| 2379 | TCP | All nodes in a cluster | etcd client port | Do not expose to the internet. Access between nodes over a private IP address is sufficient. |
| 2380 | TCP | All nodes in a cluster | etcd peer port | Do not expose to the internet. Access between nodes over a private IP address is sufficient. |
| 6443 | TCP | All nodes in a cluster + load balancer | For accessing the Kube API using HTTPS; required for node joining | This port should have inbound and outbound connectivity from all nodes in the cluster and the load balancer. |
| 8472 | UDP | All nodes in a cluster | Required for Flannel (VXLAN) | Do not expose to the internet. Access between nodes over a private IP address is sufficient. |
| 9345 | TCP | All nodes in a cluster + load balancer | For accessing the Kube API using HTTP; required for node joining | This port should have inbound and outbound connectivity from all nodes in the cluster and the load balancer. |
| 10250 | TCP | All nodes in a cluster | kubelet / metrics server | Do not expose to the internet. Access between nodes over a private IP address is sufficient. |
| 30000 - 32767 | TCP | All nodes in a cluster | NodePort port range for internal communication between nodes in a cluster | Do not expose to the internet. Access between nodes over a private IP address is sufficient. |

🚧

Important!

Ports 6443 and 9345 need to be accessible from outside the cluster, but the remaining ports should not be exposed outside the cluster. Run your nodes behind a firewall / security group.

Also ensure that you have connectivity from all nodes to the SQL server.

If a firewall is set up on the network, make sure that it has these ports open and allows traffic according to the requirements mentioned above.

Do not use Istio-reserved ports when configuring your service pods, as doing so may lead to connection failures.
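As a hedged sketch of what opening the externally reachable ports can look like on RHEL with firewalld (the default zone and the decision to open only these three ports are assumptions; keep the internal-only ports restricted to the cluster's private subnet as described in the tables above):

```
# Illustrative firewalld commands; adjust zones and sources to your topology.
firewall-cmd --permanent --add-port=443/tcp    # HTTPS access to Automation Suite
firewall-cmd --permanent --add-port=6443/tcp   # Kube API over HTTPS / node joining
firewall-cmd --permanent --add-port=9345/tcp   # Kube API over HTTP / node joining
firewall-cmd --reload
```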

 

Optional: Configuring the proxy server


To configure a proxy, you need to perform additional configuration steps while setting up your environment with the prerequisites, and again during the advanced configuration phase of the installation.

The following steps are required when setting up your environment.

📘

Note:

We currently do not support an HTTPS proxy with self-signed certificates. Make sure you use a publicly trusted certificate if you are configuring the proxy.

Step 1: Enabling ports on the Virtual Network

Make sure that you have the following rules enabled on your network security group for the given Virtual Network.

| Source | Destination | Route via proxy | Port | Description |
|--------|-------------|-----------------|------|-------------|
| Virtual Network | SQL | No | SQL server port | Required for SQL Server. |
| Virtual Network | Load Balancer | No | 9345, 6443 | Required to add new nodes to the cluster. |
| Virtual Network | Cluster (subnet) | No | All ports | Required for communication over a private IP range. |
| Virtual Network | alm.<fqdn> | No | 443 | Required for login and for using the ArgoCD client during deployment. |
| Virtual Network | Proxy server | Yes | All ports | Required to route traffic to the proxy server. |
| Virtual Network | NameServer | No | All ports | Most cloud services, such as Azure and AWS, use this to resolve DNS queries, and consider it a private IP. |
| Virtual Network | MetaDataServer | No | All ports | Most cloud services, such as Azure and AWS, use the IP address 169.254.169.254 to fetch machine metadata. |

Step 2: Adding proxy configuration to each node

When configuring the nodes, you need to add the proxy configuration to each node that is part of the cluster. This step is required to route outbound traffic from the node via the proxy server.

  • Add the following configuration in /etc/environment:
HTTP_PROXY=http://<PROXY-SERVER-IP>:<PROXY-PORT>
http_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
HTTPS_PROXY=http://<PROXY-SERVER-IP>:<PROXY-PORT>
https_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
NO_PROXY=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>
no_proxy=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>
  • Add the following configuration in /etc/wgetrc:
HTTP_PROXY=http://<PROXY-SERVER-IP>:<PROXY-PORT>
http_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
HTTPS_PROXY=http://<PROXY-SERVER-IP>:<PROXY-PORT>
https_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
NO_PROXY=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>
no_proxy=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>
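As a purely illustrative example, assuming a proxy server at 10.0.0.10:3128, an FQDN of automationsuite.example.com, a fixed RKE2 address of 10.0.0.5, and a private subnet of 10.0.0.0/24 (all hypothetical values), the entries might look like:

```
# Hypothetical values; substitute your own proxy address, FQDN, and subnet.
HTTP_PROXY=http://10.0.0.10:3128
http_proxy=http://10.0.0.10:3128
HTTPS_PROXY=http://10.0.0.10:3128
https_proxy=http://10.0.0.10:3128
NO_PROXY=alm.automationsuite.example.com,10.0.0.5,169.254.169.254,10.0.0.0/24,localhost
no_proxy=alm.automationsuite.example.com,10.0.0.5,169.254.169.254,10.0.0.0/24,localhost
```

Note that 169.254.169.254 is the metadata server address used by most cloud services, as described above; the other values are placeholders.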

| Mandatory parameter | Description |
|---------------------|-------------|
| http_proxy | Used to route HTTP outbound requests from the node. This should be the proxy server FQDN and port. |
| https_proxy | Used to route HTTPS outbound requests from the node. This should be the proxy server FQDN and port. |
| no_proxy | Comma-separated list of hosts, IP addresses, or IP ranges in CIDR format that you do not want to route via the proxy server. This should include the private subnet range, the SQL server host, the named server address, and the metadata server address: alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address> |

named server address - Most cloud services, such as Azure and AWS, use this to resolve DNS queries.
metadata server address - Most cloud services, such as Azure and AWS, use the IP address 169.254.169.254 to fetch machine metadata.

  • Verify that the proxy settings are properly configured by running the following commands:
curl -v $HTTP_PROXY
curl -v <fixed_rke2_address>:9345

🚧

Important!

Once you meet the proxy server requirements, make sure to continue with the proxy configuration during installation. Follow the steps in Optional: Configuring the proxy server to ensure the proxy server is set up properly.
