
Configuring the machines

This page provides instructions on how to configure the machines in a single-node evaluation environment.

Configuring the disk


🚧

Important!

To prevent data loss, ensure the infrastructure you use does not automatically delete cluster disks on cluster reboot or shutdown. If this capability is enabled, make sure to disable it.

You can configure and partition the disks using the configureUiPathDisks.sh script. For details, see the following sections.

Disk requirements


Before the installation, you must partition and configure the disk using LVM, so that you can resize it easily, without data migration or data loss.

Disk partitioning


The default partitioning structure on RHEL machines is not suitable for installing Kubernetes. This is because Kubernetes infrastructure is usually installed under the /var partition, which by default is allocated only 8 GB of space.
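To see how much space is currently allocated to /var on your machine, you can run a quick check (a minimal sketch; the output depends on your disk layout):

# Show the size and usage of the filesystem backing /var
df -h /var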

📘

Note:

The supported disk formats are ext4 and xfs.

All partitions must be created using LVM. This ensures that cluster data can reside on a different disk but still be viewed coherently. It also lets you extend a partition in the future without the risk of data migration or data loss.
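For example, extending an LVM-backed partition later takes only two commands (a sketch; it assumes a volume group named datavg with a logical volume datalv mounted at /datadisk, which are hypothetical names):

# Grow the logical volume by 100 GiB, then grow the filesystem to match
sudo lvextend -L +100G /dev/datavg/datalv
sudo resize2fs /dev/datavg/datalv    # for ext4
# For xfs, grow via the mount point instead: sudo xfs_growfs /datadisk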

For the RHEL OS, make sure the machine has the following minimum mount point sizes.

Online

| Disk label | Partition | Size | Purpose |
| --- | --- | --- | --- |
| Cluster disk | /var/lib/rancher | 190 GiB | Rancher folder stores container images and layers |
| Cluster disk | /var/lib/kubelet | 56 GiB | Kubelet folder stores runtime Kubernetes configurations such as secrets, configmaps, and emptyDir |
| Cluster disk | /opt/UiPathAutomationSuite | 10 GiB | Installer binary |
| etcd disk | /var/lib/rancher/rke2/server/db | 16 GiB | Distributed database for Kubernetes |
| Block storage | /datadisk | 512 GiB | Block storage abstraction |
| Objectstore¹ | N/A | 512 GiB | In-cluster objectstore |

¹ This refers to the in-cluster objectstore and is not applicable if you use an external objectstore.

Offline

📘

Note:

The requirements for offline are the same as for online, except for the first machine, on which you run the install; that machine needs the following requirements.

The extra space is needed to unpack the offline bundle.

| Disk label | Partition | Size | Purpose |
| --- | --- | --- | --- |
| Cluster disk | /var/lib/rancher | 190 GiB | Rancher folder stores container images and layers |
| Cluster disk | /var/lib/kubelet | 56 GiB | Kubelet folder stores runtime Kubernetes configurations such as secrets, configmaps, and emptyDir |
| Cluster disk | /opt/UiPathAutomationSuite | 10 GiB | Installer binary |
| etcd disk | /var/lib/rancher/rke2/server/db | 16 GiB | Distributed database for Kubernetes |
| Block storage | /datadisk | 512 GiB | Block storage abstraction |
| Objectstore¹ | N/A | 512 GiB | In-cluster objectstore |
| UiPath bundle disk | /uipath | 512 GiB | Air-gapped bundle |

¹ This refers to the in-cluster objectstore and is not applicable if you use an external objectstore.

📘

Note:

The data and etcd disks should be separate physical disks. This physically isolates them from other cluster workloads and activity, while also enhancing the performance and stability of the cluster.

See the following section for details on how to use the sample script to partition and configure the disk before installation.

 

Using the script to configure the disk


Downloading the script

📘

Note:

For offline installations, you must perform this step on a machine with access to the internet and to the air-gapped machines where Automation Suite is deployed. Copy the file from the online machine to the target machine.
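For example, you can copy the script over SSH (a sketch; the user name and host are hypothetical):

# Copy the disk configuration script from the online machine to the air-gapped machine
scp ./configureUiPathDisks.sh admin@airgapped-machine:~/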

You can use the configureUiPathDisks.sh script to configure and partition the disk.
For download instructions, see configureUiPathDisks.sh.

Running the script

You can use the configureUiPathDisks.sh script for the following purposes:

  • configure the disks and mount points for a new Automation Suite cluster installation;
  • resize the data disk post-installation.

To make the script executable, run:

chmod +x ./configureUiPathDisks.sh

For more details on the script usage, run:

sudo ./configureUiPathDisks.sh --help
***************************************************************************************

Utility to configure the disk for UiPath Automation Suite Installation.
Run this script to configure the disks on new machine or to extend the size of datadisk

Arguments
  -n|--node-type                  NodeType, Possible values: agent, server. Default to server.
  -i|--install-type               Installation mode, Possible values: online, offline. Default to online.
  -c|--cluster-disk-name          Device to host rancher and  kubelet. Ex: /dev/sdb.
  -e|--etcd-disk-name             Device to host etcd, Not required for agent node. Ex: /dev/sdb.
  -R|--ceph-raw-disk-name         Device to host ceph OSD, Not required for agent node. Ex: /dev/sdm
  -l|--data-disk-name             Device to host datadisk, Not required for agent node. Ex: /dev/sdc.
  -b|--bundle-disk-name           Device to host the uipath bundle.
                                    Only required for offline installation on 1st server node.
  -k|--robot-package-disk-name    Device to host robot package cache folder
                                    Only required for Automation Suite Robots dedicated node when 'packagecaching' is enabled
  -P|--robot-package-path         Path to robot package cache folder
                                    (defaults to '/uipath_asrobots_package_cache' if not set)
  -f|--complete-suite             Installing complete product suite or any of these products:
                                    aicenter, apps, taskmining, documentunderstanding.
                                    This will configure the datadisk volume to be 2TiB instead of 512Gi.
  -p|--primary-server             Is this machine is first server machine? Applicable only for airgap install.
                                    This is the machine on which UiPath AutomationSuite bundle will be installed.
                                    Default to false.
  -x|--extend-data-disk           Extend the datadisk. Either attach new disk or resize the exiting datadisk.
  -r|--resize                     Used in conjunction of with --extend-data-disk to resize the exiting volume,
                                    instead of adding new volume.
  -d|--debug                      Run in debug.
  -h|--help                       Display help.

ExampleUsage:

  configureUiPathDisks.sh --node-type server --install-type online \
    --cluster-disk-name /dev/sdb --etcd-disk-name /dev/sdc \
    --data-disk-name /dev/sdd

  configureUiPathDisks.sh --data-disk-name /dev/sdh \
    --extend-data-disk

***************************************************************************************

 

Configuring the disk for a single-node evaluation setup


Online

To configure the disk in an online single-node evaluation setup, run the following command on the machine:

./configureUiPathDisks.sh --cluster-disk-name name_of_cluster_disk \
  --etcd-disk-name name_of_etcd_disk \
  --data-disk-name name_of_data_disk
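If you are unsure which device names to pass, you can list the attached block devices first and then substitute the names of the unpartitioned disks (a sketch; the /dev/sdb, /dev/sdc, and /dev/sdd names are assumptions and depend on your environment):

# Identify the raw disks by their sizes; disks without partitions or mount points are the candidates
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Example invocation with the device names found above
./configureUiPathDisks.sh --cluster-disk-name /dev/sdb \
  --etcd-disk-name /dev/sdc \
  --data-disk-name /dev/sdd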

Offline

In an offline installation, you must load the product images into the docker registry. For that, additional storage in the form of a separate disk is required to host the UiPath product bundle.

To configure the disk in an offline single-node evaluation setup, run the following command on the machine:

./configureUiPathDisks.sh --cluster-disk-name name_of_cluster_disk \
  --etcd-disk-name name_of_etcd_disk \
  --data-disk-name name_of_data_disk \
  --bundle-disk-name name_of_uipath_bundle_disk \
  --primary-server \
  --install-type offline

📘

Note:

If you have an additional agent node for Task Mining or a GPU, only a cluster disk is required on that node. You can run the script by providing the cluster disk name and specifying the node type as agent, as shown in the sketch below.
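A minimal sketch of such an invocation (the /dev/sdb device name is an assumption):

# Agent nodes need only the cluster disk; etcd and data disks are not required
./configureUiPathDisks.sh --cluster-disk-name /dev/sdb --node-type agent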

 

Configuring the objectstore disk


📘

Note:

A known Azure issue incorrectly marks the Azure disk as non-SSD. If Azure is your cloud provider and you want to configure the objectstore disk, follow the instructions in Troubleshooting.

You can configure or expand the storage for the in-cluster objectstore by running the following script:

./configureUiPathDisks.sh --ceph-raw-disk-name name_ceph_raw_disk

To increase the size of your in-cluster storage post-installation, rerun the same command.

 

Extending the data disk post-installation


To extend the data disk, you can attach a new physical disk or resize the existing disk.

Adding a new disk

To extend the data disk using a newly attached disk, run the following command on the server machine:

./configureUiPathDisks.sh --data-disk-name name_of_data_disk \
  --extend-data-disk

Resizing the existing disk

To extend the data disk by resizing an existing disk, run the following command on the server machine:

./configureUiPathDisks.sh --extend-data-disk --resize
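After either operation, you can confirm that the expanded capacity is visible (a minimal check; the output depends on your environment):

# Verify that the data disk mount reports the new size
df -h /datadisk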

 

Validating disk mounts


  1. Validate that /etc/fstab is correctly configured to handle system reboots.

📘

Note:

Make sure that the etcd and datadisk mount points are added to the fstab file.

If you have separate disk partitions for /var/lib/rancher and /var/lib/kubelet, fstab should also contain these two folders. Make sure to include the nofail option in those fstab entries so that disk failures do not affect the VM boot (sample entries are shown after these steps).

  2. Validate that the disks are mounted correctly by running the following command:
mount -afv
  3. You should get the following response:
/datadisk                              : already mounted
/var/lib/rancher/rke2/server/db        : already mounted
/var/lib/rancher                       : already mounted
/var/lib/kubelet                       : already mounted
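The fstab entries might look like the following sketch (the LVM mapper names are hypothetical; use the device paths or UUIDs from your own machine):

# /etc/fstab (excerpt); nofail lets the VM boot even if a disk is unavailable
/dev/mapper/datavg-datalv      /datadisk                        ext4  defaults,nofail  0 0
/dev/mapper/etcdvg-etcdlv      /var/lib/rancher/rke2/server/db  ext4  defaults,nofail  0 0
/dev/mapper/rancher_vg-rancher /var/lib/rancher                 ext4  defaults,nofail  0 0
/dev/mapper/rancher_vg-kubelet /var/lib/kubelet                 ext4  defaults,nofail  0 0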

 

Enabling ports


Make sure that you have the following ports enabled on your firewall for each source.

| Port | Protocol | Source | Purpose | Requirements |
| --- | --- | --- | --- | --- |
| 22 | TCP | Jump server / client machine | For SSH (installation, cluster management debugging) | Do not open this port to the internet. Allow access to the client machine or jump server. |
| 80 | TCP | – | Offline installation only: required for sending system email notifications. | – |
| 443 | TCP | All nodes in a cluster + load balancer | For HTTPS (accessing Automation Suite) | This port should have inbound and outbound connectivity from all the nodes in the cluster and the load balancer. |
| 587 | TCP | – | Offline installation only: required for sending system email notifications. | – |
| 9090 | TCP | All nodes in the cluster | Used by Cilium for monitoring and handling pod crashes | This port should have inbound and outbound connectivity from all the nodes in the cluster. |
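On RHEL, opening these ports with firewalld might look like the following sketch (assuming the default zone; restrict sources according to the requirements above):

# Open the HTTPS and Cilium monitoring ports, then reload the firewall
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=9090/tcp
sudo firewall-cmd --reload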

If you enabled Task Mining and provisioned a dedicated Task Mining node and/or provisioned a dedicated node with GPU support, make sure that in addition to the above you have the following ports enabled on your firewall:

| Port | Protocol | Source | Purpose | Requirements |
| --- | --- | --- | --- | --- |
| 2379 | TCP | All nodes in a cluster | etcd client port | Must not be exposed to the internet. Access between nodes over a private IP address should be sufficient. |
| 2380 | TCP | All nodes in a cluster | etcd peer port | Must not be exposed to the internet. Access between nodes over a private IP address should be sufficient. |
| 6443 | TCP | All nodes in a cluster + load balancer | For accessing the Kube API using HTTPS; required for node joining | This port should have inbound and outbound connectivity from all nodes in the cluster and the load balancer. |
| 8472 | UDP | All nodes in a cluster | Required for Flannel (VXLAN) | Must not be exposed to the internet. Access between nodes over a private IP address should be sufficient. |
| 9345 | TCP | All nodes in a cluster + load balancer | For accessing the Kube API using HTTP; required for node joining | This port should have inbound and outbound connectivity from all nodes in the cluster and the load balancer. |
| 10250 | TCP | All nodes in a cluster | kubelet / metrics server | Must not be exposed to the internet. Access between nodes over a private IP address should be sufficient. |
| 30071 | TCP | All nodes in a cluster | NodePort for internal communication between nodes in a cluster | Must not be exposed to the internet. Access between nodes over a private IP address should be sufficient. |

🚧

Important!

Ports 6443 and 9345 need to be accessible from outside the cluster, but the remaining ports should not be exposed outside the cluster. Run your nodes behind a firewall or security group.

Also ensure that you have connectivity from all nodes to the SQL Server.
Do not expose the SQL Server on one of the Istio reserved ports, as this may lead to connection failures.
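A quick way to check that the SQL Server is reachable from a node (a sketch; the host name is a placeholder and 1433 is the default SQL Server port):

# Test TCP connectivity from this node to the SQL Server
nc -zv <sql_server_host> 1433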

If you have a firewall setup on the network, make sure that it has these ports open and allows traffic according to the requirements mentioned above.

 

Installing the required RPM packages


To install and validate the required RPM packages, you can use any of the following tools:

  • the install-uipath.sh manual installer. In online environments, it installs and validates all RPM packages by default.
  • the installUiPathAS.sh interactive installer. It validates all RPM packages via the install-uipath.sh installer.
  • the validateUiPathASReadiness.sh script:
    • to validate the RPM packages, run: validateUiPathASReadiness.sh validate-packages;
    • to install the RPM packages in an online installation, run: validateUiPathASReadiness.sh install-packages;
    • to install the RPM packages in an offline installation, run: validateUiPathASReadiness.sh install-packages --install-type offline.

For a list of RPM package requirements, see Hardware and software requirements.

 

Optional: Configuring the proxy server


To configure a proxy, you need to perform additional configuration steps while setting up your environment with the prerequisites, and during the advanced configuration phase of the installation.

The following steps are required when setting up your environment.

📘

Note:

We currently do not support HTTPS proxies with self-signed certificates. Make sure you use a publicly trusted certificate if you are configuring the proxy.

Step 1: Enabling ports on the Virtual Network

Make sure that you have the following rules enabled on your network security group for the given Virtual Network.

| Source | Destination | Route via proxy | Port | Description |
| --- | --- | --- | --- | --- |
| Virtual Network | SQL | No | SQL server port | Required for SQL Server. |
| Virtual Network | Load Balancer | No | 9345, 6443 | Required to add new nodes to the cluster. |
| Virtual Network | Cluster (subnet) | No | All ports | Required for communication over a private IP range. |
| Virtual Network | alm.<fqdn> | No | 443 | Required for login and for using the ArgoCD client during deployment. |
| Virtual Network | Proxy Server | Yes | All ports | Required to route traffic to the proxy server. |
| Virtual Network | NameServer | No | All ports | Most cloud services, such as Azure and AWS, use this to resolve DNS queries. |
| Virtual Network | MetaDataServer | No | All ports | Most cloud services, such as Azure and AWS, use the IP address 169.254.169.254 to fetch machine metadata. |
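You can spot-check the non-proxied paths from a node with simple TCP probes (a sketch; the host names in angle brackets are placeholders):

# These connections must succeed without going through the proxy
nc -zv <load_balancer_address> 9345
nc -zv <load_balancer_address> 6443
nc -zv <sql_server_host> <sql_server_port>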

Step 2: Adding proxy configuration to each node

When configuring the nodes, you need to add the proxy configuration to each node that is part of the cluster. This step is required to route outbound traffic from the node via the proxy server.

  1. Add the following configuration to /etc/environment (a filled-in example is shown after these steps):
http_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
https_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
no_proxy=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>
  2. Add the following configuration to /etc/wgetrc:
http_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
https_proxy=http://<PROXY-SERVER-IP>:<PROXY-PORT>
no_proxy=alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>,<private_subnet_ip>,localhost,<comma-separated list of IPs that should not go through the proxy server>

| Mandatory parameters | Description |
| --- | --- |
| http_proxy | Used to route HTTP outbound requests from the node. This should be the proxy server FQDN and port. |
| https_proxy | Used to route HTTPS outbound requests from the node. This should be the proxy server FQDN and port. |
| no_proxy | Comma-separated list of hosts and IP addresses that you do not want to route via the proxy server. This should include the private subnet, the SQL server host, the named server address, and the metadata server address: alm.<fqdn>,<fixed_rke2_address>,<named server address>,<metadata server address>. The named server address is what most cloud services, such as Azure and AWS, use to resolve DNS queries. The metadata server address is the IP address 169.254.169.254, which most cloud services, such as Azure and AWS, use to fetch machine metadata. |

  3. Verify that the proxy settings are properly configured by running the following commands:
curl -v $HTTP_PROXY
curl -v <fixed_rke_address>:9345
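As a concrete illustration, a filled-in /etc/environment might look like the following sketch (every value is hypothetical; substitute the addresses from your own environment):

# Example only: proxy at 10.0.0.10:3128, load balancer at 10.0.0.100,
# Azure name server 168.63.129.16, metadata server 169.254.169.254
http_proxy=http://10.0.0.10:3128
https_proxy=http://10.0.0.10:3128
no_proxy=alm.automationsuite.example.com,10.0.0.100,168.63.129.16,169.254.169.254,10.0.0.0/24,localhost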

🚧

Important!

Once you meet the proxy server requirements, make sure to continue with the proxy configuration during installation. Follow the steps in Optional: Configuring the proxy server to ensure the proxy server is set up properly.
