Important
Automation Suite currently supports only NVIDIA GPU drivers. See the list of GPU-supported operating systems.
For more on the cloud-specific instance types, see the following:
Before adding a dedicated agent node with GPU support, make sure to check Hardware requirements.
For more examples on how to deploy NVIDIA CUDA on a GPU, check this page.
Installing a GPU driver on the machine
Important
The GPU driver is stored under the /opt/nvidia and /usr folders. It is highly recommended that these folders have at least 5 GiB and 15 GiB of space, respectively, on the GPU agent machine.
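A quick, optional way to check the available space is shown below (a minimal sketch; /opt/nvidia may not exist until the driver is installed, so the parent mount is checked instead):
# Show free space on the file systems backing /opt and /usr
df -h /opt /usr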
- To install the GPU driver on the agent node, run the following commands:
sudo yum install kernel kernel-tools kernel-headers kernel-devel
sudo reboot
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo sed 's/$releasever/8/g' -i /etc/yum.repos.d/epel.repo
sudo sed 's/$releasever/8/g' -i /etc/yum.repos.d/epel-modular.repo
sudo yum config-manager --add-repo http://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
sudo yum install cuda
- To install the container toolkit, run the following commands:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
&& curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
sudo dnf clean expire-cache && sudo dnf install -y nvidia-container-toolkit
sudo yum install -y nvidia-container-runtime.x86_64
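To confirm that the container toolkit packages were installed, you can query the RPM database; a minimal check using the package names from the commands above:
# Verify that the NVIDIA container toolkit and runtime packages are present
rpm -q nvidia-container-toolkit nvidia-container-runtime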
Verifying that the drivers are installed properly
Run the sudo nvidia-smi command on the node to verify that the drivers were installed properly.
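Beyond the summary table, nvidia-smi can also list each detected GPU; a minimal sketch (the output depends on the installed driver and GPU model):
# Print the driver/GPU summary table
sudo nvidia-smi
# List every detected GPU with its UUID
sudo nvidia-smi -L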

Note:
Once the cluster has been provisioned, additional steps are required to configure the provisioned GPUs.
At this point, the GPU drivers have been installed and the GPU nodes have been added to the cluster.
Adding a GPU node to the cluster
Step 1: Configuring the machine
Follow the steps for configuring the machine to ensure the disk is partitioned correctly and all networking requirements are met; a quick disk-layout check is sketched after the list below.
- Configuring the machine for a single-node evaluation setup
- Configuring the machine for a multi-node HA-ready production setup
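For example, a quick way to review the disk layout and mount points on the prospective GPU node is the following (a minimal sketch; the expected partitions are described on the pages linked above):
# List block devices, partitions, file systems, and mount points
lsblk -f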
Step 2: Copying the interactive installation wizard to the target machine for installation
For online installation
- SSH to any of the server machines.
- Run the following command to copy the contents of the UiPathAutomationSuite folder to the GPU node (the username and DNS are specific to the GPU node):
sudo su -
scp -r /opt/UiPathAutomationSuite <username>@<node dns>:/opt/
scp -r ~/* <username>@<node dns>:/opt/UiPathAutomationSuite/
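Optionally, you can confirm that the files reached the GPU node before moving on; a minimal check (the username and DNS are placeholders, as above):
# List the copied installer contents on the GPU node
ssh <username>@<node dns> "ls -lh /opt/UiPathAutomationSuite"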
For offline installation
- SSH to any of the server nodes.
- Ensure that the /opt/UiPathAutomationSuite directory contains the sf-infra.tar.gz file (it is part of the installation package download step).
- Run the following command to copy the installer folder to the GPU node (the username and DNS are specific to the GPU node):
scp -r /opt/UiPathAutomationSuite <username>@<node dns>:/var/tmp
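Optionally, confirm that the bundle exists on the server node and that the copy reached the GPU node; a minimal check (the username and DNS are placeholders, as above):
# Confirm the offline infrastructure bundle is present on the server node
ls -lh /opt/UiPathAutomationSuite/sf-infra.tar.gz
# Confirm the copied folder is present on the GPU node
ssh <username>@<node dns> "ls -lh /var/tmp/UiPathAutomationSuite"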
Step 3: Running the interactive installation wizard to configure the dedicated node
For online installation
- SSH to the GPU node.
- Run the following commands:
sudo su -
cd /opt/UiPathAutomationSuite
chmod -R 755 /opt/UiPathAutomationSuite
yum install unzip jq -y
CONFIG_PATH=/opt/UiPathAutomationSuite/cluster_config.json
UNATTENDED_ACTION="accept_eula,download_bundle,extract_bundle,join_gpu" ./installUiPathAS.sh
For offline installation
- Connect via SSH to the GPU dedicated node.
- Install the platform bundle on the GPU dedicated node using the following script:
sudo su
mv /var/tmp/UiPathAutomationSuite /opt
cd /opt/UiPathAutomationSuite
chmod -R 755 /opt/UiPathAutomationSuite
./install-uipath.sh -i ./cluster_config.json -o ./output.json -k -j gpu --offline-bundle ./sf-infra.tar.gz --offline-tmp-folder /opt/UiPathAutomationSuite/tmp --install-offline-prereqs --accept-license-agreement
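After the join completes, you can check from a server node that the GPU node appears in the cluster; a minimal sketch, assuming kubectl is configured on the server node:
# The new GPU node should be listed and eventually reach the Ready state
kubectl get nodes -o wide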
Enabling the GPU on the cluster
- Log in to any server node.
- Navigate to the installer folder (UiPathAutomationSuite):
cd /opt/UiPathAutomationSuite
- Enable the GPU on the cluster by running the following command on any server node:
sudo ./configureUiPathAS.sh gpu enable
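To confirm that the GPU is advertised to Kubernetes after enabling it, you can inspect the node's resources; a minimal sketch, assuming kubectl is configured on the server node and <gpu-node-name> is a placeholder for your GPU node:
# The nvidia.com/gpu resource should appear under Capacity and Allocatable
kubectl describe node <gpu-node-name> | grep -i "nvidia.com/gpu"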