This page describes the steps needed to deploy your own local Machine Learning model to use with UiPath features that require it.
This setup works on on-premises NVIDIA GPUs without any need for virtualization, and also on virtualized GPUs offered by cloud providers such as AWS, Azure, and GCP.
Examples of GPUs without virtualization:
- GTX 1080 Ti (11 GB)
- RTX 2080 Ti (11 GB)
- RTX 2070 SUPER (8 GB)
Examples of GPUs with virtualization:
- Tesla V100
- Tesla P40
- Tesla K80
The main difference between these two types of GPUs is that the virtualized ones usually have more GPU RAM and are the kind offered by most cloud providers. More GPU RAM increases the maximum size of the image you can feed to the model. Note, however, that virtualized GPUs are not significantly faster than consumer GPUs.
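Since GPU RAM determines the maximum input image size, it is worth checking how much memory your GPU exposes before deploying. The sketch below is illustrative: the 8000 MiB threshold is an assumption for the example, not an official UiPath requirement, and on a real machine the memory value would come from `nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits`.

```shell
#!/bin/sh
# Sketch: decide whether a GPU has enough memory for large input images.
# The 8000 MiB threshold is an illustrative assumption, not a UiPath requirement.
check_gpu_mem() {
    mem_mib="$1"
    if [ "$mem_mib" -ge 8000 ]; then
        echo "ok: ${mem_mib} MiB"
    else
        echo "low: ${mem_mib} MiB"
    fi
}

# Example with a hard-coded value; on the GPU machine you would pass the
# output of nvidia-smi instead.
check_gpu_mem 11264   # e.g. an RTX 2080 Ti
```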
The following resources must be installed on the machine you want to deploy to:
- CUDA v9.0
- cuDNN v7.6
- NVIDIA Docker v2
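After installation, a quick way to confirm the prerequisites are available is to check that their command-line tools are on `PATH`. This is a minimal sketch; it only checks that the commands exist, not that the versions match (`nvcc` ships with the CUDA toolkit, and `nvidia-docker` with NVIDIA Docker v2).

```shell
#!/bin/sh
# Sketch: report which of the given commands are missing from PATH.
check_cmds() {
    missing=""
    for cmd in "$@"; do
        command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
    done
    if [ -z "$missing" ]; then
        echo "ok"
    else
        echo "missing:$missing"
    fi
}

# On the GPU machine, you would run something like:
#   check_cmds nvcc nvidia-docker docker
```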
This can be easily done by running the following line in the terminal of the GPU machine:
```shell
curl -fsSL https://raw.githubusercontent.com/UiPath/Infrastructure/master/ML/prereq_installer.sh | sudo sh
```
A reboot is required after running the installation script.
This line runs a script hosted by UiPath that automatically downloads and installs the resources listed above. Once the script finishes and the resources are installed, starting a server instance of a Machine Learning model requires a zip file containing the model. This zip file contains an entry point script and a local speed test script.
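Before starting the server, it can help to confirm the unpacked model bundle actually contains its scripts. The sketch below assumes illustrative file names (`entrypoint.sh`, `speedtest.sh`); the real names come from the UiPath-provided zip and may differ.

```shell
#!/bin/sh
# Sketch: sanity-check an unpacked model bundle before starting the server.
# The file names below are illustrative assumptions, not the actual layout
# of a UiPath model zip.
check_bundle() {
    dir="$1"
    for f in entrypoint.sh speedtest.sh; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing $f"
            return 1
        fi
    done
    echo "bundle ok"
}

# Usage after unpacking, e.g.:
#   unzip -o model.zip -d model/ && check_bundle model/
```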
If you would like to know more about the technical details of this script, visit the UiPath Infrastructure GitHub repository.