- Release notes
- Overview
- Setup and configuration
- Software requirements
- Hardware requirements
- Deploying the server
- Connecting to the server
- Data storage
Software requirements
The supported operating systems for the Computer Vision server are:
- Microsoft Windows 10 21H2, Windows 11
- Ubuntu v16.04, v18.04, v20.04, v22.04
- Red Hat Enterprise Linux 8
The Windows Computer Vision server uses a container-based deployment with Docker running in Windows Subsystem for Linux (WSL) 2. The following must be installed:
- WSL 2
- Docker Desktop for Windows or Docker Engine (if installed directly in WSL)
- Nvidia Windows 11 Display Driver
- Nvidia Container Toolkit
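Before deploying, it can help to confirm that each layer of the stack is reachable from a WSL 2 shell. The sketch below only checks that the relevant binaries are on PATH; the commented-out smoke test additionally exercises GPU passthrough (the image tag there is an assumption; any CUDA-enabled image works):

```shell
# Minimal sketch: check that each prerequisite binary is on PATH inside WSL 2.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_tool docker
check_tool nvidia-smi

# GPU passthrough smoke test (uncomment once Docker is running; the image
# tag is an assumption -- substitute any CUDA-enabled image):
# docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu20.04 nvidia-smi
```

If `nvidia-smi` reports the GPU both on the host and from inside the container, the driver, WSL 2, and the container toolkit are wired together correctly.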
To run the server on Ubuntu, the following resources must be installed on the machine you want to deploy to:
- CUDA v11.1
- cuDNN8 v8.2.1
- Docker
- Nvidia Container Toolkit
For convenience, UiPath provides a script to install these prerequisites. This script is provided "as is", without any implied or explicit guarantee. To install the prerequisites using this script, run the following line in the terminal of the GPU machine:
curl -fsSL https://raw.githubusercontent.com/UiPath/Infrastructure/main/ML/ml_prereq_all.sh | sudo bash -s -- --env gpu
This command runs a script hosted by UiPath that automatically downloads and installs the resources listed above. Once the script finishes and the resources are installed, starting a server instance of any Machine Learning model requires a zip file containing the model; this zip file also includes an entry point script and a local speed test script.
If you would like to know more about the technical details of this script, you can visit the UiPath Infrastructure GitHub repository.
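If piping a remote script directly into `sudo bash` is a concern in your environment, the same installation can be done in two steps, using the same URL and `--env gpu` flag as the one-liner above, so the script can be reviewed before it runs as root:

```shell
# Download the prerequisite script, inspect it, then run it with the same
# flag used by the one-liner above.
install_prereqs() {
  url="https://raw.githubusercontent.com/UiPath/Infrastructure/main/ML/ml_prereq_all.sh"
  curl -fsSL "$url" -o ml_prereq_all.sh
  less ml_prereq_all.sh               # review before granting root
  sudo bash ml_prereq_all.sh --env gpu
}
```

Call `install_prereqs` from the terminal of the GPU machine; the end result is identical to the one-liner.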
To run the server on Red Hat Enterprise Linux 8, the following resources must be installed on the machine you want to deploy to:
- CUDA v11.1
- cuDNN8 v8.2.1
- Podman
For convenience, UiPath provides a script to install these prerequisites. This script is provided "as is", without any implied or explicit guarantee. To install the prerequisites using this script, run the following line in the terminal of the GPU machine:
curl -fsSL https://raw.githubusercontent.com/UiPath/Infrastructure/main/ML/ml_prereq_podman_rhel8.sh | sudo bash -s -- --env gpu
This command runs a script hosted by UiPath that automatically downloads and installs the resources listed above. Once the script finishes and the resources are installed, starting a server instance of any Machine Learning model requires a zip file containing the model; this zip file also includes an entry point script and a local speed test script.
If you would like to know more about the technical details of this script, you can visit the UiPath Infrastructure GitHub repository.
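Once the Podman-based prerequisites are installed, a quick way to confirm the container runtime can reach the GPU is to run `nvidia-smi` inside a throwaway CUDA container. The invocation below is a sketch, not a verified recipe: the `label=disable` security option and the image tag are assumptions, and it relies on the NVIDIA OCI hook that the prerequisite script configures; consult the NVIDIA Container Toolkit documentation for your version if it fails.

```shell
# Sketch: run nvidia-smi inside a CUDA container to verify GPU access via Podman.
# label=disable relaxes SELinux labeling for this one-off check (assumption).
gpu_smoke_test() {
  podman run --rm --security-opt=label=disable \
    docker.io/nvidia/cuda:11.1.1-base-ubuntu20.04 nvidia-smi
}
```

If `gpu_smoke_test` prints the same GPU table that `nvidia-smi` shows on the host, the server containers should be able to use the GPU as well.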