AI Computer Vision User Guide
Last updated Nov 11, 2024

Deploying the server

If you want to deploy your own Computer Vision server with Docker on either Microsoft Windows or Ubuntu, or with Podman on Red Hat Enterprise Linux, and use it with Computer Vision activities, follow the steps below.

Microsoft Windows

Before deploying the server, make sure to check the software and hardware requirements.

Installing WSL

First, WSL has to be installed on your machine.

To install WSL, run the following command, where {distribution} is the Linux distribution you want to use:
wsl --install -d {distribution}
Note: The recommended operating system for this installation process is Ubuntu.
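
For example, assuming you use the recommended Ubuntu distribution, the command would be:
wsl --install -d Ubuntu
You can list the distribution names available for installation with wsl --list --online.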

Installing Nvidia drivers

To run the Computer Vision Server on a Windows machine, you must download and install the Nvidia Windows 11 display driver on the system with a compatible GeForce or Nvidia RTX/Quadro card from the official Nvidia website.

Important: Any kind of Linux display driver installed in WSL might cause errors.
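
Once the Windows driver is installed, a quick way to confirm that the GPU is visible from inside WSL is to run the standard Nvidia utility in the Linux terminal; no Linux driver needs to be installed for this to work:
nvidia-smi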

Installing Docker and Nvidia Container Toolkit

You can install both Docker and Nvidia Container Toolkit by running the following script:

https://github.com/UiPath/Infrastructure/blob/main/ML/ml_prereq_wsl.sh
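
If you want to verify the setup before loading the Computer Vision image, you can run a throwaway CUDA container; the image tag below is only an illustrative example and is not part of the UiPath deliverable:
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
If the GPU details are printed, Docker and the Nvidia Container Toolkit are working together correctly.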

Running the server

To run the Computer Vision Server, you have to run the following commands in the WSL Linux Terminal:

export CV_URL="LINK_FROM_SALES_REP"
wget "$CV_URL" -O controls_detection.tar
docker load -i controls_detection.tar
docker run -p 8501:5000 --gpus all controls_detection eula=accept
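
The commands above keep the server attached to the current terminal. As a quick sanity check, you can confirm from a second WSL terminal that the container is running and that the published port is reachable; the exact HTTP response depends on the server version, so even an error status on the root path indicates the port is open:
docker ps
curl -I http://localhost:8501/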

Making the server available on the network

For the server to be visible on the local network, a firewall rule must be created on Windows, with an Inbound Rule for the port on which the Computer Vision server is available. By default, the port is 8501.
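
As a sketch, the rule can be created from an elevated PowerShell prompt; the display name is arbitrary and used here only for illustration:
New-NetFirewallRule -DisplayName "UiPath Computer Vision Server" -Direction Inbound -Protocol TCP -LocalPort 8501 -Action Allow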

Because the Linux instance running in WSL has its own virtual network interface controller, the traffic to the host IP is not directly redirected to the IP of the Linux instance. This issue can be bypassed by forwarding the traffic of the host IP to the Linux instance with the following command:

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8501 connectaddress=$wsl_ip connectport=8501

The WSL IP can be found with the following command:

wsl -d {distribution} hostname -I
Note: This issue does not appear when using Docker Desktop on Windows.
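
You can review or remove the forwarding rule later with the standard netsh portproxy commands:
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=8501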

Automatically starting the server

When using Docker Engine installed directly in the WSL Linux distribution, you can start the server automatically when the machine boots by creating a Scheduled Task in Windows. This task is executed at system startup and runs the following PowerShell script, where {distribution} is the installed Linux distribution:
wsl -d {distribution} echo "starting...";
$wsl_ips = (wsl -d {distribution} hostname -I);
$host_ip = $wsl_ips.Split(" ")[0];
netsh interface portproxy add v4tov4 listenport=8501 listenaddress=0.0.0.0 connectport=8501 connectaddress=$host_ip;
wsl -d {distribution} -u root service docker start;
wsl -d {distribution} -u root docker run -p 8501:5000 --gpus all controls_detection eula=accept;
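
One way to register such a task, assuming the script above is saved to a file such as C:\Scripts\start-cv-server.ps1 (a hypothetical path), is with the ScheduledTasks PowerShell cmdlets from an elevated prompt:
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\start-cv-server.ps1"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "UiPathCVServer" -Action $action -Trigger $trigger -User "SYSTEM" -RunLevel Highest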

Installation constraints

This installation process requires a machine that supports nested virtualization. Currently, most cloud VMs with GPUs do not support nested virtualization, so this installation process is best suited for customers who have physical Windows servers with GPUs.

Ubuntu

Before deploying the server, make sure to check the software and hardware requirements.

All the commands listed on this page should be executed in a terminal on the GPU Machine.

Downloading the Computer Vision server image export

Save the link provided to you by your sales representative in the current terminal session:

export CV_URL="LINK_FROM_SALES_REP"

Download the export:

wget "$CV_URL" -O controls_detection.tarwget "$CV_URL" -O controls_detection.tar

Loading the image into Docker

Run the following command:

docker load -i controls_detection.tar

Starting the server

Run the following command:

docker run \
-p 8501:5000 \
--gpus all \
controls_detection eula=accept
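
The command above keeps the server attached to the current terminal. If you want the container to keep running after you log out and to restart with the machine, one option (standard Docker flags, not part of the official steps) is to run it detached with a restart policy; the container name below is only an example:
docker run -d \
--name cv_server \
--restart unless-stopped \
-p 8501:5000 \
--gpus all \
controls_detection eula=accept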

Upgrading the Computer Vision model

Upgrading the model is equivalent to installing a new version of it: the upgrade replaces both the model and its binaries, so the running server inevitably has to stop.

If you perform the upgrade on the same server machine, downtime is to be expected. To avoid it, you can install the new version on a different server machine and, once the installation is complete, switch traffic to it.

A standard upgrade scenario looks like this:

  1. Prepare for and announce downtime (if applicable).
  2. Install the new model in place of the old one.
  3. Run the server.

If your environment uses a multi-node Load Balancer setup, you can avoid downtime altogether by reinstalling each node, one at a time.
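
As a sketch, an in-place upgrade on a single Ubuntu node could look like the following; the container name and the new archive file name are assumptions used for illustration only:
# Stop and remove the running server container (assuming it was started with --name cv_server).
docker stop cv_server && docker rm cv_server
# Load the new image export provided by your sales representative and start it the same way as before.
docker load -i controls_detection_new.tar
docker run -d --name cv_server -p 8501:5000 --gpus all controls_detection eula=accept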

Linux RHEL

Before deploying the server, make sure to check the software and hardware requirements.

All the commands listed on this page should be executed in a terminal on the GPU Machine.

Downloading the Computer Vision server image export

Save the link provided to you by your sales representative in the current terminal session:

export CV_URL="LINK_FROM_SALES_REP"

Download the export:

wget "$CV_URL" -O controls_detection.tarwget "$CV_URL" -O controls_detection.tar

Loading the image into Podman

Run the following command:

podman load -i controls_detection.tar

Starting the server

Run the following command:

podman run -p 8501:5000 --hooks-dir=/usr/share/containers/oci/hooks.d/ \
      --security-opt=label=disable controls_detection eula=accept
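
If you also want the server to start automatically on RHEL, one common approach (not covered by the steps above) is to run the container detached under a name and let Podman generate a systemd unit for it; the container name and unit file path below are assumptions used for illustration:
podman run -d --name cv_server -p 8501:5000 --hooks-dir=/usr/share/containers/oci/hooks.d/ \
      --security-opt=label=disable controls_detection eula=accept
podman generate systemd --new --name cv_server > /etc/systemd/system/cv_server.service
podman stop cv_server && podman rm cv_server
systemctl daemon-reload && systemctl enable --now cv_server.service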

Upgrading the Computer Vision model

Upgrading the model is equivalent to installing a new version of it: the upgrade replaces both the model and its binaries, so the running server inevitably has to stop.

If you perform the upgrade on the same server machine, downtime is to be expected. To avoid it, you can install the new version on a different server machine and, once the installation is complete, switch traffic to it.

A standard upgrade scenario looks like this:

  1. Prepare for and announce downtime (if applicable).
  2. Install the new model in place of the old one.
  3. Run the server.

If your environment uses a multi-node Load Balancer setup, you can avoid downtime altogether by reinstalling each node, one at a time.
