UiPath Activities

Deploying a Local Computer Vision Server

If you want to deploy your own local Computer Vision Machine Learning server to use with the Computer Vision activities, follow the steps on this page.

Note:

The information on this page is valid only after performing the steps explained in the Setup for Machine Learning Solutions page, which describes how to install all the necessary prerequisites on the target GPU machine.

Requirements

Hardware Requirements

You need a machine running Ubuntu v16.04 or Red Hat Enterprise Linux v7.x. The table below lists processing times at various screen resolutions for the minimum (NVIDIA K80 GPU) and recommended (NVIDIA P40 GPU) configurations:

| Resolution | Time, minimum (NVIDIA K80) | Time, recommended (NVIDIA P40) |
|------------|----------------------------|--------------------------------|
| 1280x800   | 1.349                      | 0.437                          |
| 1440x900   | 1.674                      | 0.541                          |
| 1680x1050  | 2.124                      | 0.675                          |
| 1920x1080  | 2.336                      | 0.756                          |
| 1920x1200  | 2.593                      | 0.825                          |
| 2560x1600  | 4.180                      | 1.341                          |

For more detailed hardware specifications, see the respective official documentation for Ubuntu 16.04 Desktop Edition and Red Hat Enterprise Linux.

Software Requirements

For details regarding software requirements and installation prerequisites, see the Setup for Machine Learning Solutions page.

Deploying the Computer Vision Machine Learning Model

Moving the ML Model File to the GPU Machine

On the Physical GPU Machine

  1. Download the Controls_detection.zip file to your machine from the link provided to you by your sales rep.

Connecting Remotely to the GPU Machine

  1. Download the Controls_detection.zip file to your machine from the link provided to you by your sales rep.
  2. Open a new PowerShell terminal on the local machine and use the cd command to navigate to the folder containing the Controls_detection.zip file. For example, if the path of the file is C:\Users\[WINDOWS-USER]\Downloads\Controls_detection.zip, the command should look like:
cd C:\Users\[WINDOWS-USER]\Downloads\
  3. Use the scp .\Controls_detection.zip ServerUser@server-ip:/GPUMachinePath command to copy the file over to the GPU machine. To find out the destination path on the GPU machine, use the pwd command in the terminal connected to the GPU machine via SSH. For example, if the path is /home/ML-Test, the command should look like:
scp .\Controls_detection.zip ServerUser@server-ip:/home/ML-Test

Note:

Alternatively, you can use either PuTTY or WinSCP to connect to the GPU machine and transfer the files without using the scp command.
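After copying, you can optionally confirm the file arrived intact by comparing checksums on both sides. A minimal sketch, using PowerShell's Get-FileHash on the local machine and sha256sum in the SSH session:

```shell
# On the local Windows machine (PowerShell):
Get-FileHash .\Controls_detection.zip -Algorithm SHA256

# On the GPU machine (SSH session); the hash should match the one above:
sha256sum Controls_detection.zip
```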

Unzipping the Archive

On Ubuntu

Run the following commands:

sudo apt-get install -y unzip
unzip Controls_detection.zip

On Red Hat Enterprise Linux

Run the following commands:

sudo yum install unzip
unzip Controls_detection.zip

Starting the Server

  1. On the GPU Machine, use the cd Controls_detection command to navigate to the Controls_detection folder.
  2. Use the bash start_server.sh command to begin deploying the server.
  3. Wait until the installation is 100% complete and press Ctrl + C to stop any programs that might be running in the background.
  4. Run bash test.sh to test the speed of the neural network.
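The steps above can be sketched as a single shell session on the GPU machine, assuming the archive was unzipped in the current directory:

```shell
# Navigate into the unzipped model folder
cd Controls_detection

# Deploy the server; wait until the installation reports 100%,
# then press Ctrl + C to stop anything still running in the foreground
bash start_server.sh

# Benchmark the speed of the neural network
bash test.sh
```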

Connecting to the Computer Vision Server

Once the server is deployed, to connect to it when using the Computer Vision activities, you must change the value of the URL property of the CV Screen Scope activity to http://[MACHINE_URL]:[PORT]/v1/models/controls_detection:predict, where [MACHINE_URL] is the address of the machine on which the server is deployed, and [PORT] is the port on which the Docker container is published. For example, if the URL of your machine is k80-ubuntu.azure.com, and the port you have chosen to work with is 8501, then the value of the property should be http://k80-ubuntu.azure.com:8501/v1/models/controls_detection:predict.

The default URL can also be changed from the Project Settings page, under the Computer Vision activities tab.
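As a sketch, the endpoint URL can be assembled from the machine address and port; the host name below is the hypothetical example from above, and 8501 is the default port:

```shell
MACHINE_URL="k80-ubuntu.azure.com"   # hypothetical example host
PORT=8501                            # default Computer Vision port
CV_URL="http://${MACHINE_URL}:${PORT}/v1/models/controls_detection:predict"
echo "$CV_URL"
```

Paste the printed value into the URL property of the CV Screen Scope activity.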

Important!

Make sure you have the port that is used by the Computer Vision activities (default 8501) open on the machine the model is deployed on.

To open the port on the server machine, you have to run the following console commands, replacing [PORT] with the actual port you want to use and [your-default-zone] with your system's zone (by default public):

For Ubuntu

sudo apt install firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=[your-default-zone] --permanent --add-port=[PORT]/tcp
sudo firewall-cmd --reload

For Red Hat Enterprise Linux

sudo yum install firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=[your-default-zone] --permanent --add-port=[PORT]/tcp
sudo firewall-cmd --reload

For more information on firewalld, see its official documentation.
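After reloading, you can confirm the port is actually open. A minimal check, assuming the default public zone and the default port 8501:

```shell
# List the ports opened in the zone; 8501/tcp should appear in the output
sudo firewall-cmd --zone=public --list-ports
```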

Changing the Default Port

The default port on which the model server listens is 8501. It is specified in the ./Controls_detection/start_server.sh file, where the line -p 8501:8501 designates the port mapping.

To change the default port, edit the start_server.sh file and modify the -p 8501:8501 line to -p [NEW_PORT]:8501, where [NEW_PORT] is the port you want to use. The default start_server.sh file looks like this:

#!/bin/bash
docker run \
    --name tensorflow-server-controls-detection \
    -p 8501:8501 \
    --mount type=bind,source=$(pwd)/models,target=/models \
    -e MODEL_NAME=controls_detection \
    -t tensorflow/serving:1.12.0-gpu

So, for example, if you wanted to change the port to 80, the content of the start_server.sh file should look like:

#!/bin/bash
docker run \
    --name tensorflow-server-controls-detection \
    -p 80:8501 \
    --mount type=bind,source=$(pwd)/models,target=/models \
    -e MODEL_NAME=controls_detection \
    -t tensorflow/serving:1.12.0-gpu
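Once the container is running, you can check the active port mapping with docker ps; a minimal sketch using the container name set in start_server.sh:

```shell
# Shows the host-to-container port mapping, e.g. 0.0.0.0:80->8501/tcp
docker ps --filter name=tensorflow-server-controls-detection --format '{{.Ports}}'
```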
