UiPath Document Understanding

OCR Services

About OCR Services

OCR services are used for the following purposes:

  • At data labeling time, when importing documents into Data Manager. The services available for this step are UiPath Document OCR (free in cloud or on-premises), Google Cloud OCR (cloud only), Microsoft Read OCR (cloud or on-premises), and Omnipage (on-premises only).
  • At run time, when calling models from RPA workflows. The services available for this step are all the OCR engines integrated with the UiPath RPA platform, including the above, plus ABBYY FineReader, Microsoft OCR (legacy), Microsoft Project Oxford OCR, and Tesseract.

In production, we recommend calling the OCR engine using the Digitize Document activity in your workflow and passing the resulting Document Object Model as input to the activity that calls the ML model. For this purpose, use the Machine Learning Extractor activity (Official feed).

As a quick convenience for testing purposes, you can also configure the OCR directly in AI Center (Settings window), but this is not recommended for production deployments.

On Premises Deployment Options

UiPath Document OCR has four deployment options available:

  • On the Robot, using the LocalServer activity package and the UiPath.OCR.Activities package version 3.1.0-preview or later. This requires no internet access and no additional hardware, but the Robot machine needs a CPU with AVX2 support.
    • This should be your default option. For larger volumes, you can add more Robots.
  • Standalone Docker container running on a Linux GPU machine (see below; recommended for volumes over 1M pages/year). Internet access is required for licensing/metering.
    • This should be your default option for large volumes of over 2-3M pages per year.
  • Standalone Docker container running on a Linux CPU machine (see below). Internet access is required for licensing/metering.
    • Only for rare situations where your Robot machines run on CPUs without AVX2 support, or where a GPU cannot be obtained.
  • ML Skill in AI Center (see the ML Packages section; GPU strongly recommended). Internet access is not required on-premises if the AI Center installation is air-gapped.
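
Since the LocalServer and CPU-container options depend on AVX2, a quick Linux-only sketch for checking support is shown below; the check itself is our addition, not part of the product (on Windows Robot machines you would consult the processor's spec sheet instead):

```shell
# Check whether this host's CPU supports the AVX2 instruction set,
# which the CPU-based UiPath Document OCR options require.
# Linux-only: reads the CPU flag list from /proc/cpuinfo.
if grep -qi -m1 avx2 /proc/cpuinfo 2>/dev/null; then
  avx2_status="supported"
else
  avx2_status="not supported"
fi
echo "AVX2: ${avx2_status}"
```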


This section details the hardware and software requirements for installing OCR Engines.

Hardware Requirements

Machines involved: VM in the cloud, an on-premises box, or a laptop
Operating systems: Windows (Windows 10) or Linux (Ubuntu/CentOS/RedHat)
Computing engines: CPU or GPU
OCR: UiPath OCR CPU, UiPath OCR GPU, or OmniPage OCR CPU

[Per-engine sizing table: CPU Cores and Video RAM (GB) for UiPath CPU, UiPath GPU, and OmniPage CPU.]
Software Requirements

The software requirements for OCR Engines are the same as for Data Manager.

Network Configuration

Data Manager needs access to the OCR engine at <IP>:<port_number>. The OCR engine can be UiPath Document OCR on-premises, Omnipage OCR on-premises, Google Cloud Vision OCR, Microsoft Read Azure, or Microsoft Read on-premises.

Robots need access to the OCR engine at <IP>:<port_number>. The same OCR options as above apply, except for Omnipage, which is available directly in the Robots as an Activity Pack.

OCR engines need access to the Licensing server hosted by UiPath in Azure, on port 443.
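
The reachability requirements above can be smoke-tested from each machine with a small helper; the function name and the example address below are illustrative, not part of the product:

```shell
# Hypothetical helper: check that this machine can open a TCP connection
# to an OCR endpoint at <IP>:<port_number> (usable for any of the engines
# listed above, or for the UiPath licensing server on port 443).
check_ocr_endpoint() {
  local host="$1" port="$2"
  # bash's /dev/tcp pseudo-device attempts a TCP connect; time out after 3s.
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example usage (replace with your OCR engine's actual address):
if check_ocr_endpoint 127.0.0.1 5000; then
  echo "OCR endpoint reachable"
else
  echo "OCR endpoint NOT reachable - check firewalls and routing"
fi
```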

Minimal Trial or Proof-of-Concept Configuration

If you only want to serve pre-trained, out-of-the-box models, you can run an OCR engine on your Windows 10 laptop. Make sure Docker Desktop has 8 GB of RAM available.

If you want to try training a custom model as a demo on a small volume of data (under 100 documents), you can run the OCR engine in an environment limited to 4 GB of RAM. For small cases like this, a GPU for the OCR engine may not be necessary.
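
For such a constrained trial, the container's memory can be capped with Docker's `--memory` flag. The line below is a sketch based on the CPU container command from the install steps later on this page; `<image>` is a placeholder for the image name, which this page does not spell out:

```shell
# Sketch: run the CPU OCR container for a small trial, capped at 4 GB RAM.
# '<image>' is a placeholder for the image pulled during installation.
trial_cmd="docker run -d -p 5000:80 --memory 4g -e LicenseAgreement=accept <image>"
echo "${trial_cmd}"
```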


OCR engines are containerized applications that run on top of Docker. You cannot run them on the same machine as AI Center on-premises. To run them on a separate machine, the prerequisites installer commands below can be used to set up Docker and, optionally, the NVIDIA drivers. These scripts must not be run on the machine where AI Center will be installed.

The prerequisites for OCR Engines are the same as for Data Manager.

(Optional) GPU Machine Install


Run this command:

curl -fsSL | sudo bash -s -- --env gpu

On some systems, running the command twice or rebooting might be required to install all requirements.
Azure-specific: to use the NV-series virtual machines, you need to either install the NVIDIA driver before executing the above command, or use a Driver Extension from Azure to install the NVIDIA driver matching that tier's GPU model.
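
After the installer completes, you can sanity-check that the NVIDIA driver landed; this verification step is our addition, not part of the installer:

```shell
# Check whether the NVIDIA driver's management tool is on PATH; its
# presence is a quick indicator that the GPU driver installed correctly.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_driver="present"
else
  gpu_driver="missing"
fi
echo "NVIDIA driver: ${gpu_driver}"
# With the driver and Docker in place, a full GPU smoke test would be:
#   docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```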

Azure VMs

If you are installing on a VM in Azure, then use this command instead:

curl -fsSL | sudo bash -s -- --env gpu --cloud azure


UiPath OCR

UiPath OCR is a proprietary OCR technology from UiPath, supporting the characters used by the following Latin-script languages: English, French, German, Italian, Portuguese, Romanian, and Spanish. Text in other languages is recognized, but without diacritics; for instance, “Ł” in Polish is recognized as “L”. Pages processed using UiPath OCR are not counted towards the page quota purchased with the Document Understanding Enterprise license, so UiPath OCR is free to use.

UiPath OCR is available with the following deployment types:

  • Cloud public URLs - more details on the Public Endpoints page.
  • On-premises standalone Docker container (requires Internet access)
  • On-premises as an ML Skill in an AI Center on-premises regular deployment (requires Internet access)
  • On-premises as an ML Skill in an AI Center on-premises air-gapped deployment (does not require Internet access)
  1. To install the UiPath OCR standalone Docker container, run these commands:
docker login -u *** -p ***
docker pull
  2. Run using CPUs:
docker run -d -p 5000:80 -e LicenseAgreement=accept
  3. Run using GPUs:
docker run -d -p 5000:80 --gpus all -e LicenseAgreement=accept
  4. In AI Center, when creating a new ML Package, at the bottom of the screen there is an optional OCR configuration section where you can define the server-side OCR engine type, the OCR URL, and the OCR Key. The OCR Key is the API Key you obtain from the Licenses section of your Automation Cloud account. This is the OCR configuration that will be used by the Machine Learning Extractor activity if you check the "UseServerSideOCR" box. This box is unchecked by default; in that case, the extractor uses the OCR from the Digitize Document activity.


Running on the Same Machine as AI Center

UiPath Document OCR container and Omnipage OCR container cannot run on the same machine as AI Center on-premises.

OmniPage OCR

The Omnipage Docker container is intended to be used only with Data Manager, for importing documents in languages that UiPath Document OCR does not yet support.

Run these commands:

docker login -u *** -p ***
docker pull
docker run -d -p 5100:80 -e LicenseAgreement=accept

Google Cloud OCR

The endpoint can be obtained from the Google Cloud Platform documentation. The ApiKey can be obtained from your Google Cloud Platform Console if you have a Google Cloud Vision service in your subscription.

Microsoft Read



Applicable to both Azure and on-premises container endpoints.

In the case of Azure services, you need to provide both the Endpoint and the ApiKey.

In the case of on-premises container endpoints, an API Key is not necessary.
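
As an illustration of the Endpoint/ApiKey pairing, a Read 3.2 request against an Azure endpoint looks roughly like the commented template below; the resource name, key, and document URL are placeholders, and for an on-premises container the key header is simply dropped:

```shell
# Build the Read 3.2 analyze URL from a placeholder Azure endpoint.
ENDPOINT="https://<your-resource>.cognitiveservices.azure.com"
read_url="${ENDPOINT}/vision/v3.2/read/analyze"
echo "POST ${read_url}"
# Azure endpoint (requires the subscription key header):
#   curl -X POST "${read_url}" \
#     -H "Ocp-Apim-Subscription-Key: <your-api-key>" \
#     -H "Content-Type: application/json" \
#     --data '{"url":"https://example.com/sample.png"}'
# On-premises container (no API Key required):
#   curl -X POST "http://<IP>:<port_number>/vision/v3.2/read/analyze" ...
```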

Configuring OCR service in Data Manager and AI Center Document Understanding ML Skills

The table below shows how to configure the 5 supported OCR engine types in both Data Manager and AI Center.



The ocr.method argument corresponds to the OCR Engine dropdown in the ML Package creation view in AI Center.

OCR Engine - Key

  • UiPath Document OCR: Document Understanding API Key from your UiPath Automation Cloud account (Enterprise Plan)
  • Google Cloud OCR: GCP Console API Key
  • Microsoft Read 2.0 On-Prem: API Key not required
  • Microsoft Read 2.0 Azure: API Key for your resource from the Azure Portal
  • Microsoft Read 3.2 On-Prem: API Key not required
  • Microsoft Read 3.2 Azure: API Key for your resource from the Azure Portal
