This page details the hardware and software requirements, as well as the prerequisites, for installing AI Fabric multi-node on a DockerEE Kubernetes cluster.
AI Fabric multi-node requires a minimum of four nodes for the installation. The four nodes are used as follows:
- 1 Universal Control Plane (UCP) Manager node
- 1 Docker Trusted Registry (DTR) node
- 2 worker nodes
The minimum node configuration specifies, for each node, the OS Disk (GB) and External Data Disk (GB) sizes.
The ML Skills and Pipeline jobs run on the worker node(s). To scale, add as many worker nodes as needed to run more ML Skills and Pipeline jobs. There is no universal value for how many resources one ML Skill/Pipeline job consumes, as this depends on the model. However, the figures below show the minimum resources used by an ML Skill/Pipeline job, along with the resources used by a UiPath Document Understanding model as a baseline. Note that by default two replicas are deployed for each ML Skill; the numbers below are for one replica:
Minimum for Serving (ML Skill)
Minimum for Training (Pipeline)
DU model Serving
DU model Training (1000 images)
While estimating the number of worker nodes you need, do not forget to add the resources required by the AI Fabric core services. In a DockerEE cluster, the core services need at least 6.5 CPU and 18.75 GB RAM to run.
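As a back-of-the-envelope check, the sizing guidance above can be sketched in a few lines. Only the core-services overhead (6.5 CPU, 18.75 GB RAM) and the two-replicas-per-skill default come from this page; the per-replica skill consumption and the worker-node size below are hypothetical placeholders, since actual consumption depends on the model.

```python
# Rough worker-node capacity estimate for an AI Fabric DockerEE cluster.
# Core services overhead and replica count are taken from this page;
# the per-replica skill figures and node size are illustrative ASSUMPTIONS.

CORE_CPU, CORE_RAM_GB = 6.5, 18.75   # AI Fabric core services (from this page)
REPLICAS_PER_SKILL = 2               # default replica count (from this page)

SKILL_CPU, SKILL_RAM_GB = 1.0, 2.0   # ASSUMED per-replica consumption
NODE_CPU, NODE_RAM_GB = 8, 32        # ASSUMED size of one worker node

def nodes_needed(n_skills: int) -> int:
    """Return a rough worker-node count for n_skills ML Skills."""
    cpu = CORE_CPU + n_skills * REPLICAS_PER_SKILL * SKILL_CPU
    ram = CORE_RAM_GB + n_skills * REPLICAS_PER_SKILL * SKILL_RAM_GB
    cpu_nodes = -(-cpu // NODE_CPU)      # ceiling division
    ram_nodes = -(-ram // NODE_RAM_GB)
    return int(max(cpu_nodes, ram_nodes))

print(nodes_needed(5))  # 5 skills -> 10 serving replicas on top of core services
```

Under these assumed numbers, five ML Skills (ten replicas) plus the core services fit on three worker nodes; substitute your own measured per-model figures before sizing a real cluster.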
Most scenarios do not require training on a GPU, as very few model architectures can execute on a GPU but not on a CPU. If you have constraints on model training time, it is recommended that you add a GPU with at least 8 GB of video RAM. GPUs need to be attached to worker nodes.
Note: Only NVIDIA GPUs are currently supported.
Only DockerEE version 3.1 and above is supported. Currently, installation is officially supported on an empty cluster only.
The following table lists the operating system(s) officially supported for the AI Fabric on-premises installation.
7.4, 7.5, 7.6, 7.7, 7.8
The following table lists the browser(s) officially supported for the AI Fabric on-premises installation.
64 or above
80 or above
66 or above
Before starting the installation, the following prerequisites must be met:
- Orchestrator 20.4
See the guide here for various ways to install Orchestrator.
- SQL Server
It is highly recommended that you use the same SQL Server that was used when installing Orchestrator, as detailed here. For the installation, you need the hostname, admin username, and password of this SQL Server.
Make sure that SQL Server Authentication mode is enabled.
AI Fabric uses SQL solely for metadata storage, so the amount of data stored is very small. There is no need to provision a lot of storage capacity for these tables.
AI Fabric runs on a Kubernetes cluster. All communication into and out of the cluster is secured with HTTPS (TLS). Tenant- and user-specific traffic uses modern protocols (OAuth 2.0 and OpenID Connect) supported by UiPath's Identity Server.
The diagram below details the architecture of the various components in AI Fabric.
At a high-level, AI Fabric core services manage the deployment and training of machine learning models.
A deployment of a machine learning model (called an ML Skill) is a container with the code and model artifacts. AI Fabric creates an endpoint from that container that is permissioned and replicated.
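To illustrate how such a permissioned endpoint is typically consumed, the snippet below builds an HTTPS prediction request. The URL, skill name, and token are hypothetical placeholders; the actual endpoint address and access token come from your AI Fabric deployment.

```python
import json
import urllib.request

# Hypothetical ML Skill endpoint and token -- replace with the values from
# your own AI Fabric deployment; these placeholders will not resolve.
SKILL_URL = "https://aifabric.example.com/skills/invoice-extractor"
TOKEN = "<access-token>"

def build_request(payload: dict) -> urllib.request.Request:
    """Build an HTTPS POST request against an ML Skill endpoint."""
    return urllib.request.Request(
        SKILL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",  # the endpoint is permissioned
        },
        method="POST",
    )

req = build_request({"text": "Invoice #123, total $45.00"})
print(req.get_method(), req.full_url)
```

Because the ML Skill is replicated, the cluster can serve several such requests concurrently behind the same endpoint.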
A training or evaluation of a machine learning model creates a container image on the fly and executes code predefined by the AI Fabric user or by an out-of-the-box retrainable model.
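The user-defined code that a training Pipeline executes can be pictured as a small entry-point class, sketched below. The class and method names here are hypothetical, as this page does not specify the exact contract; consult the AI Fabric retrainable-model documentation for the actual interface.

```python
# Illustrative sketch of code a training Pipeline might execute inside the
# container image it builds on the fly. The Main class and its method names
# are ASSUMED for illustration, not the official AI Fabric contract.

class Main:
    def __init__(self):
        self.model = None

    def train(self, training_directory: str) -> None:
        """Load data from training_directory and fit the model."""
        # real code would read the data files and fit an actual model
        self.model = {"trained_on": training_directory}

    def evaluate(self, evaluation_directory: str) -> float:
        """Return an evaluation score for the trained model."""
        return 1.0 if self.model is not None else 0.0

    def save(self) -> None:
        """Persist model artifacts so they can be packaged into an ML Skill."""
        # real code would serialize self.model to disk
        pass

m = Main()
m.train("/data/training")
print(m.evaluate("/data/evaluation"))
```

After training, the saved artifacts feed back into the ML Skill deployment flow described above.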