AI Center
Last updated Jun 6, 2024

AI Fabric single-node

Important: Before proceeding, make sure you meet the requirements detailed here.

At a high level, installing AI Fabric involves the following steps:

Step | Active Time | Waiting Time
1. Provision AIF Machine | < 5 min | --
2. Configure Database | < 1 min | --
3. Configure Orchestrator | < 5 min | --
4. Run AI Fabric Infrastructure Installer | < 1 min | ~20 min
5. Run AI Fabric Application Installer | < 5 min | ~20 min
6. Verify Installation | ~5 min | --

Network configuration

  • The Linux machine where AI Fabric will be installed must be able to connect to the Orchestrator machine (domain and port).
  • The Linux machine where AI Fabric will be installed must be able to connect to the SQL Server (domain/IP and port).
  • Robots/Studio that will use AI Fabric need connectivity to the AI Fabric Linux machine.
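The connectivity requirements above can be verified with a short TCP probe before starting the installation. The sketch below is not part of the official installer; the hostnames and ports in the comments are placeholders for your own environment:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values -- substitute your Orchestrator and SQL Server details:
# can_connect("orchestrator.example.com", 443)   # Orchestrator domain and port
# can_connect("sql.example.com", 1433)           # SQL Server domain/IP and port
```

Run this from the AI Fabric Linux machine (and from a Robot/Studio machine against the AI Fabric host) to confirm each direction of connectivity.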

For peripheral Document Understanding components (Data Manager and OCR Engines):

  • Data Manager needs access to AI Fabric on premises at <ai_fabric_host>:<port_number>, or to public SaaS endpoints such as https://invoices.uipath.com if prelabeling is needed (prelabeling is optional).
  • Data Manager needs access to the OCR engine at <ocr_host>:<port_number>. The OCR engine can be UiPath Document OCR on premises, OmniPage OCR on premises, Google Cloud Vision OCR, Microsoft Read Azure, or Microsoft Read on premises.
  • Robots need access to the OCR engine at <ocr_host>:<port_number>. The same OCR options as above apply, except for OmniPage, which is available directly in the Robots as an activity pack.

Connectivity requirements - online installation

The AI Fabric online install refers to an on-premises installation that downloads the AI Fabric application and all related artifacts (e.g., machine learning models) from the internet.

Endpoints the installer connects to

The AI Fabric installer downloads container images and machine learning models to populate your AI Fabric instance with ready-to-use machine learning (this includes the Document Understanding models). For this reason, at installation time, the Linux machine needs access to the following endpoints over HTTPS (port 443):

Host Name | Purpose
registry.replicated.com | Upstream Docker images are pulled via registry.replicated.com. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
proxy.replicated.com | Upstream Docker images are pulled via proxy.replicated.com. The on-premises Docker client uses a license ID to authenticate to proxy.replicated.com. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
replicated.app | Upstream application YAML and metadata are pulled from replicated.app. The current running version of the application (if any) is sent in addition to a license ID. Application IDs are sent to replicated.app to authenticate and receive these YAML files. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
get.replicated.com | Syncs artifacts from Replicated. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
api.replicated.com | API requests to the infrastructure installer. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
k8s.kurl.sh | Kubernetes cluster installation scripts and artifacts are served from kurl.sh. An application identifier is sent in a URL path, and bash scripts and binary executables are served from kurl.sh. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
kurl-sh.s3.amazonaws.com | tar.gz packages are downloaded from Amazon S3 during embedded cluster installations. The IP ranges to allowlist for accessing these can be scraped dynamically from the AWS IP Address Ranges documentation.
*.docker.io | Upstream Docker images are pulled from docker.io. There may be multiple subdomains (e.g., registry-1.docker.io), so the wildcard pattern should be allowed.
*.docker.com | Other upstream Docker images are pulled from docker.com. There may be multiple subdomains, so the wildcard pattern should be allowed.
raw.githubusercontent.com | For scripts that create the persistent volume claim deployment.
quay.io | Provides container images.
registry.k8s.io | Upstream images are pulled from registry.k8s.io.
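As a pre-installation sanity check, the hosts above can be probed for outbound HTTPS reachability from the Linux machine. This is an informal sketch, not part of the installer; the wildcard entries are represented by example subdomains, which you may need to adjust for your firewall rules:

```python
import socket

# Installer endpoints from the table above; wildcard domains are
# represented by example subdomains (an assumption, adjust as needed).
INSTALLER_ENDPOINTS = [
    "registry.replicated.com",
    "proxy.replicated.com",
    "replicated.app",
    "get.replicated.com",
    "api.replicated.com",
    "k8s.kurl.sh",
    "kurl-sh.s3.amazonaws.com",
    "registry-1.docker.io",      # example subdomain for *.docker.io
    "hub.docker.com",            # example subdomain for *.docker.com
    "raw.githubusercontent.com",
    "quay.io",
    "registry.k8s.io",
]

def unreachable(hosts, port=443, timeout=5.0):
    """Return the subset of hosts that do not accept a TCP connection on port."""
    failed = []
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass
        except OSError:
            failed.append(host)
    return failed

# blocked = unreachable(INSTALLER_ENDPOINTS)
# An empty list means all endpoints accepted a connection on port 443.
```

A plain TCP connect does not verify TLS interception or proxy rules, but it quickly surfaces endpoints blocked outright by the firewall.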

Endpoints the GPU installer script connects to

Connections to these endpoints are only needed when using a GPU with AI Fabric. All GPU installation is done through our GPU installer script in step 4, Run the AI Fabric Infrastructure Installer.

Host Name | Purpose
developer.download.nvidia.com | Downloads the GPU drivers from NVIDIA.
nvidia.github.io | Downloads https://nvidia.github.io/nvidia-docker/gpgkey and nvidia-docker.list.
raw.githubusercontent.com | The script internally downloads a YAML file from github.com/NVIDIA/k8s-device-plugin.

Endpoints connected to at runtime

At runtime, an AI Fabric instance installed via the online installer connects to the following endpoints:

Host Name | Purpose
du-metering.uipath.com | Accounts for and validates Document Understanding licenses.
registry.replicated.com | Upstream Docker images are pulled via a private Docker registry from registry.replicated.com. This domain is owned by Replicated, Inc., headquartered in Los Angeles, CA.
raw.githubusercontent.com | For scripts that update the out-of-the-box (OOB) models daily.
github.com | For scripts that update the OOB models daily.
custom (optional) | Depends on how the AI Fabric user has chosen to build their ML Packages. AI Fabric dynamically builds a container image; the dependencies for that image can be bundled within the ML Package itself (in which case no extra outbound network calls are made), or can be specified in a requirements.txt file, which can specify the location from which dependencies are downloaded.

Connectivity requirements - airgapped installation

The AI Fabric airgapped install refers to an on-premises installation triggered after a one-time download from a UiPath domain.

No internet connection is required at installation time. (Note: if the node has a GPU, this assumes that NVIDIA driver version 450.51.06 and nvidia-container-runtime have been installed, as detailed in the prerequisites for an airgapped install.)

At application runtime, whether connectivity is required is entirely up to the AI Fabric user. An AI Fabric user creates ML Packages that can be deployed and trained on AI Fabric. AI Fabric dynamically builds a container image from each ML Package. The dependencies for that image can be bundled within the ML Package itself (in which case no extra outbound network calls are made), or can be specified in a requirements.txt file. This file can specify the location from which dependencies will be downloaded, such as an internal, secure Python package repository.
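For illustration, a requirements.txt that keeps all dependency downloads inside the network perimeter might point pip at an internal mirror. The mirror URL and package pins below are hypothetical examples, not values prescribed by this guide:

```
# Hypothetical internal PyPI mirror; replace with your repository's URL.
--index-url https://pypi.internal.example.com/simple
numpy==1.19.5
pandas==1.1.5
```

With a line like this, the container image build resolves dependencies from the internal repository only, so no outbound internet access is needed at runtime.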
