Orchestrator 2022.10
Orchestrator Installation Guide
Last updated March 4, 2024

Hardware Requirements

There are multiple enterprise cloud deployment options available to host your Orchestrator, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Depending on the deployment option you choose and the size of the environment you plan to build, different hardware requirements apply.

This chapter provides insight into the hardware requirements specific to some of these scenarios.

Small to Medium Deployments

Hardware requirements differ between development and production environments. While you could use the same hardware for testing and development as for production, doing so incurs higher, unnecessary costs, especially in large-scale deployments.

Development Environments

These requirements assume a maximum of 100 Unattended Robots running simultaneously. Two machines can be used: one for Orchestrator and (optionally) Elasticsearch, and one for SQL Server, configured as follows:

Web Application Server

| CPU Cores (> 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- |
| 4 | 4 | 150 |

SQL Server

| CPU Cores (> 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- |
| 4 | 8 | 300 |

Production Environments

For production environments, it is highly recommended to provide one dedicated server for each role:

  • Orchestrator web application.
  • SQL Server Database Engine.
  • Elasticsearch and Kibana.

For a Multi-Node Installation, in addition to the above, the following is also required:

  • High Availability add-on (HAA) for Orchestrator (3+ HAA nodes are required for true high availability, and 6+ HAA nodes for geo-redundancy).
    Note:

    Multi-node Orchestrator deployments use the RESP (REdis Serialization Protocol) for communication, and thus can be configured using any solution relying on this protocol.

    HAA is the only such solution supported by UiPath.
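Orchestrator's connection to HAA (or to any other RESP-compatible solution) is defined in the UiPath.Orchestrator.dll.config file. The line below is only a hedged sketch of that setting: the LoadBalancer.Redis.ConnectionString key name, the host names, and the port are assumptions to be verified against the Orchestrator configuration reference for your version.

<add key="LoadBalancer.Redis.ConnectionString" value="haa-node1:10000,haa-node2:10000,haa-node3:10000,password=YourPassword" />

The value follows the standard Redis connection string format: a comma-separated list of host:port pairs, followed by connection options such as the password.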

The hardware configuration for each required server depends on the size of your deployment, as detailed below. The hardware requirements presented here are based on tests in which a Robot was defined as follows:

  • messages are sent from the Robot to Orchestrator at a frequency of 1 message per second
  • within 60 seconds, the Robot sends:
    • 15 message logs
    • 2 heartbeats
    • 6 get asset requests
    • 6 add queue item requests
    • 6 get queue item requests

Support up to 250 Unattended Robots

Web Application Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <20 | 4 | 4 | 100 |
| <50 | 4 | 4 | 100 |
| <100 | 4 | 4 | 150 |
| <200 | 4 | 4 | 200 |
| <250 | 4 | 4 | 200 |

Note:
For more than 200 Robots, increase the maximum number of connections allowed in the SQL connection pool to 500 in the UiPath.Orchestrator.dll.config file. To do this, add the Max Pool Size=500 parameter to the connection string, so that it looks similar to this example:

<add name="Default" providerName="System.Data.SqlClient" connectionString="Server=SQL4142;Integrated Security=True;Database=UiPath;Max Pool Size=500;" />

SQL Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <20 | 4 | 8 | 100 |
| <50 | 4 | 8 | 200 |
| <100 | 4 | 8 | 300 |
| <200 | 8 | 8 | 400 (SSD) |
| <250 | 8 | 16 | 400 (SSD) |

Disk space requirements depend heavily on:

  • Whether work queues are used. If they are, disk usage depends on the average number of transactions added daily or weekly and on the size of each transaction (the number of fields and the size of each field).
  • The retention period for successfully processed queue items (the customer should implement their own retention policy).
  • Whether messages logged by the Robots are stored in the database. If they are, a filter can be applied so that only specific log levels are stored in the database (for example, store messages with log level Error and Critical in the database, and store messages with log level Info, Warn, and Trace in Elasticsearch); a sketch of such a split is shown after this list.
  • The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
  • The retention period for old logged messages (the customer should implement their own retention policy).
  • The logging level set in the Robot. For example, if the Robot's logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.
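The database/Elasticsearch split described above is typically configured through the NLog rules in the UiPath.Orchestrator.dll.config file. The snippet below is only a hedged sketch of what such a rule split could look like; the target names (database, robotElasticBuffer) and the Robot.* logger name are assumptions based on a default Orchestrator configuration and should be checked against your own file:

<rules>
  <!-- Store only Error and Critical (Fatal) robot logs in the SQL database. -->
  <logger name="Robot.*" minlevel="Error" writeTo="database" final="false" />
  <!-- Send Info and above (Info, Warn, Error, Critical) to Elasticsearch. -->
  <logger name="Robot.*" minlevel="Info" writeTo="robotElasticBuffer" final="true" />
</rules>

Because the Error rule is not marked final, Error and Critical messages are written both to the database and to Elasticsearch.
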
Elasticsearch Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <20 | 4 | 4 | 100 |
| <50 | 4 | 4 | 100 |
| <100 | 4 | 8 | 150 |
| <200 | 4 | 12 | 200 |
| <250 | 4 | 12 | 300 |

Disk space requirements depend on:

  • The retention period (the customer should implement their own retention policy).
  • The frequency of logged messages - the Robot developer uses the Log Message activity at will, whenever they consider a message worth logging.
  • The logging level set in the Robot. For example, if the Robot's logging level is set to Info, only messages with levels Info, Warn, Error, and Critical are sent to Orchestrator; messages with levels Debug, Trace, and Verbose are ignored and never reach Orchestrator.
    Note: For more than 50 Robots, you need to instruct the Java Virtual Machine used by Elasticsearch to use 50% of the available RAM by setting both the -Xms and -Xmx arguments to half of the total amount of memory. This is done either through the ES_JAVA_OPTS environment variable or by editing the jvm.options file, as sketched below.
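For example, on an Elasticsearch server with 16 GB of RAM, the heap would be set to 8 GB. A minimal jvm.options sketch (adjust the values to half of your own server's RAM):

# jvm.options: set both the initial and the maximum heap size to half of the available RAM (16 GB server assumed)
-Xms8g
-Xmx8g

Alternatively, the same result can be achieved by setting the ES_JAVA_OPTS environment variable to "-Xms8g -Xmx8g" for the Elasticsearch service.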

Support Between 250 and 500 Unattended Robots

Web Application Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <300 | 8 | 8 | 200 |
| <400 | 8 | 8 | 220 |
| <500 | 16 | 16 | 250 |

SQL Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <300 | 16 | 32 | 400 (SSD) |
| <400 | 16 | 32 | 500 (SSD) |
| <500 | 16 | 32 | 600 (SSD) |

Note: For SQL Server Standard Edition, 16 CPU cores is the maximum the Standard edition can use. For a virtual machine, ensure that these cores are presented as 4 virtual sockets with 4 cores each (not as 2 sockets with 8 cores or 8 sockets with 2 cores). For the Enterprise edition, the socket and core combination used to reach 16 cores does not matter.

For more than 300 Robots, consider not storing all logged messages in the SQL Server database. Store only the messages with log level Error and Critical in the database, and store all messages (including Error and Critical) in Elasticsearch (see the NLog rule sketch earlier in this chapter).

Elasticsearch Server

| Number of Robots | CPU Cores (min 2 GHz) | RAM (GB) | HDD (GB) |
| --- | --- | --- | --- |
| <300 | 4 | 12 | 300 |
| <400 | 4 | 16 | 500 |
| <500 | 4 | 16 | 600 |

Large Deployments

IaaS Attended Deployments

The following section is an example of a large, scalable deployment using the Azure Infrastructure as a Service (IaaS) offerings. The configuration below was used:

Architecture

Note:

The architecture examples below contain optional and/or differing components (e.g. CyberArk, UiPath High Availability Add-on).

The Jumpbox depicted is not required but is a recommended best practice for your production environments, providing isolation and security.

Single-Node Architecture


Multi-Node Architecture


Hardware Requirements

This section describes the hardware configurations used for the performance testing listed in Scaling Your Deployment, below.

Orchestrator Nodes

Each Orchestrator node must be configured as follows:

| vCPUs | RAM (GB) | SSD (GB) |
| --- | --- | --- |
| 16 | 32 | 128 |

SQL Server

The SQL Server virtual machine specifications must scale in line with the number of Orchestrator nodes:

| Orchestrator Nodes | vCPUs | RAM (GB) | Disk |
| --- | --- | --- | --- |
| 1 - 2 | 8 | 16 | 1 TB ultra SSD disk for database, tempDB, and transactional log |
| 5 | 16 | 32 | 1 TB ultra SSD disk each for database, tempDB, and transactional log (3 disks) |
| 10 | 32 | 64 | 1 TB ultra SSD disk each for database, tempDB, and transactional log (3 disks) |
| 15 | 40 | 96 | 1 TB ultra SSD disk each for database, tempDB, and transactional log (3 disks) |

Elasticsearch Availability Set

The Elasticsearch availability set consists of 3 master nodes and 6 data nodes, for a total of 9 nodes, each with the following specifications:

| vCPUs | RAM (GB) | OS SSD (GB) | Data SSD (TB) |
| --- | --- | --- | --- |
| 8 | 16 | 128 (5,000 IOPS, 100 MB/s throughput) | 1 (5,000 IOPS, 200 MB/s throughput) |
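The 3-master/6-data split is declared per node in elasticsearch.yml. The snippet below is a minimal sketch assuming Elasticsearch 7.9 or later (where node.roles is available) and hypothetical node names; adapt it to the Elasticsearch version you deploy alongside Orchestrator:

# elasticsearch.yml on a dedicated master node (sketch)
cluster.name: orchestrator-logs
node.name: es-master-1
node.roles: [ master ]
discovery.seed_hosts: [ "es-master-1", "es-master-2", "es-master-3" ]
cluster.initial_master_nodes: [ "es-master-1", "es-master-2", "es-master-3" ]

# elasticsearch.yml on a data node (sketch): keep the same cluster.name and discovery.seed_hosts,
# but set a unique node.name and node.roles: [ data ]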

Software Requirements

The versions listed above are those used for the deployments and the performance-tested loads described.

Load Balancing

For multi-node deployments, it is recommended to use two Azure Standard load balancers:

  • One for the Orchestrator servers;
  • One for the Elasticsearch servers.

High Availability Add-on

Scaling Your Deployment

The number of nodes needed in your Orchestrator scale set depends on the number of Robots being deployed:

| Orchestrator Scale Set Nodes | No. of Robots |
| --- | --- |
| 1 | up to 6,000 |
| 2 | up to 14,000 |
| 5 | up to 80,000 |
| 10 | up to 200,000 |
| 15 | up to 300,000 |

These deployments were tested using the hardware and software configurations above and exhibited no performance loss under the loads specified below.

Performance Testing

The data displayed in the following two tables is representative of an attended deployment.

Static Data

Static Data refers to the initial Orchestrator load.

| Entity | One Node | Two Nodes | Five Nodes | Ten Nodes | Fifteen Nodes |
| --- | --- | --- | --- | --- | --- |
| Tenants | 1 | 1 | 1 | 1 | 1 |
| Folders | 1 | 2 | 4 | 4 | 6 |
| Robots | 6,000 | 14,000 | 80,000 | 200,000 | 300,000 |
| Packages | 8,000 | 16,000 | 48,000 | 48,000 | 48,000 |
| Processes | 4,000 | 8,000 | 24,000 | 24,000 | 24,000 |
| Queues | 600 | 1,200 | 1,800 | 2,400 | 3,000 |
| Queue Items | 1,120,000 | 1,500,000 | 3,000,000 | 5,000,000 | 7,000,000 |
| Assets | 500 | 1,000 | 1,500 | 3,000 | 4,500 |

Dynamic Data

Dynamic data refers to the data added to or changed in Orchestrator as processes are executed.

| Entity | One Node | Two Nodes | Five Nodes | Ten Nodes | Fifteen Nodes |
| --- | --- | --- | --- | --- | --- |
| Queue Items (per day) | 300,000 | 600,000 | 4,000,000 | 9,000,000 | 10,500,000 |
| Jobs (per minute) | 700 | 1,500 | 3,000 | 6,000 | 7,500 |
| Logs (per minute) | 20,000 | 50,000 | 300,000 | 600,000 | 800,000 |
| NuGet Downloads (maximum per minute) | 1,000 | 3,000 | 10,000 | 14,000 | 18,000 |
| Robots Connected (maximum) | 6,000 | 14,000 | 80,000 | 200,000 | 300,000 |
| Heartbeat (per minute) | 12,000 | 28,000 | 160,000 | 400,000 | 600,000 |
| Busy Robots | 3,000 | 7,000 | 40,000 | 100,000 | 150,000 |
| Available Robots | 3,000 | 7,000 | 40,000 | 100,000 | 150,000 |

PaaS Attended Deployments

The following sections offer insight into the performance capabilities of a PaaS deployment.

Architecture

The following prerequisites are needed:

  • Orchestrator:

    • Orchestrator App Service Plan: 20 P3V2 instances
    • Azure SQL Server: Premium P15, 4,000 DTUs
    • Azure Redis Cache: P2 Premium, 13 GB
  • Identity Server:

    • Identity Server App Service Plan: 2 P3V2 instances
    • Azure SQL Server: Standard S7, 800 DTUs
  • Elasticsearch:

Performance Testing

The data displayed in the following tables is representative of an attended deployment.

Static Data

Static Data refers to the initial Orchestrator load.

| Entity | One Node |
| --- | --- |
| Tenants | 1 |
| Folders | 8,000 |
| Robots | 80,000 |
| Packages | 8,000 |
| Processes | 8,000 |
| Queues | 8,000 |
| Queue Items | 2,000,000 |
| Assets | 8,000 |

Dynamic Data

Dynamic data refers to the data added to or changed in Orchestrator as processes are executed.

| Entity | One Node |
| --- | --- |
| Queue Items (per day) | 5,000,000 |
| Jobs (per minute) | 2,600 |
| Logs (per minute) | 240,000 |
| NuGet Downloads (maximum per minute) | 2,000 |
| Robots Connected (maximum) | 80,000 |
| Heartbeat (per minute) | 160,000 |
| Busy Robots | 40,000 |
| Available Robots | 40,000 |

TCP Ports

| Port | Description |
| --- | --- |
| 443 | Default port for HTTPS communication between users and Orchestrator, and between Orchestrator and the connected Robots. |
| 1433 | Default port for communication between Orchestrator and the SQL Server machine. |
| 9200 | Communication between Orchestrator and Elasticsearch. |
| 9300 | Communication between Elasticsearch nodes, if applicable. |
| 5601 | Default port used by the Kibana plugin, if applicable. |
| 3389 | Required for RDP automation, needed for High-Density Robots. |
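On Windows servers, these ports can be opened with Windows Defender Firewall rules. The PowerShell lines below are only an illustrative sketch; open on each machine only the ports relevant to its role (for example, 1433 only on the SQL Server machine), and adjust the rule names to your own conventions.

# Example: allow inbound HTTPS traffic to Orchestrator (run on the Orchestrator node)
New-NetFirewallRule -DisplayName "UiPath Orchestrator HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow

# Example: allow inbound SQL Server traffic (run on the SQL Server machine)
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow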

You can also check out hardware requirements for Studio and Robot.
