UiPath Automation Suite

UiPath Automation Suite Guide

Evaluating your storage needs

This page provides instructions on how to evaluate your storage needs for an Automation Suite cluster.

An Automation Suite cluster uses the data disks attached to its server nodes as storage resources available to all the products enabled on your cluster. Each product uses these resources differently.
To understand your storage needs and plan for them accordingly, refer to the following terminology and guidelines.

Terminology


  • Server node disk size – The combined size of all individual disks attached to a server node.
    • All servers must have the same number of disks attached.
    • Disks on each server may have different sizes as long as the sum of all disk sizes is identical on every server.
  • Total cluster disk size – The server node disk size multiplied by the number of server nodes.
  • Application available storage – The amount of storage available for applications to consume.
    • Application available storage is lower than the total cluster disk size because of the way fault resiliency and high availability are implemented in the Automation Suite cluster.
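The terminology above can be expressed as a small calculation. The following is an illustrative Python sketch, not an official UiPath tool; it simply encodes the two constraints from the list (same disk count per server, identical per-node totals):

```python
# Illustrative sketch: computing cluster disk terminology from per-node
# disk lists (sizes in GB).

def server_node_disk_size(disks_gb):
    """Combined size of all disks attached to one server node."""
    return sum(disks_gb)

def total_cluster_disk_size(nodes):
    """nodes: one disk-size list per server, e.g. [[512], [512], [512]].

    Enforces the constraints above: every server has the same number of
    disks, and the per-node disk totals are identical.
    """
    if len({len(disks) for disks in nodes}) != 1:
        raise ValueError("all servers must have the same number of disks")
    if len({server_node_disk_size(disks) for disks in nodes}) != 1:
        raise ValueError("per-node disk totals must be identical")
    return server_node_disk_size(nodes[0]) * len(nodes)

# Example: the Basic profile -- three servers with 512 GB each
print(total_cluster_disk_size([[512], [512], [512]]))  # 1536 GB (1.5 TB)
```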

The following table describes the multi-node HA-ready hardware requirements for the Basic and Complete (variant A) profiles in the context of the previously introduced terms.

| Preset hardware configuration | Number of server nodes | Server node disk size | Total cluster disk size | Application available storage (online) | Application available storage (offline) |
| --- | --- | --- | --- | --- | --- |
| Multi-node HA-ready profile – Basic product selection | 3 | 512 GB | 1.5 TB | 41 GB | 37 GB |
| Multi-node HA-ready profile – Complete product selection | 3 | 2 TB | 6 TB | 291 GB | 286 GB |

🚧

Important

To leverage the 291 GB of available storage, you must resize the PVC to 291 GB instead of the preconfigured 100 GB value. Otherwise, your applications cannot take advantage of more than 100 GB.
For instructions, see Resizing PVC.

 

Estimating the storage used by your applications


As you enable and use products on the cluster, they consume some storage from the application available storage. Products usually have a small enablement footprint as well as some usage-dependent footprint that varies depending on the use case, scale of use, and project. The storage consumption is evenly distributed across all the storage resources (data disks), and you can monitor the levels of storage utilization using the Automation Suite monitoring stack.

 

How to monitor available storage

The Automation Suite cluster uses Persistent Volumes, an internal Kubernetes concept, as an abstraction that represents the disks across all the nodes in the cluster.
To avoid instability, we recommend setting up monitoring and alerts that continuously check whether the free space on the Persistent Volumes drops below the application available storage value. For more details, see Monitoring Persistent Volumes.
If an alert triggers, you can mitigate it by increasing the storage capacity of your cluster as described in the following section.
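An alert of this kind boils down to a simple threshold check. The following is an illustrative Python sketch, not part of the Automation Suite monitoring stack; the metric names in the comment are the standard kubelet volume-stats metrics that Prometheus-based stacks scrape, and the 25% threshold is an assumed example value:

```python
# Illustrative sketch: deciding whether a Persistent Volume should trigger
# a low-storage alert. In a Prometheus-based stack, the inputs would come
# from the kubelet metrics kubelet_volume_stats_available_bytes and
# kubelet_volume_stats_capacity_bytes, e.g. the PromQL expression:
#   kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes < 0.25

def should_alert(available_bytes, capacity_bytes, min_free_ratio=0.25):
    """Return True when the volume's free-space ratio drops below the
    threshold (25% here, purely an example value)."""
    if capacity_bytes <= 0:
        raise ValueError("capacity must be positive")
    return available_bytes / capacity_bytes < min_free_ratio

GIB = 1024 ** 3
print(should_alert(10 * GIB, 100 * GIB))  # True: only 10% free
print(should_alert(40 * GIB, 100 * GIB))  # False: 40% free
```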

 

How to increase storage capacity

If the recommended hardware configuration does not meet your evaluated storage needs, you can add more storage capacity using one or both of the following methods:

  1. Add more server nodes with disks. For instructions, see Adding a new node to the cluster.

  2. Add more disks to the existing nodes. For instructions, see Extending the data disk in a single-node evaluation environment and Extending the data disk in a multi-node HA-ready production environment.

🚧

Important

For every 60 GB of product-specific storage needed, your Automation Suite cluster requires an additional 1 TB added to the total storage available on the cluster, distributed equally across your server nodes.
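The rule above makes for a quick back-of-the-envelope calculation. The following Python sketch is illustrative only; the function name and the rounding-up to whole 60 GB increments are assumptions for this example, not an official sizing tool:

```python
import math

# Illustrative sketch of the sizing rule above: every 60 GB of
# product-specific storage requires an extra 1 TB (1024 GB) of raw cluster
# storage, split equally across the server nodes.

def extra_cluster_storage_gb(product_storage_gb, num_server_nodes=3):
    increments = math.ceil(product_storage_gb / 60)  # round up to 60 GB steps
    total_extra_gb = increments * 1024
    per_node_gb = total_extra_gb / num_server_nodes
    return total_extra_gb, per_node_gb

total, per_node = extra_cluster_storage_gb(150)  # 150 GB of extra product data
print(total)     # 3072 GB (3 TB) in total ...
print(per_node)  # ... i.e. 1024 GB added to each of the 3 servers
```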

 

How to calculate your usage needs

You can estimate your storage consumption using the product-specific metrics in the following tables. These tables describe how much content you can place on your cluster out of the box. For reference, they include the storage footprint of a typical usage scenario for each product.

Basic product selection

| Product | Storage-driving metric | Storage per metric | Typical use case |
| --- | --- | --- | --- |
| Orchestrator | Size of the automation packages for deployed automations<br>Size of the storage buckets of deployed automations | MB per package<br>MB per bucket | Typically, a package is 5 MB, and buckets, if any, are under 1 MB. A mature enterprise has 5 GB of packages and 6 GB of buckets deployed. |
| Action Center | Number of documents stored by the customer in document tasks<br>Number of tasks created | GB per document in document tasks<br>Number of tasks | Typically, a document takes 0.15 MB, and the forms to fill in take an additional 0.15 KB. In a mature enterprise, this can add up to 4 GB in total. |
| Test Manager | Number of attachments and screenshots stored by users | MB of attachments and screenshots | Typically, all files and attachments add up to approximately 5 GB. |
| Insights | Enablement footprint and the number of dashboards published | GB per dashboard | 2 GB are required for enablement, and the storage footprint grows with the number of dashboards. A well-established, enterprise-scale deployment requires an additional few GB for all the dashboards. |
| Automation Hub | N/A | N/A | 2 GB fixed footprint |
| Automation Ops | N/A | N/A | No storage footprint |

Complete product selection

| Product | Storage-driving metric | Storage per metric | Typical use case |
| --- | --- | --- | --- |
| Apps | Number of apps deployed and enablement footprint | Number of apps, size of apps, size of the database supporting apps | Typically, the database takes approximately 5 GB, and a typical complex app consumes approximately 15 MB. |
| AI Center | Number of uploaded ML packages<br>Number of datasets for analysis<br>Number of published pipelines | GB per package<br>GB per dataset<br>Number of pipelines | A typical, established installation consumes 8 GB for 5 packages and an additional 1 GB for the datasets.<br>A pipeline may consume an additional 50 GB, but only while actively running. |
| Document Understanding | Size of the ML model<br>Size of the OCR model<br>Number of stored documents | GB per ML model<br>GB per OCR model<br>Number of documents stored | In a mature deployment, 12 GB go to the ML model, 17 GB to the OCR model, and 50 GB to all stored documents. |
| Task Mining | Hours of user activity analyzed to suggest automation patterns | GB per hour | Typically, about 200 GB of activity log data needs to be analyzed to suggest meaningful automations. Highly repetitive tasks, however, may require much less data. |
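Putting the typical figures from these tables together gives a rough budget check against the application available storage. The following Python sketch is illustrative only; the per-product numbers are the example values quoted above (plus an assumed few GB of Insights dashboards), not guarantees:

```python
# Illustrative sketch: summing the "typical use case" figures from the
# tables above and comparing against the application available storage of
# the Complete profile (291 GB, online install).

TYPICAL_FOOTPRINT_GB = {
    "Orchestrator": 5 + 6,            # packages + buckets
    "Action Center": 4,
    "Test Manager": 5,
    "Insights": 2 + 3,                # enablement + assumed few GB of dashboards
    "Automation Hub": 2,
    "Automation Ops": 0,
    "Apps": 5,                        # supporting database
    "AI Center": 8 + 1,               # packages + datasets (pipelines are transient)
    "Document Understanding": 12 + 17 + 50,
    "Task Mining": 200,
}

def estimated_usage_gb(products):
    return sum(TYPICAL_FOOTPRINT_GB[p] for p in products)

available_gb = 291  # Complete profile, online install
used = estimated_usage_gb(TYPICAL_FOOTPRINT_GB)
print(used)                  # 320
print(used <= available_gb)  # False: this mix would need extra storage capacity
```

A result over the available budget is exactly the case where the capacity-increase methods described earlier apply.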
