
Automation Suite on EKS/AKS Installation Guide

Last updated Dec 18, 2024

Configuring input.json

The input.json file allows you to configure the UiPath® products you want to deploy, the parameters, settings, and preferences applied to the selected products, and the settings of your cloud infrastructure. You must update this file to change the defaults and use any advanced configuration for your cluster.
Note:

Some products might have dependencies. For details, see Cross-product dependencies.

To edit input.json, you can use your favorite text editor on your client machine.
The following table describes the main input.json parameters you must update to properly configure Automation Suite. For a configuration example, refer to AKS input.json example or EKS input.json example.

General parameters

Description

kubernetes_distribution

Specify which Kubernetes distribution you use. Can be aks or eks.

install_type

Determines whether the cluster is deployed in online or offline mode. If not specified, the cluster is deployed in online mode. To deploy the cluster in offline mode, you must explicitly set the value of the install_type parameter to offline.
Possible values: online or offline
Default value: online

registries

URLs from which to pull the Docker images and Helm charts for UiPath® products and Automation Suite.

Default value: registry.uipath.com

istioMinProtocolVersion

The minimum version of the TLS protocol accepted by Istio for secure communications. It can be set to either TLSV1_2 or TLSV1_3.

fqdn

The load balancer endpoint for Automation Suite.

admin_username

The username that you would like to set as an admin for the host organization.

admin_password

The host admin password to be set.

profile

Sets the profile of the installation. The available profiles are:

  • lite: lite mode profile.
  • ha: multi-node HA-ready production profile.

For more details about managing profiles, see Profile configuration.

telemetry_optout

true or false - used to opt out of sending telemetry back to UiPath®. It is set to false by default.
If you want to opt out, then set to true.

fips_enabled_nodes

Indicate whether you want to enable FIPS 140-2 on the nodes on which you plan to install Automation Suite. Possible values are true and false.

storage_class

Specify the storage class to be used for PV provisioning. This storage class must support multiple replicas for optimum High Availability and possess backup capabilities.

For more information, refer to Block storage section.

storage_class_single_replica

Provide the storage class to be used for PV provisioning. This storage class can have a single replica for the components that do not require High Availability. The storage class can have no backup capabilities.

For more information, refer to File storage section.

The storage_class_single_replica value can be the same as the storage_class value.

exclude_components

Use this parameter to prevent non-essential components from being installed.

For details, see Bring your own components.

namespace

Specify the namespace where you want to install Automation Suite.

argocd.application_namespace

The namespace of the application you plan to install. This should ideally be the same as the namespace where you plan to install Automation Suite.

argocd.project

The ArgoCD project required to deploy Automation Suite. This is only required if you want to use the shared or the global ArgoCD instance instead of a dedicated ArgoCD instance.
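For orientation, the following is a minimal sketch of the general section of input.json. All values are illustrative placeholders that you must adapt to your environment; the storage class names in particular depend on your cluster:

{
  "kubernetes_distribution": "aks",
  "install_type": "online",
  "fqdn": "automationsuite.example.com",
  "admin_username": "admin",
  "admin_password": "<password>",
  "profile": "ha",
  "telemetry_optout": false,
  "fips_enabled_nodes": false,
  "storage_class": "managed-premium",
  "storage_class_single_replica": "azurefile-csi",
  "namespace": "uipath",
  "exclude_components": []
}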

Profile configuration

You can choose from the following profile modes when installing Automation Suite:

  • Configurable high availability profile (lite mode)

  • Standard high availability profile (HA mode)

Only in lite mode can you switch specific products to high availability (HA). This flexibility allows you to start with non-critical workloads and easily switch to HA mode when onboarding critical ones.

To enable high availability (HA) for a product, you must modify the input.json file. Specifically, you must change the profile parameter to ha for the products you want to make highly available:
...
  "automation_ops": {
    "enabled": true,
    "profile": "ha" // flag for turning on high availability
  },
  "action_center": {
    "enabled": true,
    "profile": "lite"
  },
  ...

To switch from lite mode to HA mode, take the following steps:

  1. Make sure you have enough infrastructure capacity to switch to standard HA at the platform level. By default, lite mode sets each product to one replica with horizontal pod scaling enabled.
  2. Edit the input.json configuration file and change the profile parameter for the targeted products.
  3. Run the following command:
    uipathctl manifest apply

UiPath® products

You can enable and disable products in Automation Suite at the time of installation and at any point post-installation. For more details on each product configuration, see Managing products.

Orchestrator example:

"orchestrator": {
  "enabled": true,
  "external_object_storage": {
    "bucket_name": "uipath-as-orchestrator"
  },
  "testautomation": {
    "enabled": true
  },
  "updateserver": {
    "enabled": true
  }"orchestrator": {
  "enabled": true,
  "external_object_storage": {
    "bucket_name": "uipath-as-orchestrator"
  },
  "testautomation": {
    "enabled": true
  },
  "updateserver": {
    "enabled": true
  }

Bring your own components

Automation Suite allows you to bring your own Gatekeeper and OPA Policies, Cert Manager, Istio, monitoring, logging components, and more. If you choose to exclude these components, ensure that they are available in your cluster before installing Automation Suite.

The following sample shows a list of excluded components. You can remove the components you would like Automation Suite to provision.

"exclude_components": [
        "alerts",
        "auth",
        "istio",
        "cert-manager",
        "logging",
        "monitoring",
        "gatekeeper",
        "network-policies",
        "velero"
  ]   "exclude_components": [
        "alerts",
        "auth",
        "istio",
        "cert-manager",
        "logging",
        "monitoring",
        "gatekeeper",
        "network-policies",
        "velero"
  ] 

Excluding Istio

If you bring your own Istio component, make sure to include the gateway_selector labels from your Istio gateway in the input.json file. To find your gateway selector label, take the following steps:
  1. List all the pods in the <istio-system> namespace by running the kubectl get pods -n <istio-system> command.
  2. Find the one for your Istio gateway deployment.
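For example, a minimal sketch of both steps, assuming the gateway runs in the istio-system namespace under the stock deployment name istio-ingressgateway (your namespace, deployment name, and label values may differ):

kubectl get pods -n istio-system
kubectl get deployment istio-ingressgateway -n istio-system -o jsonpath='{.spec.template.metadata.labels}'

The resulting labels would then go into input.json; the exact placement of gateway_selector shown here is an assumption based on the parameter name:

"istio": {
  "gateway_selector": {
    "istio": "ingressgateway"
  }
}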

Note: If you plan to install the WASM plugin yourself and want to avoid providing the Automation Suite installer with write access to the <istio-system> namespace, you must add the istio-configure component to the exclude_components list.

Excluding Cert Manager

If you choose to bring your own Cert Manager, and your TLS certificate is issued by a private or non-public CA, you must manually include both the leaf certificate and intermediate CA certificates in the TLS certificate file. In case of public CAs, they are automatically trusted by client systems, and no further action is required on your part.
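A minimal sketch of assembling such a file, assuming hypothetical file names leaf.crt (the leaf certificate) and intermediates.crt (the intermediate CA certificates):

# Concatenate the leaf certificate and the intermediate CA certificates
# into a single TLS certificate file, leaf certificate first.
cat leaf.crt intermediates.crt > tls.crt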

Certificate configuration

If no certificate is provided at the time of installation, the installer creates self-issued certificates and configures them in the cluster.

Note:

The certificates can be created at the time of installation only if you grant the Automation Suite installer admin privileges during the installation. If you cannot grant the installer admin privileges, then you must create and manage the certificates yourself.

For details on how to obtain a certificate, see Managing the certificates.

Note:
Make sure to specify the absolute path for the certificate files. Run pwd to get the path of the directory where the files are placed, and append the certificate file name to it in input.json.

Parameter

Description

server_certificate.ca_cert_file

Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. A CA bundle must contain only the chain certificates used to sign the TLS certificate. The chain limit is nine certificates.

If you use a self-signed certificate, you must specify the path to rootCA.crt, which you previously created. Leave blank if you want the installer to generate it.

server_certificate.tls_cert_file

Absolute path to the TLS certificate (server.crt is the self-signed certificate). Leave blank if you want the installer to generate it.
Note: If you provide the certificate yourself, the server.crt file must contain the entire chain, as shown in the following example:
-----server cert-----
-----root ca chain-----

server_certificate.tls_key_file

Absolute path to the certificate key (server.key is the self-signed certificate). Leave blank if you want the installer to generate it.

identity_certificate.token_signing_cert_file

Absolute path to the identity token signing certificate used to sign tokens (identity.pfx is the self-signed certificate). Leave blank if you want the installer to generate an identity certificate using the server certificate.

identity_certificate.token_signing_cert_pass

Plain text password set when exporting the identity token signing certificate.

additional_ca_certs

Absolute path to the file containing the additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file must be in valid PEM format.

For example, you need to provide the file containing the SQL server CA certificate if the certificate is not issued by a public certificate authority.
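A hedged sketch of these certificate parameters in input.json; the absolute paths are hypothetical placeholders:

"server_certificate": {
  "ca_cert_file": "/opt/certs/rootCA.crt",
  "tls_cert_file": "/opt/certs/server.crt",
  "tls_key_file": "/opt/certs/server.key"
},
"identity_certificate": {
  "token_signing_cert_file": "/opt/certs/identity.pfx",
  "token_signing_cert_pass": "<password>"
},
"additional_ca_certs": "/opt/certs/additional-ca.crt"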

Infrastructure prerequisites

You must provide configuration details for the prerequisites that you configured on Azure or AWS. For input.json parameter requirements, see the following prerequisite sections:

External Objectstore configuration

General configuration

Automation Suite allows you to bring your own external storage provider. You can choose from the following storage providers:

  • Azure
  • AWS
  • S3-compatible

You can configure the external object storage in one of the following ways:

  • during installation;
  • post-installation, using the input.json file.
Note:
  • For Automation Suite to function properly when using pre-signed URLs, you must make sure that your external objectstore is accessible from the Automation Suite cluster, browsers, and all your machines, including workstations and robot machines.
  • The Server Side Encryption with Key Management Service (SSE-KMS) can only be enabled on Automation Suite buckets deployed in regions created after January 30, 2014.

    SSE-KMS functionality requires pure SignV4 APIs. Regions created before January 30, 2014 do not use pure SignV4 APIs due to backward compatibility with SignV2. Therefore, SSE-KMS is only functional in regions that use SignV4 for communication. To find out when the various regions were provisioned, refer to the AWS documentation.

If you use a Private Endpoint to access the container, you must add the fqdn parameter in the input.json file and specify the Private Endpoint as its value.
The following table lists out the input.json parameters you can use to configure each provider of external object storage:

Parameter (supported providers)

Description

external_object_storage.enabled (Azure, AWS, S3-compatible)

Specify whether you would like to bring your own object store. Possible values: true and false.

external_object_storage.create_bucket (Azure, AWS, S3-compatible)

Specify whether you would like to provision the bucket. Possible values: true and false.

external_object_storage.storage_type (Azure, AWS, S3-compatible)

Specify the storage provider you would like to configure. The value is case-sensitive. Possible values: azure and s3.
Note: Many S3 objectstores require CORS to be set to allow all traffic from the Automation Suite cluster. You must configure the CORS policy at the objectstore level to allow the FQDN of the cluster.

external_object_storage.fqdn (AWS, S3-compatible)

Specify the FQDN of the S3 server. Required in the case of both the AWS instance and non-instance profiles.

external_object_storage.port (AWS, S3-compatible)

Specify the S3 port. Required in the case of both the AWS instance and non-instance profiles.

external_object_storage.region (AWS, S3-compatible)

Specify the AWS region where buckets are hosted. Required in the case of both the AWS instance and non-instance profiles.

external_object_storage.access_key (AWS, S3-compatible)

Specify the access key for the S3 account.

external_object_storage.secret_key (AWS, S3-compatible)

Specify the secret key for the S3 account. Only required in the case of the AWS non-instance profile.

external_object_storage.use_instance_profile (AWS, S3-compatible)

Specify whether you want to use an instance profile. An AWS Identity and Access Management (IAM) instance profile grants secure access to AWS resources for applications or services running on Amazon Elastic Compute Cloud (EC2) instances. If you opt for AWS S3, an instance profile allows an EC2 instance to interact with S3 buckets without the need for explicit AWS credentials (such as access keys) to be stored on the instance.

external_object_storage.bucket_name_prefix 1 (AWS, S3-compatible)

Indicate the prefix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.bucket_name_suffix 2 (AWS, S3-compatible)

Indicate the suffix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.account_key (Azure)

Specify the Azure account key.

external_object_storage.account_name (Azure)

Specify the Azure account name.

external_object_storage.azure_fqdn_suffix (Azure)

Specify the Azure FQDN suffix. Optional parameter.

external_object_storage.client_id (Azure)

Specify your Azure client ID. Only required when using managed identity.

Note: If you plan on disabling pre-signed URL access, this configuration is not supported by Task Mining and the following activities that upload or retrieve data from the objectstore:

1, 2 When configuring the external object storage, you must follow the naming rules and conventions from your provider for both bucket_name_prefix and bucket_name_suffix. In addition, the suffix and prefix must have a combined length of no more than 25 characters, and you must not end the prefix or start the suffix with a hyphen (-), as we already add the character for you automatically.

Product-specific configuration

You can use the parameters described in the General configuration section to update the general Automation Suite configuration. This means that all installed products would share the same configuration. If you want to configure one or more products differently, you can override the general configuration. You just need to specify the product(s) you want to set up external object storage for differently, and use the same parameters to define your configuration. Note that all the other installed products would continue to inherit the general configuration.

The following example shows how you can override the general configuration for Orchestrator:

"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in the case of aws non instance profile>
  "secret_key": "", // <needed in the case of aws non instance profile>
  "use_managed_identity": false, // <true/false>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "", 
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
  "client_id": "" 
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "use_managed_identity": false, // <true/false>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "", 
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
    "client_id": "" 
  }
}"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in the case of aws non instance profile>
  "secret_key": "", // <needed in the case of aws non instance profile>
  "use_managed_identity": false, // <true/false>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "", 
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
  "client_id": "" 
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "use_managed_identity": false, // <true/false>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "", 
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
    "client_id": "" 
  }
}

Rotating the blob storage credentials for Process Mining

To rotate the blob storage credentials for Process Mining in Automation Suite, you must update the stored secrets with the new credentials. See Rotating blob storage credentials.

Pre-signed URL configuration

You can use the disable_presigned_url flag to specify whether you would like to disable pre-signed URL access at global level. By default, pre-signed URLs are enabled for the entire platform. The possible values are: true and false.
{
  "disable_presigned_url" : true
}
Note:
  • You can change the default value of this parameter only for new installations. The operation is irreversible and does not apply to an existing cluster.

  • You can apply this configuration only to the entire platform. You cannot override the global configuration at product level.

External OCI-compliant registry configuration

To configure an external OCI-compliant registry, update the following parameters in the input.json file:

Keys

Value

registries.docker.url

Default value: registry.uipath.com

The URL or FQDN of the registry to be used by Automation Suite to host the container images.

registries.docker.username

registries.docker.password

Authentication information to be used for pulling the Docker images from the registry.

If one of the values is found in the input file, you must provide both of them when configuring the external registry.

registries.docker.pull_secret_value

The registry pull secret.

registries.helm.url

Default value: registry.uipath.com

The URL or FQDN of the registry to be used by Automation Suite to host the Helm chart of the service.

registries.helm.username

registries.helm.password

Authentication information to be used for pulling Helm charts from the registry.

If one of the values is found in the input file, you must provide both of them when configuring the external registry.

registry_ca_cert

The location of the CA file corresponding to the certificate configured for the registry.

If the registry certificate is signed by a private certificate authority hosted on your premises, you must provide the CA file to establish trust.

Note:
You can use different methods to generate the encoded version of the pull_secret_value, including the one using Docker.
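One hedged sketch of the Docker-based approach, assuming Docker is installed, the registry URL matches yours, and your registry accepts the base64-encoded Docker config as a pull secret (your setup may require a different method):

# Log in so that ~/.docker/config.json contains the auth entry for the registry
docker login registry.domain.io
# Base64-encode the Docker config as a single line
base64 -w 0 ~/.docker/config.json

Use the resulting string as the pull_secret_value.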

The following configuration sample shows a typical OCI-compliant registry setup:

{
    "registries": {
        "docker": {
            "url": "registry.domain.io",
            "username": "username",
            "password": "password", 
            "pull_secret_value": "pull-secret-value"
        },
        "helm": {
            "url": "registry.domain.io",
            "username": "username",
            "password": "password"
        },
        "trust": {
            "enabled": true,
            "public_key": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNE4vSzNzK2VXUTJHU3NnTTJNcUhsdEplVHlqRQp1UC9sd0dNTnNNUjhUZTI2Ui9TTlVqSVpIdnJKcEx3YmpDc0ZlZUI3L0xZaFFsQzlRdUU1WFhITDZ3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==",
            "detection_mode": false
        }
    },
    "registry_ca_cert": "/etc/pki/ca-trust/extracted/ca-bundle.trust.crt"
}
Note: For registries such as Harbor, which require using a project, you must append the project name to the registry URL. The requirement applies to the registries.docker.url and registries.helm.url parameters in the input.json file, as shown in the following example:
{
  "registries": {
    "docker": {
      "url": "registry.domain.io/myproject",
      "username": "username",
      "password": "password",
      "pull_secret_value": "pull-secret-value"
    },
    "helm": {
      "url": "registry.domain.io/myproject",
      "username": "username",
      "password": "password"
    },
    "trust": {
      "enabled": true,
      "public_key": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNE4vSzNzK2VXUTJHU3NnTTJNcUhsdEplVHlqRQp1UC9sd0dNTnNNUjhUZTI2Ui9TTlVqSVpIdnJKcEx3YmpDc0ZlZUI3L0xZaFFsQzlRdUU1WFhITDZ3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==",
      "detection_mode": false
    }
  },
  "registry_ca_cert": "/etc/pki/ca-trust/extracted/ca-bundle.trust.crt"
}

Custom namespace configuration

You can specify a single custom namespace that replaces the default uipath, uipath-check, and uipath-installer namespaces. To define a custom namespace, provide a value for the optional namespace parameter. If you do not provide a value for the namespace parameter, the default namespaces are used instead.
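For example, a minimal sketch; the namespace name is an illustrative placeholder:

{
  "namespace": "custom-uipath"
}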

Custom namespace label configuration

If you want the UiPath namespaces to contain custom namespace labels, add the following section to the input.json file. Make sure to add your own labels.
"namespace_labels": {
		"install-type": "aksoffline",
		"uipathctlversion": "rc-10_0.1",
		"updatedLabel": "rerun"
	}, "namespace_labels": {
		"install-type": "aksoffline",
		"uipathctlversion": "rc-10_0.1",
		"updatedLabel": "rerun"
	},

Custom node toleration configuration

If you need custom taints and tolerations on the nodes on which you plan to install Automation Suite, update input.json with the following flags. Make sure to provide the appropriate values to the spec field.
"tolerations": [
  {
    "spec": {
      "key": "example-key", 
      "operator": "Exists",
      "value": "optional-value",
      "effect": "NoSchedule"
    }
  },
  {
    "spec": {
      "key": "example-key2", 
      "operator": "Exists",
      "value": "optional-value2",
      "effect": "NoSchedule"
    }
  }
]"tolerations": [
  {
    "spec": {
      "key": "example-key", 
      "operator": "Exists",
      "value": "optional-value",
      "effect": "NoSchedule"
    }
  },
  {
    "spec": {
      "key": "example-key2", 
      "operator": "Exists",
      "value": "optional-value2",
      "effect": "NoSchedule"
    }
  }
]

Internal load balancer configuration

You can use an internal load balancer for your deployment in both AKS and EKS installation types. To do this, you must specify this in the ingress section of the input.json file.
The AKS internal load balancer configuration field details:

azure-load-balancer-internal

Specifies whether the load balancer is internal.

The EKS internal load balancer configuration field details:

aws-load-balancer-backend-protocol

Specifies the backend protocol.

aws-load-balancer-nlb-target-type

Specifies the target type to configure for the NLB. You can choose between instance and ip.

aws-load-balancer-scheme

Specifies whether the NLB is internet-facing or internal. Valid values are internal and internet-facing. If not specified, the default is internal.

aws-load-balancer-type

Specifies the load balancer type. The controller reconciles service resources that have this annotation set to either nlb or external.

aws-load-balancer-internal

Specifies whether the NLB is internet-facing or internal.

AKS Example

"ingress": {
    "service_annotations": {
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
    }
  }, "ingress": {
    "service_annotations": {
      "service.beta.kubernetes.io/azure-load-balancer-internal": "true"
    }
  },

EKS Example

"ingress": {
    "service_annotations": {
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
      "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
      "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
      "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
      "service.beta.kubernetes.io/aws-load-balancer-internal": "true"
    }
  },  "ingress": {
    "service_annotations": {
      "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
      "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
      "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
      "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
      "service.beta.kubernetes.io/aws-load-balancer-internal": "true"
    }
  },
For more information on creating internal load balancers in AKS and EKS, refer to the official Azure and AWS documentation.

Workload identity configuration

Workload identity is a variant of managed identity available if you use Automation Suite on AKS. Workload identity helps you avoid managing credentials by enabling pods to use a Kubernetes identity, such as a service account. Workload identity also allows Kubernetes applications to access Azure resources securely with Microsoft Entra ID, based on annotated service accounts.

To learn more about workload identity, see Use a Microsoft Entra Workload ID on AKS - Azure Kubernetes Service.

Note:

Studio Web, Insights, and Task Mining currently do not support workload identity.

To use workload identity, take the following steps:

  1. Enable workload identity and the OIDC issuer for the cluster, retrieve the OIDC issuer URL, and create a user-assigned managed identity, which will be used in the workload identity. To perform these operations, follow the instructions in Deploy and configure an AKS cluster with workload identity - Azure Kubernetes Service.
  2. Save the client ID of the user-assigned managed identity and provide it in your input.json file.
    {
      ...
      "pod_identity" : {
        "enabled": true,
        "aks_managed_identity_client_id": "<client-id>"
      }
      ...
    }
  3. Run the following script that creates the federated credentials for all the service accounts we create for Automation Suite:
    #!/bin/bash
    
    # Variables
    RESOURCE_GROUP="<resource-group-name>"
    USER_ASSIGNED_IDENTITY_NAME="<user-assigned-identity-name>"
    AKS_OIDC_ISSUER="<aks-oidc-issuer>"
    AUDIENCE="api://AzureADTokenExchange"
    
    # Helper function to create federated credentials
    create_federated_credentials() {
      local NAMESPACE=$1
      shift
      local SERVICE_ACCOUNTS=("$@")
    
      for SERVICE_ACCOUNT_NAME in "${SERVICE_ACCOUNTS[@]}"; do
        # Generate a unique federated identity credential name
        FEDERATED_IDENTITY_CREDENTIAL_NAME="${NAMESPACE}-${SERVICE_ACCOUNT_NAME}"
    
        echo "Creating federated credential for namespace: ${NAMESPACE}, service account: ${SERVICE_ACCOUNT_NAME}"
    
        # Run the Azure CLI command
        az identity federated-credential create \
          --name "${FEDERATED_IDENTITY_CREDENTIAL_NAME}" \
          --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
          --resource-group "${RESOURCE_GROUP}" \
          --issuer "${AKS_OIDC_ISSUER}" \
          --subject "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}" \
          --audience "${AUDIENCE}"
    
        if [ $? -eq 0 ]; then
          echo "Federated credential created successfully for ${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        else
          echo "Failed to create federated credential for ${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        fi
      done
    }
    
    
    # Call the function for each namespace and its service accounts
    create_federated_credentials "uipath" "default" "asrobots-sa" "taskmining-client" "dataservice-be-service-account" "dataservice-fe-service-account" "insights-adf" "du-documentmanager-service-account" "services-configure-uipath-ba" "aicenter-jobs" "aicenter-deploy" "ailoadbalancer-service-account" "airflow"
    create_federated_credentials "uipath-check" "default"
    create_federated_credentials "velero" "velero-server"#!/bin/bash
    
    # Variables
    RESOURCE_GROUP="<resource-group-name>"
    USER_ASSIGNED_IDENTITY_NAME="<user-assigned-identity-name>"
    AKS_OIDC_ISSUER="<aks-oidc-issuer>"
    AUDIENCE="api://AzureADTokenExchange"
    
    # Helper function to create federated credentials
    create_federated_credentials() {
      local NAMESPACE=$1
      shift
      local SERVICE_ACCOUNTS=("$@")
    
      for SERVICE_ACCOUNT_NAME in "${SERVICE_ACCOUNTS[@]}"; do
        # Generate a unique federated identity credential name
        FEDERATED_IDENTITY_CREDENTIAL_NAME="${NAMESPACE}-${SERVICE_ACCOUNT_NAME}"
    
        echo "Creating federated credential for namespace: ${NAMESPACE}, service account: ${SERVICE_ACCOUNT_NAME}"
    
        # Run the Azure CLI command
        az identity federated-credential create \
          --name "${FEDERATED_IDENTITY_CREDENTIAL_NAME}" \
          --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
          --resource-group "${RESOURCE_GROUP}" \
          --issuer "${AKS_OIDC_ISSUER}" \
          --subject "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT_NAME}" \
          --audience "${AUDIENCE}"
    
        if [ $? -eq 0 ]; then
          echo "Federated credential created successfully for ${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        else
          echo "Failed to create federated credential for ${NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        fi
      done
    }
    
    
    # Call the function for each namespace and its service accounts
    create_federated_credentials "uipath" "default" "asrobots-sa" "taskmining-client" "dataservice-be-service-account" "dataservice-fe-service-account" "insights-adf" "du-documentmanager-service-account" "services-configure-uipath-ba" "aicenter-jobs" "aicenter-deploy" "ailoadbalancer-service-account" "airflow"
    create_federated_credentials "uipath-check" "default"
    create_federated_credentials "velero" "velero-server"

To use workload identity with SQL, see Workload identity-based access to SQL from AKS.

To use workload identity with your storage account, see Workload identity-based access to your storage account from AKS.

Orchestrator-specific configuration

Orchestrator can save robot logs to an Elasticsearch server. You can configure this functionality in the orchestrator.orchestrator_robot_logs_elastic section. If not provided, robot logs are saved to Orchestrator's database.
The following table lists out the orchestrator.orchestrator_robot_logs_elastic parameters:

Parameter

Description

orchestrator_robot_logs_elastic

Elasticsearch configuration.

elastic_uri

The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required.

elastic_auth_username

The Elasticsearch username, used for authentication.

elastic_auth_password

The Elasticsearch password, used for authentication.
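A hedged configuration sketch combining these parameters; the URI and credentials are illustrative placeholders:

"orchestrator": {
  "orchestrator_robot_logs_elastic": {
    "elastic_uri": "https://elastic.example.com:9200",
    "elastic_auth_username": "elastic-user",
    "elastic_auth_password": "elastic-password"
  }
}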

Insights-specific configuration

If you enable Insights, you can include an SMTP server configuration that is used to send scheduled emails and alert emails. If not provided, scheduled emails and alert emails do not function.

The insights.smtp_configuration field details:

Parameter

Description

tls_version

Valid values are TLSv1_2, TLSv1_1, SSLv23. Omit the key altogether if you are not using TLS.

from_email

Address that alert/scheduled emails will be sent from.

host

Hostname of the SMTP server.

port

Port of the SMTP server.

username

Username for SMTP server authentication.

password

Password for SMTP server authentication.

enable_realtime_monitoring

Flag to enable Insights real-time monitoring. Valid values are true and false. The default value is false.

Example

"insights": {
    "enabled": true,
    "enable_realtime_monitoring": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "test@test.com",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }"insights": {
    "enabled": true,
    "enable_realtime_monitoring": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "test@test.com",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }

Process Mining-specific configuration

If you enable Process Mining, we recommend specifying a secondary SQL Server to act as a data warehouse, separate from the primary Automation Suite SQL Server. The data warehouse SQL Server is under heavy load. You can configure it in the processmining section:

Parameter

Description

sql_connection_str

DotNet formatted connection string with database set as a placeholder: Initial Catalog=DB_NAME_PLACEHOLDER.

sqlalchemy_pyodbc_sql_connection_str

Sqlalchemy PYODBC formatted connection string for custom airflow metadata database location: sqlServer:1433/DB_NAME_PLACEHOLDER.

Example:

mssql+pyodbc://testadmin%40myhost:mypassword@myhost:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES

where user is testadmin%40myhost.

Note: If the username contains an '@' character, it must be URL-encoded as %40.

Example: (SQL Server setup with Kerberos authentication)

mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes

warehouse.sql_connection_str

DotNet formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

Initial Catalog=DB_NAME_PLACEHOLDER.

warehouse.sqlalchemy_pyodbc_sql_connection_str

Sqlalchemy PYODBC formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

sqlServer:1433/DB_NAME_PLACEHOLDER.

warehouse.master_sql_connection_str

If the installer creates databases through the sql.create_db: true setting, a DotNet formatted master SQL connection string must be provided for the processmining data warehouse SQL Server. The database in the connection string must be set to master.

Sample Process Mining connection string

"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "warehouse": {
        "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='password';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
	    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
        "master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
    "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='password';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",  
},"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "warehouse": {
        "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='password';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
	    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes",
        "master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:07%3Cl%5Bxj-%3D~%3Az%60Ds%26nl@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
    "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User Id=testadmin@sfdev4515230-sql.database.windows.net;Password='password';Persist Security Info=False;Integrated Security=true;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",  
},
Attention:

When setting up Microsoft SQL Server, make sure that the timezone of the SQL Server machine hosting the Airflow database and the dedicated Process Mining database is set to UTC.

Attention:

When configuring the connection strings for the processmining data warehouse SQL Server, the named instance of the SQL Server should be omitted.

Named instances of SQL Server cannot operate on the same TCP port. Therefore, the port number alone is sufficient to distinguish between instances.

For example, use tcp:server,1445 instead of tcp:server\namedinstance,1445.
Important: Note that the name of the template PYODBC connection string, sql_connection_string_template_sqlalchemy_pyodbc, differs from the name of the PYODBC connection string used when you bring your own database, sqlalchemy_pyodbc_sql_connection_str. Similarly, the template SQL connection string sql_connection_string_template differs from sql_connection_str, which is used when you bring your own database.
Important:
If you bring your own database and you configured it using the sql_connection_str and sqlalchemy_pyodbc_sql_connection_str connection strings in the processmining section of the input.json file, the template connection strings sql_connection_string_template and sql_connection_string_template_sqlalchemy_pyodbc are ignored if specified.
Important:
You must use the default server port 1433 for Airflow database connections.

Non-standard SQL Server ports are not supported.

Automation Suite Robots-specific configuration

Automation Suite Robots can use package caching to optimize your process runs and make them run faster. NuGet packages are fetched from the filesystem instead of being downloaded from the Internet/network. This requires a minimum of 10 GiB of additional space, which must be allocated to a folder on the host machine filesystem of the dedicated nodes.

To enable package caching, you need to update the following input.json parameters:

Parameter

Default value

Description

packagecaching

true

When set to true, robots use a local cache for package resolution.

packagecachefolder

/uipath_asrobots_package_cache

The disk location on the serverless agent node where the packages are stored.
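A hedged sketch of these parameters in input.json, assuming the Automation Suite Robots section is keyed asrobots and supports an enabled flag like the other product sections in this guide:

"asrobots": {
  "enabled": true, // assumption: section key and enabled flag follow the other product sections
  "packagecaching": true,
  "packagecachefolder": "/uipath_asrobots_package_cache"
}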

AI Center-specific configuration

For AI Center to function properly, you must configure the aicenter.external_object_storage.port and aicenter.external_object_storage.fqdn parameters in the input.json file.
Note: You must configure the parameters in the aicenter section of the input.json file even if you have configured the external_object_storage section of the file.
The following sample shows a valid input.json configuration for AI Center:
"aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
..."aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
...
