Step 3: Post-deployment steps
The installation process generates self-signed certificates on your behalf. These certificates are compliant with FIPS 140-2 and will expire in 90 days. You must replace them with certificates signed by a trusted Certificate Authority (CA) as soon as installation completes. If you do not update the certificates, the installation will stop working after 90 days.
If you installed Automation Suite on a FIPS 140-2-enabled host and want to update the certificates, make sure they are compatible with FIPS 140-2.
For instructions, see Managing certificates.
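To check how much validity the current certificates have left, you can inspect the certificate served on your FQDN. A minimal sketch using openssl, replacing <fqdn> with your own value:

echo | openssl s_client -connect <fqdn>:443 -servername <fqdn> 2>/dev/null | openssl x509 -noout -enddate

The command only reads the certificate presented on port 443; it does not modify anything on the cluster.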
After completing an Automation Suite installation using the GCP deployment template, you can enable FIPS 140-2 on your machines. For instructions, see Security and compliance.
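Before or after enabling FIPS 140-2, you can confirm whether FIPS mode is active on a machine. A minimal sketch, assuming the machines run RHEL 8, where the fips-mode-setup utility is available:

fips-mode-setup --check

If the check reports that FIPS mode is disabled, you can enable it and reboot; refer to Security and compliance for the supported procedure:

sudo fips-mode-setup --enable && sudo reboot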
To get the deployment outputs, query the deployment state from the machine where you deployed the template.
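The exact retrieval steps depend on your tooling. As a hedged sketch, assuming the GCP template was deployed with Terraform (which the ${var.lb_fqdn} references in the outputs below suggest), you can list the outputs from the directory that holds the deployment's state:

terraform output
terraform output -json > outputs.json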
The outputs give you the necessary information for accessing the suite and the cluster.
The following table describes the values:

| Key | Description |
| --- | --- |
| | The fully qualified domain name provided for the installation. Make sure you use the same one when configuring the DNS. For instructions, see Configuring the DNS. |
| | The load balancer's IP address used for configuring the DNS. |
| | The IP address of the bastion VM needed to access the cluster via SSH. |
| | The deployment ID included in the name of all the resources in a deployment. |
| | The URL to the secret containing the credentials for the database. |
| as_host_credentials | The URL to the secret containing the credentials for the Host organization in the Automation Suite portal. |
| as_default_credentials | The URL to the secret containing the credentials for the Default organization in the Automation Suite portal. |
| argocd_credentials | The URL to the secret containing the credentials for the ArgoCD console used to manage the installed products. |
| | The URL to the Longhorn monitoring tools: https://monitoring.${var.lb_fqdn} |
| | The URL to the Grafana monitoring tools: https://monitoring.${var.lb_fqdn}/grafana |
| | The URL to the Prometheus monitoring tools: https://monitoring.${var.lb_fqdn}/prometheus |
| | The URL to the Alertmanager monitoring tools: https://monitoring.${var.lb_fqdn}/alertmanager |
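The credential entries above point to stored secrets. As a hedged sketch, assuming the secret URLs refer to GCP Secret Manager entries and that you can read the secret's name from its URL (the name below is a hypothetical placeholder), you can retrieve a value with the gcloud CLI:

gcloud secrets versions access latest --secret="<secret-name>"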
The Cluster Administration portal is a centralized location where you can find all the resources required to complete an Automation Suite installation and perform common post-installation operations. For details, see Getting started with the Cluster Administration portal.
To access the Cluster Administration portal, go to the following URL:
https://${CONFIG_CLUSTER_FQDN}/uipath-management
To access the services, you must have a DNS configured. See Configuring the DNS in a single-node evaluation setup or Configuring the DNS in a multi-node HA-ready production setup for details.
Alternatively, you can follow the instructions in Configuring a client machine to access the cluster for testing purposes only.
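For testing purposes, configuring a client machine typically amounts to resolving the FQDN locally instead of through DNS. A minimal sketch, assuming a Linux client, using the load balancer IP from the deployment outputs and the subdomains referenced in this document:

sudo tee -a /etc/hosts <<EOF
<load-balancer-ip> <fqdn> alm.<fqdn> monitoring.<fqdn>
EOF

Remove these entries once a real DNS configuration is in place.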
If you are using a self-signed certificate, your browser may show a certificate error.
Click Proceed to…, then update the cluster certificates as explained in Configuring the certificates in a single-node evaluation setup or Configuring the certificates in a multi-node HA-ready production setup.
You can access the Automation Suite portal at:
https://<fqdn>
You can get the credentials via a secret available at:
- the as_host_credentials URL for the Host organization;
- the as_default_credentials URL for the Default organization.
You can access the ArgoCD console at:
https://alm.<fqdn>
You can get the credentials via a secret available at the argocd_credentials URL.
To access the monitoring tools for the first time, log in as an admin with the following default credentials:
- Username: admin
- Password: to retrieve the password, run the following command:
kubectl get secrets/dex-static-credential -n uipath-auth -o "jsonpath={.data['password']}" | base64 -d
To update the default password used for accessing the monitoring tools, take the following steps:
1. Run the following command, replacing newpassword with your new password:
password="newpassword"
password=$(echo -n $password | base64)
kubectl patch secret dex-static-credential -n uipath-auth --type='json' -p="[{'op': 'replace', 'path': '/data/password', 'value': '$password'}]"
2. Run the following command, replacing <cluster_config.json> with the path to your configuration file:
/opt/UiPathAutomationSuite/UiPath_Installer/install-uipath.sh -i <cluster_config.json> -f -o output.json --accept-license-agreement
Use the GCP console to edit the number of nodes (server or agent nodes).
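If the template manages the nodes through managed instance groups, which is an assumption you should verify against your deployment's resources before resizing anything, the CLI equivalent is a gcloud resize call. A hedged sketch, with all names as placeholders:

gcloud compute instance-groups managed resize <node-instance-group> --size=<new-node-count> --region=<region>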