Step 1.2: Configuring the VM
To connect to the machine using SSH, follow the Azure instructions. Alternatively, log in to the machine from your terminal using the following SSH commands:

If you set a password:

```bash
ssh <user>@<dns_of_vm>
```

If you used an SSH key:

```bash
ssh -i <.\Path\To\myKey1.pem> <user>@<dns_of_vm>
```
The disk device name is different from the disk name. You will need the disk device name when configuring the disk.
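To identify the device name, you can list the block devices on the VM. This is a suggestion rather than an official step; `lsblk` is one common way to do it:

```bash
# List block devices with their size, type, and mount point.
# The attached Azure data disk appears as a device such as sdc.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```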
To configure the disk for installation, see the following:
You need to mark the Azure disk as SSD by running the following commands:

```bash
# Mark the device as non-rotational (SSD) for the current session
echo "0" > "/sys/block/{DEVICE_NAME}/queue/rotational"

# Add a udev rule so the setting persists across reboots
echo "KERNEL==\"{DEVICE_NAME}\", ATTR{queue/rotational}=\"0\"" >> "/etc/udev/rules.d/99-azure-mark-ssd.rules"

# Reload the udev rules and apply them to existing devices
udevadm control --reload
udevadm trigger
```
echo "0" > "/sys/block/{DEVICE_NAME}/queue/rotational"
echo "KERNEL==\"{DEVICE_NAME}\", ATTR{queue/rotational}=\"0\"" >> "/etc/udev/rules.d/99-azure-mark-ssd.rules"
udevadm control --reload
udevadm trigger
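As an optional sanity check (not part of the official procedure), you can confirm that the kernel now reports the device as non-rotational:

```bash
# Should print 0 if the disk is marked as SSD
cat "/sys/block/{DEVICE_NAME}/queue/rotational"
```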
These additional inbound ports are needed only for multi-node HA-ready production installations. Add them to all VMs.
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 443 | TCP | Any | Any | HTTPS traffic |
| 2379 | TCP | VirtualNetwork | VirtualNetwork | etcd client port |
| 2380 | TCP | VirtualNetwork | VirtualNetwork | etcd peer port |
| 6443 | TCP | Any | Any | Kubernetes API |
| 8472 | UDP | VirtualNetwork | VirtualNetwork | Flannel |
| 9345 | TCP | Any | Any | Kubernetes API |
| 10250 | TCP | VirtualNetwork | VirtualNetwork | kubelet |
| 30071 | TCP | VirtualNetwork | VirtualNetwork | NodePort |
Opening TCP ports on an Azure VM for multi-node installations

Create a new inbound networking rule for each of the ports in the table that use the TCP protocol, as shown in the sketch below.
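The following is a minimal sketch using the Azure CLI, assuming the VMs share a single network security group; the resource group, NSG name, rule name, and priority are placeholders you must adapt to your environment:

```bash
# Hypothetical example: allow inbound TCP traffic to the Kubernetes API ports
# from any source. Repeat the command (or extend the port list) for the other
# TCP ports, restricting the source address prefixes to the virtual network
# where the table above requires it.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name Allow-Kubernetes-API \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 6443 9345 \
  --source-address-prefixes '*'
```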