Step 2: Configuring the Load Balancer
This step is mandatory for a multi-node HA-ready production deployment or a single-node evaluation deployment with a dedicated Task Mining and/or GPU node.
If you are configuring a single-node evaluation deployment without a dedicated Task Mining and/or GPU node, proceed to Configuring Azure SQL.
If you are using the Azure Internal Load Balancer (LB) for deployments, calls from a backend Virtual Machine (VM) to the LB frontend IP can fail. The failure is caused by a source IP and MAC address mismatch in the network packet, which prevents the recipient from determining the correct response path. For more details, see Azure Load Balancer Components limitations and Backend Traffic Troubleshooting.
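If you need to confirm whether a deployment is affected by this limitation, a quick connectivity check from one of the backend VMs can help. The following is a minimal sketch, not an official diagnostic; the frontend IP address and port used here are placeholders that you must replace with your own values.

```python
# Minimal connectivity check from a backend VM to the load balancer frontend IP.
# LB_FRONTEND_IP and PORT are placeholders; replace them with your values.
import socket

LB_FRONTEND_IP = "10.0.0.100"   # placeholder: your LB frontend IP
PORT = 443                      # HTTPS port exposed by the load balancer
TIMEOUT_SECONDS = 5

try:
    with socket.create_connection((LB_FRONTEND_IP, PORT), timeout=TIMEOUT_SECONDS):
        print(f"TCP connection to {LB_FRONTEND_IP}:{PORT} succeeded")
except OSError as exc:
    # A timeout here, while external clients can reach the same frontend IP,
    # is consistent with the source IP/MAC mismatch limitation described above.
    print(f"TCP connection to {LB_FRONTEND_IP}:{PORT} failed: {exc}")
```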
If you are installing Task Mining, do not add the dedicated Task Mining node to the Node Pool; the same applies to the Task Mining Analyzer node.
Make sure to repeat these steps for all virtual networks corresponding to each node.
The recommended configuration is to add two backend pools, as follows:
- one backend pool that includes all server nodes (referred to as the Server Pool);
- one backend pool that includes all server and agent nodes, except the Task Mining node (referred to as the Node Pool in this documentation).
The Server Pool is associated with the kubeapi-probe health probe, whereas the Node Pool is associated with the https-probe. Load balancing rules must be created for ports 443, 6443, and 9345.
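If you prefer to script this configuration rather than use the Azure portal, the sketch below shows one possible approach using the Azure CLI invoked from Python. All names (resource group, load balancer, frontend IP configuration, pool and rule names) are illustrative placeholders, and the port-to-pool mapping shown simply follows the probe associations described above; verify it against the deployment template you use before applying.

```python
# Sketch: create the backend pools, health probes, and load balancing rules
# with the Azure CLI. All names below (resource group, LB, frontend IP
# configuration, pools, rules) are illustrative placeholders.
import subprocess

RG, LB, FRONTEND = "my-rg", "my-lb", "my-frontend-config"

def az(*args: str) -> None:
    """Run an az CLI command and fail fast on errors."""
    subprocess.run(["az", *args], check=True)

# Backend pools: one for server nodes only, one for server + agent nodes.
for pool in ("ServerPool", "NodePool"):
    az("network", "lb", "address-pool", "create",
       "--resource-group", RG, "--lb-name", LB, "--name", pool)

# Health probes: kubeapi-probe (port 6443) and https-probe (port 443).
for probe, port in (("kubeapi-probe", "6443"), ("https-probe", "443")):
    az("network", "lb", "probe", "create",
       "--resource-group", RG, "--lb-name", LB, "--name", probe,
       "--protocol", "Tcp", "--port", port)

# Load balancing rules for ports 443, 6443, and 9345. The mapping below is an
# assumption derived from the probe associations: 443 targets the Node Pool
# with https-probe; 6443 and 9345 target the Server Pool with kubeapi-probe.
rules = [
    ("rule-443", "443", "NodePool", "https-probe"),
    ("rule-6443", "6443", "ServerPool", "kubeapi-probe"),
    ("rule-9345", "9345", "ServerPool", "kubeapi-probe"),
]
for name, port, pool, probe in rules:
    az("network", "lb", "rule", "create",
       "--resource-group", RG, "--lb-name", LB, "--name", name,
       "--protocol", "Tcp", "--frontend-port", port, "--backend-port", port,
       "--frontend-ip-name", FRONTEND,
       "--backend-pool-name", pool, "--probe-name", probe)
```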