Step 1.2: Configuring the VM
- To connect to the machine using SSH, follow the Azure instructions.

Alternatively, you can connect to the machine from your terminal using SSH:

If you set a password:
ssh <user>@<dns_of_vm>

If you used an SSH key:
ssh -i <.\Path\To\myKey1.pem> <user>@<dns_of_vm>
Note that the disk device name differs from the disk name shown in the Azure portal. You need the device name when configuring the disk.
To configure the disk for installation, see the disk configuration instructions.
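As a quick way to identify the device name on the VM, you can list the block devices. This is a minimal sketch using the standard util-linux `lsblk` tool; match the disk by its size to find the device name (for example, a path such as /dev/sdc):

```shell
# List block devices with name, size, type, and mount point.
# The unmounted disk matching your data disk's size is the device to configure.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```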
These additional inbound ports are needed only for multi-node HA-ready production installations. Add them to all VMs.
Port | Protocol | Source | Destination | Purpose
---|---|---|---|---
443 | TCP | Any | Any | HTTPS traffic
2379 | TCP | VirtualNetwork | VirtualNetwork | etcd client port
2380 | TCP | VirtualNetwork | VirtualNetwork | etcd peer port
6443 | TCP | Any | Any | Kubernetes API
8472 | UDP | VirtualNetwork | VirtualNetwork | Flannel
9345 | TCP | Any | Any | Kubernetes API
10250 | TCP | VirtualNetwork | VirtualNetwork | kubelet
30071 | TCP | VirtualNetwork | VirtualNetwork | NodePort
Opening TCP ports on an Azure VM for multi-node installations
Create a new inbound networking rule for each of the required TCP ports.
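As an illustrative sketch (not part of the original instructions), the inbound TCP rules could also be created with the Azure CLI. The resource group and VM names (`myResourceGroup`, `myVM`) and the rule-priority scheme are placeholder assumptions; the loop only prints the commands so you can review them, and you would remove the `echo` to execute them:

```shell
# Print an `az vm open-port` command for each required TCP port.
# Priorities must be unique per rule, so increment them as we go.
priority=1000
for port in 443 2379 2380 6443 9345 10250 30071; do
  echo az vm open-port --resource-group myResourceGroup --name myVM \
    --port "$port" --priority "$priority"
  priority=$((priority + 10))
done
```

Note that UDP port 8472 (Flannel) is not covered by `az vm open-port`'s default TCP-only behavior for this sketch; for non-TCP rules, create the network security group rule explicitly in the portal or with `az network nsg rule create`.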