Step 1: Preparing the Azure Deployment
This page explains how to prepare your Azure environment.
The deployment requires access to an Azure subscription and the Owner RBAC role on a Resource Group.
You can verify your role assignment by navigating to:
Resource Group → Access Control (IAM) → Check Access → View My Access
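The same check can be scripted with the Azure CLI; a minimal sketch, assuming you are already logged in with `az login` and that `MyResourceGroup` is a placeholder for your actual Resource Group name:

```shell
# List the RBAC roles assigned to the signed-in user on the Resource Group.
# An "Owner" entry in the output confirms the required role.
az role assignment list \
  --assignee "$(az ad signed-in-user show --query id -o tsv)" \
  --resource-group MyResourceGroup \
  --query "[].roleDefinitionName" -o tsv
```

If `Owner` does not appear in the output, ask a subscription administrator to grant the role before starting the deployment.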
The deployment provisions a number of Standard_D (general purpose), Standard_E (memory optimized), and/or Standard_NC (GPU-enabled) VMs. Each Azure subscription has a quota on the number of cores that can be provisioned per VM family.
Check the subscription quota by going to Usage + quotas in the Azure portal.
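The quota check can also be done from the Azure CLI; a sketch assuming the `westeurope` region as an example:

```shell
# Show current usage vs. limit for compute quotas in a region,
# filtered to the VM families the templates deploy.
az vm list-usage --location westeurope -o table \
  | grep -Ei 'Standard D|Standard E|Standard NC|Total Regional'
```

Compare the `CurrentValue` and `Limit` columns: the difference must cover the total core count of the VMs you plan to deploy.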
Make sure that the VM SKUs are available in the region where you deploy.
You can check availability at Azure Products by Region.
By default, the templates deploy the VMs across as many Azure Availability Zones as possible, to make a multi-node HA-ready production cluster resilient to zonal failures.
Not all Azure regions support Availability Zones. See Azure Geographies for details.
VM SKUs may have additional Availability Zone restrictions, which you can check with the Get-AzComputeResourceSku PowerShell cmdlet.
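As an alternative to the PowerShell cmdlet, the Azure CLI can list per-zone availability of a SKU; a sketch assuming the `Standard_D8s_v3` size and the `westeurope` region as examples:

```shell
# List the Availability Zones in which matching SKUs can be provisioned.
# An empty "Zones" value means the SKU is zonally restricted for this
# subscription, or the region does not support zones.
az vm list-skus --location westeurope --size Standard_D8s_v3 \
  --query "[].{name:name, zones:locationInfo[0].zones}" -o table
```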
The cluster is considered resilient to zonal failures if the server nodes are spread across three Azure Availability Zones. If the Azure region does not support Availability Zones for the VM type selected for the servers, the deployment continues without zone resilience.
The template provisions an Azure Load Balancer with a public IP and a DNS label, in the following format, to access the services:
<dnsName>.<regionName>.cloudapp.azure.com
The DNS server of the Virtual Network must be set to Azure-provided (or 168.63.129.16).
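Once the deployment completes, you can confirm that the DNS label resolves to the load balancer's public IP; a sketch using the placeholder FQDN from above (substitute your actual `dnsName` and `regionName`):

```shell
# Resolve the load balancer's public FQDN.
nslookup <dnsName>.<regionName>.cloudapp.azure.com
```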
If you want to access the cluster over the internet, see Step 3: Post-deployment Steps.
The template allows you to deploy the nodes in an existing Virtual Network. However, the Virtual Network must have a subnet that meets the following requirements:
- has enough free address space to accommodate all the nodes and the internal load balancer;
- has outbound connectivity, configured through a NAT gateway;
- allows HTTPS traffic on port 443.
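The subnet requirements above can be inspected with the Azure CLI; a sketch assuming the hypothetical names `MyResourceGroup`, `MyVNet`, and `MySubnet`:

```shell
# Inspect the subnet's address space and NAT gateway association.
# A null natGateway value means outbound connectivity is not yet
# configured through a NAT gateway.
az network vnet subnet show \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name MySubnet \
  --query "{addressPrefix:addressPrefix, natGateway:natGateway.id}" -o json
```

Port 443 traffic is governed by the Network Security Group attached to the subnet or NICs; verify its inbound rules separately with `az network nsg rule list`.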