Troubleshooting
Health check of Automation Suite robots fails

Description

After installing Automation Suite on AKS, when you check the health status of the Automation Suite robots pod, it returns an unhealthy status: "[POD_UNHEALTHY] Pod asrobots-migrations-cvzfn in namespace uipath is in Failed status".
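To investigate further, you can run the generic Kubernetes triage commands below; the pod name comes from the error message above, and this is general inspection, not a documented fix:

kubectl -n uipath describe pod asrobots-migrations-cvzfn
kubectl -n uipath logs asrobots-migrations-cvzfn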
The backup setup does not work due to a failure to connect to Azure Government

Description

Following an Automation Suite on AKS installation or upgrade, the backup setup does not work because of a failure to connect to Azure Government.

Solution

You can fix the issue by taking the following steps:

- Create a file named velerosecrets.txt, with the following contents:

AZURE_CLIENT_SECRET=<secretforserviceprincipal>
AZURE_CLIENT_ID=<clientidforserviceprincipal>
AZURE_TENANT_ID=<tenantidforserviceprincipal>
AZURE_SUBSCRIPTION_ID=<subscriptionidforserviceprincipal>
AZURE_CLOUD_NAME=AzureUSGovernmentCloud
AZURE_RESOURCE_GROUP=<infraresourcegroupoftheakscluster>

- Encode the data in the velerosecrets.txt file as Base64:

export b64velerodata=$(cat velerosecrets.txt | base64)

- Update the velero-azure secret in the velero namespace, as shown in the following example:

apiVersion: v1
kind: Secret
metadata:
  name: velero-azure
  namespace: velero
data:
  cloud: <insert the $b64velerodata value here>

- Restart the velero deployment:

kubectl rollout restart deploy -n velero
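To confirm the update took effect, you can decode the secret back and check the Velero pods; this is a generic verification sketch, not part of the documented procedure:

# Decode the secret and confirm it matches velerosecrets.txt
kubectl get secret velero-azure -n velero -o jsonpath='{.data.cloud}' | base64 -d
# Confirm the Velero pods restarted cleanly
kubectl get pods -n velero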
Pods in the uipath namespace stuck when enabling custom node taints

Description

Pods in the uipath namespace are not running when custom node taints are enabled. The pods cannot communicate with the admctl webhook that injects pod tolerations in an EKS environment.

Solution

To fix this issue, allow ingress traffic to the admctl webhook from the cluster pod CIDR or 0.0.0.0/0, as shown in the following example:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-ingress-to-admctl
  namespace: uipath
spec:
  podSelector:
    matchLabels:
      app: admctl-webhook
  ingress:
  - from:
    - ipBlock:
        cidr: <cluster-pod-cidr> # or "0.0.0.0/0"
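Assuming you saved the policy as allow-admctl.yaml (a hypothetical file name), you can apply and verify it as follows:

kubectl apply -f allow-admctl.yaml
kubectl get networkpolicy -n uipath allow-all-ingress-to-admctl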
Pods cannot communicate with FQDN in a proxy environment

Description

Pods cannot communicate with the FQDN in a proxy environment, and the following error is displayed:

System.Net.Http.HttpRequestException: The proxy tunnel request to proxy 'http://<proxyFQDN>:8080/' failed with status code '404'.

Solution

To fix this issue, create a ServiceEntry, as shown in the following example:

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: proxy
  namespace: uipath
spec:
  hosts:
  - <proxy-host>
  addresses:
  - <proxy-ip>/32
  ports:
  - number: <proxy-port>
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
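Assuming the manifest is saved as proxy-serviceentry.yaml (a hypothetical file name), apply it and confirm that Istio registered the entry:

kubectl apply -f proxy-serviceentry.yaml
kubectl get serviceentry -n uipath proxy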
Provisioning Automation Suite Robots fails

Description

Provisioning Automation Suite Robots fails because the creation of the asrobots-pvc-package-cache PVC fails. The failure occurs mainly on FIPS-enabled nodes when using Azure Files with the NFS protocol.

Potential issue

This happens because the AKS cluster cannot connect to Azure Files. For example, the following error message may be displayed:

failed to provision volume with StorageClass "azurefile-csi-nfs": rpc error: code = Internal desc = update service endpoints failed with error: failed to get the subnet ci-asaks4421698 under vnet ci-asaks4421698: &{false 403 0001-01-01 00:00:00 +0000 UTC {"error":{"code":"AuthorizationFailed","message":"The client '4c200854-2a79-4893-9432-3111795beea0' with object id '4c200854-2a79-4893-9432-3111795beea0' does not have authorization to perform action 'Microsoft.Network/virtualNetworks/subnets/read' over scope '/subscriptions/64fdac10-935b-40e6-bf28-f7dc093f7f76/resourceGroups/ci-asaks4421698/providers/Microsoft.Network/virtualNetworks/ci-asaks4421698/subnets/ci-asaks4421698' or the scope is invalid. If access was recently granted, please refresh your credentials."}}}

Solution

To overcome this issue, you need to grant Automation Suite access to the Azure resource:

- In Azure, navigate to the AKS resource group, then open the desired virtual network page. For example, in this case, the virtual network is ci-asaks4421698.
- From the Subnets list, select the desired subnet. For example, in this case, the subnet is ci-asaks4421698.
- At the top of the subnets list, click Manage Users. The Access Control page opens.
- Click Add role assignment.
- Search for the Network Contributor role.
- Switch to the Members tab.
- Select Managed Identity, then select Kubernetes Service.
- Select the name of the AKS cluster.
- Click Review and Assign.
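If you prefer to script the role assignment, an equivalent Azure CLI sketch is shown below. It assumes the AKS cluster uses a system-assigned managed identity; the resource-group and cluster names are placeholders you must replace:

# Resource ID of the subnet that needs the role assignment
SUBNET_ID=$(az network vnet subnet show \
  --resource-group <infra-resource-group> \
  --vnet-name ci-asaks4421698 \
  --name ci-asaks4421698 \
  --query id -o tsv)

# Object ID of the cluster's system-assigned managed identity
IDENTITY_ID=$(az aks show \
  --resource-group <aks-resource-group> \
  --name <aks-cluster-name> \
  --query identity.principalId -o tsv)

# Grant the Network Contributor role on the subnet
az role assignment create \
  --role "Network Contributor" \
  --assignee-object-id "$IDENTITY_ID" \
  --assignee-principal-type ServicePrincipal \
  --scope "$SUBNET_ID"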
AI Center provisioning failure after upgrading to 2023.10

Description

When upgrading from 2023.4.3 to 2023.10, you run into issues with provisioning AI Center. The logs contain the following error:

"exception":"sun.security.pkcs11.wrapper.PKCS11Exception: CKR_KEY_SIZE_RANGE
Unable to launch Automation Hub and Apps with proxy setup

Description

In a proxy setup, you may be unable to launch Automation Hub and Apps.

Solution

You can fix the issue by taking the following steps:

- Capture the existing coredns configmap from the running cluster:

kubectl get configmap -n kube-system coredns -o yaml > coredns-config.yaml

- Edit the coredns-config.yaml file to append the fqdn rewrite to the config.

- Rename the configmap to coredns-custom.

- Add the following code block to your coredns-config.yaml file. Make sure the code block comes before the kubernetes cluster.local in-addr.arpa ip6.arpa line:

rewrite stop {
  name exact <cluster-fqdn> istio-ingressgateway.istio-system.svc.cluster.local
}

- Replace <cluster-fqdn> with the actual value. The resulting file should look similar to the following example:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        rewrite stop {
          name exact mycluster.autosuite.com istio-ingressgateway.istio-system.svc.cluster.local
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system

- Create the coredns-custom configmap:

kubectl apply -f coredns-config.yaml

- Replace the volume reference from coredns to coredns-custom in the coredns deployment in the kube-system namespace:

volumes:
- emptyDir: {}
  name: tmp
- configMap:
    defaultMode: 420
    items:
    - key: Corefile
      path: Corefile
    name: coredns-custom
  name: config-volume

- Restart the coredns deployment and ensure the coredns pods are up and running without any issues:

kubectl rollout restart deployment -n kube-system coredns

You should now be able to launch Automation Hub and Apps.
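To verify that the rewrite is active, you can resolve the cluster FQDN from inside a throwaway pod; the FQDN below is the example value used in the configmap above, so substitute your own:

kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup mycluster.autosuite.com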
Installation fails when Velero is enabled

Description

The Automation Suite installation fails when Velero is enabled.

Solution

To fix the issue, take the following steps:

- Make sure Helm 3.14 runs on the jumpbox or laptop used for installing Automation Suite.

- Extract the configuration values of the failed Helm chart, which in this case is Velero:

helm -n velero get values velero > customvals.yaml

- Add the missing image pull secret in the customvals.yaml file, under the .image.imagePullSecrets path:

image:
  imagePullSecrets:
  - uipathpullsecret

- If Velero has already been installed, uninstall it:

helm uninstall -n velero velero

- Create a new file called velerosecrets.txt. Populate it with your specific information, as shown in the following example:

AZURE_CLIENT_SECRET=<secretforserviceprincipal>
AZURE_CLIENT_ID=<clientidforserviceprincipal>
AZURE_TENANT_ID=<tenantidforserviceprincipal>
AZURE_SUBSCRIPTION_ID=<subscriptionidforserviceprincipal>
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_RESOURCE_GROUP=<infraresourcegroupoftheakscluster>

- Encode the velerosecrets.txt file:

export b64velerodata=$(cat velerosecrets.txt | base64)

- Create the velero-azure secret in the velero namespace. Include the following content:

apiVersion: v1
kind: Secret
metadata:
  name: velero-azure
  namespace: velero
data:
  cloud: <put the $b64velerodata value here>

- Reinstall Velero:

helm install velero -n velero <path to velero - 3.1.6 helm chart tgz> -f customvals.yaml
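After the reinstall, you can confirm the release and its pods with the generic checks below; these are not part of the documented procedure:

helm status velero -n velero
kubectl get pods -n velero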