- Overview
- Requirements
- Installation
- Post-installation
- Cluster administration
- Monitoring and alerting
- Migration and upgrade
- Product-specific configuration
- Best practices and maintenance
- Troubleshooting
- How to Troubleshoot Services During Installation
- How to Uninstall the Cluster
- How to clean up offline artifacts to improve disk space
- How to Disable TLS 1.0 and 1.1
- How to enable Istio logging
- How to manually clean up logs
- How to clean up old logs stored in the sf-logs bundle
- How to debug failed Automation Suite installations
- How to disable NIC checksum offloading
- Unable to run an offline installation on RHEL 8.4 OS
- Error in Downloading the Bundle
- Offline installation fails because of missing binary
- Certificate issue in offline installation
- SQL connection string validation error
- Failure After Certificate Update
- Automation Suite Requires Backlog_wait_time to Be Set to 1
- Cannot Log in After Migration
- Setting a timeout interval for the management portals
- Update the underlying directory connections
- Kinit: Cannot Find KDC for Realm <AD Domain> While Getting Initial Credentials
- Kinit: Keytab Contains No Suitable Keys for *** While Getting Initial Credentials
- GSSAPI Operation Failed With Error: An Invalid Status Code Was Supplied (Client's Credentials Have Been Revoked).
- Login Failed for User <ADDOMAIN>\<aduser>. Reason: The Account Is Disabled.
- Alarm Received for Failed Kerberos-tgt-update Job
- SSPI Provider: Server Not Found in Kerberos Database
- Failure to get the sandbox image
- Pods not showing in ArgoCD UI
- Redis Probe Failure
- RKE2 Server Fails to Start
- Secret Not Found in UiPath Namespace
- ArgoCD goes into progressing state after first installation
- Unexpected Inconsistency; Run Fsck Manually
- Missing Self-heal-operator and Sf-k8-utils Repo
- Degraded MongoDB or Business Applications After Cluster Restore
- Unhealthy Services After Cluster Restore or Rollback
- Using the Automation Suite Diagnostics Tool
- Using the Automation Suite Support Bundle Tool
- Exploring Logs
Unhealthy Services After Cluster Restore or Rollback
Following a cluster restore or rollback, AI Center, Orchestrator, Platform, Document Understanding, or Task Mining might be unhealthy, with the RabbitMQ pod logs showing the following error:
[root@server0 UiPathAutomationSuite]# k -n rabbitmq logs rabbitmq-server-0
2022-10-29 07:38:49.146614+00:00 [info] <0.9223.362> accepting AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672)
2022-10-29 07:38:49.147411+00:00 [info] <0.9223.362> Connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#77049094:2100
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> Error on AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:49.147922+00:00 [info] <0.9223.362> closing AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672 - rabbitConnectionFactory#77049094:2100)
2022-10-29 07:38:55.818447+00:00 [info] <0.9533.362> accepting AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672)
2022-10-29 07:38:55.821662+00:00 [info] <0.9533.362> Connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#2100d047:4057
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> Error on AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:55.822447+00:00 [info] <0.9533.362> closing AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672 - rabbitConnectionFactory#2100d047:4057)
Additionally, some RabbitMQ pods might be stuck in CrashLoopBackOff state due to the Mnesia table data write issue.
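To determine which of the two recovery paths below applies, first check the state of the RabbitMQ pods:
kubectl -n rabbitmq get pods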
If all pods are running, take the following steps:
- Delete the users in RabbitMQ:
kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl list_users -s --formatter json | jq '.[]|.user' | grep -v default_user | xargs -I{} kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl delete_user {}
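To confirm that only the operator-managed default user remains, you can list the users again:
kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl list_users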
- Delete RabbitMQ application secrets in the UiPath namespace:
kubectl -n uipath get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n uipath delete secret {}
- Delete RabbitMQ application secrets in the RabbitMQ namespace:
kubectl -n rabbitmq get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n rabbitmq delete secret {}
- Sync the sfcore application via ArgoCD and wait for the sync to complete.
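If you prefer the command line to the ArgoCD UI, the same sync can be triggered with the argocd CLI; this is a sketch that assumes you are already logged in to the ArgoCD server:
argocd app sync sfcore
argocd app wait sfcore --health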
- Perform a rollout restart on all applications in the UiPath namespace:
kubectl -n uipath rollout restart deploy
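You can then watch the pods come back up before moving on:
kubectl -n uipath get pods -w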
If some pods are in CrashLoopBackOff state, take the following steps:
- Identify which RabbitMQ pods are stuck in CrashLoopBackOff state, and check the logs of those pods:
kubectl -n rabbitmq get pods
kubectl -n rabbitmq logs <CrashLoopBackOff-Pod-Name>
- Check the output of the previous commands. If the issue is related to the Mnesia table data write, you should see an error message similar to the following:
Mnesia('rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq'): ** ERROR ** (could not write core file: eacces) ** FATAL ** Failed to merge schema: Bad cookie in table definition rabbit_user_permission: 'rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667351034020261908,-576460752303416575,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{4,0},{'rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq',{1667,351040,418694}}}}, 'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667372429216834387,-576460752303417087,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{2,0},[]}}
To fix the issue, take the following steps:
- Find the number of RabbitMQ replicas:
rabbitmqReplicas=$(kubectl -n rabbitmq get rabbitmqcluster rabbitmq -o json | jq -r '.spec.replicas')
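The value is stored in a shell variable because it is reused in the scale-up step below; you can echo it to make sure it was captured correctly:
echo "$rabbitmqReplicas"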
- Scale down the RabbitMQ replicas:
kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": 0}}" --type=merge
kubectl -n rabbitmq scale sts rabbitmq-server --replicas=0
- Wait until all RabbitMQ pods are terminated:
kubectl -n rabbitmq get pod
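Instead of polling manually, you can also block until the pods are gone; this sketch assumes the RabbitMQ cluster operator's standard app.kubernetes.io/name label, set to the cluster name rabbitmq:
kubectl -n rabbitmq wait --for=delete pod -l app.kubernetes.io/name=rabbitmq --timeout=300s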
- Find and delete the PVC of the RabbitMQ pod that is stuck in CrashLoopBackOff state:
kubectl -n rabbitmq get pvc
kubectl -n rabbitmq delete pvc <crashloopbackoff_pod_pvc_name>
- Scale the RabbitMQ replicas back up:
kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": $rabbitmqReplicas}}" --type=merge
- Check if all RabbitMQ pods are healthy:
kubectl -n rabbitmq get pod
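Once RabbitMQ is healthy again, verify that the dependent UiPath services recover as well:
kubectl -n uipath get pods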