automation-suite
2023.4
Automation Suite on Linux Installation Guide
Last updated Oct 4, 2024

Unhealthy services after cluster restore or rollback

Description

Following a cluster restore or rollback, AI Center, Orchestrator, Platform, Document Understanding, or Task Mining might be unhealthy, with the RabbitMQ pod logs showing the following error:

[root@server0 UiPathAutomationSuite]# k -n rabbitmq logs rabbitmq-server-0
2022-10-29 07:38:49.146614+00:00 [info] <0.9223.362> accepting AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672)
2022-10-29 07:38:49.147411+00:00 [info] <0.9223.362> Connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#77049094:2100
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> Error on AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:49.147922+00:00 [info] <0.9223.362> closing AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672 - rabbitConnectionFactory#77049094:2100)
2022-10-29 07:38:55.818447+00:00 [info] <0.9533.362> accepting AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672)
2022-10-29 07:38:55.821662+00:00 [info] <0.9533.362> Connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#2100d047:4057
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> Error on AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:55.822447+00:00 [info] <0.9533.362> closing AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672 - rabbitConnectionFactory#2100d047:4057)
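
To see which product applications are affected, you can also check their ArgoCD sync and health status. A minimal check, assuming ArgoCD runs in the argocd namespace as in a default Automation Suite installation:

    kubectl -n argocd get applications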

Solution

To fix the issue, first check whether some or all RabbitMQ pods are stuck in the CrashLoopBackOff state due to the Mnesia table data write issue, and then follow the scenario below that applies.
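
To check the pod status, run the following standard kubectl command and look for CrashLoopBackOff in the STATUS column:

    kubectl -n rabbitmq get pods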

If all pods are running, take the following steps:

  1. Delete the users in RabbitMQ:

    kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl list_users -s --formatter json | jq '.[]|.user' | grep -v default_user | xargs -I{} kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl delete_user {}
  2. Delete RabbitMQ application secrets in the UiPath namespace:

    kubectl -n uipath get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n uipath delete secret {}
  3. Delete RabbitMQ application secrets in the RabbitMQ namespace:

    kubectl -n rabbitmq get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n rabbitmq delete secret {}
  4. Sync the sfcore application via ArgoCD and wait for the sync to complete (if you prefer the command line, see the sketch after this list).


  5. Perform a rollout restart on all applications in the UiPath namespace:

    kubectl -n uipath rollout restart deploy
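
For step 4, the sync is normally triggered from the ArgoCD UI. If you prefer the command line, the following is a minimal sketch; it assumes the argocd CLI is installed and logged in to the cluster's ArgoCD instance, and that the application is registered under the name sfcore (verify the actual application name in ArgoCD first):

    # Trigger a sync of the sfcore application and wait until it is synced and healthy
    argocd app sync sfcore
    argocd app wait sfcore --sync --health --timeout 1800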

If some pods are in CrashLoopBackOff state, take the following steps:

  1. Identify which RabbitMQ pods are stuck in the CrashLoopBackOff state, and check their logs:
    kubectl -n rabbitmq get pods
    kubectl -n rabbitmq logs <CrashLoopBackOff-Pod-Name>
  2. Check the output of the previous commands. If the issue is related to the Mnesia table data write, you should see an error message similar to the following:

    Mnesia('rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq'): ** ERROR ** (could not write core file: eacces)
     ** FATAL ** Failed to merge schema: Bad cookie in table definition rabbit_user_permission: 'rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667351034020261908,-576460752303416575,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{4,0},{'rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq',{1667,351040,418694}}}}, 'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667372429216834387,-576460752303417087,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{2,0},[]}}
  3. To fix the issue, take the following steps (a consolidated sketch of these commands follows this list):

    1. Find the number of RabbitMQ replicas:

      rabbitmqReplicas=$(kubectl -n rabbitmq get rabbitmqcluster rabbitmq -o json | jq -r '.spec.replicas')

    2. Scale down the RabbitMQ replicas:

      kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": 0}}" --type=merge

      kubectl -n rabbitmq scale sts rabbitmq-server --replicas=0

    3. Wait until all RabbitMQ pods are terminated:

      kubectl -n rabbitmq get pod

    4. Find and delete the PVC of the RabbitMQ pod that is stuck in CrashLoopBackOff state:

      kubectl -n rabbitmq get pvc

      kubectl -n rabbitmq delete pvc <crashloopbackoff_pod_pvc_name>

    5. Scale up the RabbitMQ replicas:

      kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": $rabbitmqReplicas}}" --type=merge

    6. Check if all RabbitMQ pods are healthy:

      kubectl -n rabbitmq get pod
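
The following is a consolidated sketch of the steps above for a single affected pod. The pod name rabbitmq-server-1 and the PVC name persistence-rabbitmq-server-1 are examples only; the RabbitMQ cluster operator typically names PVCs persistence-<pod-name>, but verify the actual names with kubectl -n rabbitmq get pvc before deleting anything:

    # Record the configured replica count so it can be restored later
    rabbitmqReplicas=$(kubectl -n rabbitmq get rabbitmqcluster rabbitmq -o json | jq -r '.spec.replicas')

    # Scale RabbitMQ down to zero and wait for the pods to terminate
    kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p '{"spec":{"replicas": 0}}' --type=merge
    kubectl -n rabbitmq scale sts rabbitmq-server --replicas=0
    kubectl -n rabbitmq wait --for=delete pod/rabbitmq-server-1 --timeout=300s

    # Delete the PVC of the pod that was stuck in CrashLoopBackOff (example name)
    kubectl -n rabbitmq delete pvc persistence-rabbitmq-server-1

    # Restore the original replica count and confirm the pods come back healthy
    kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": $rabbitmqReplicas}}" --type=merge
    kubectl -n rabbitmq get pod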
