Automation Suite
2022.10
Automation Suite installation guide
Last updated Apr 24, 2024

Upgrade fails due to unhealthy Ceph

Description

When attempting to upgrade to a new version of Automation Suite, you may see the following error message: Ceph objectstore is not completely healthy at the moment. Inner exception - Timeout waiting for all PGs to become active+clean.

Solution

To fix this upgrade issue, check whether the OSD pods are running and healthy by executing the following command:

kubectl -n rook-ceph get pod -l app=rook-ceph-osd --no-headers | grep -P '([0-9])/\1' -v
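The `grep -P '([0-9])/\1' -v` filter works because the pattern matches pods whose READY count equals their container count (for example `2/2`); inverting it with `-v` leaves only the unhealthy pods. A minimal sketch on a hypothetical pod listing (requires GNU grep for `-P`):

```shell
# Two sample pods (names and counters are hypothetical): one fully ready (2/2),
# one with a failing container (1/2). The inverted match prints only the
# unhealthy one.
printf 'rook-ceph-osd-0-abc 2/2 Running 0 3h\nrook-ceph-osd-1-def 1/2 CrashLoopBackOff 5 3h\n' |
  grep -P '([0-9])/\1' -v
# → rook-ceph-osd-1-def 1/2 CrashLoopBackOff 5 3h
```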
  • If the command does not output any pods, check whether the Ceph placement groups (PGs) are recovering by running the following command:

    function is_ceph_pg_active_clean() {
      local return_code=1
      if kubectl -n rook-ceph exec  deploy/rook-ceph-tools -- ceph status --format json | jq '. as $root | ($root | .pgmap.num_pgs) as $total_pgs | try ( ($root | .pgmap.pgs_by_state[] | select(.state_name == "active+clean").count)  // 0) as $active_pgs | if $total_pgs == $active_pgs then true else false end' | grep -q 'true';then
        return_code=0
      fi
      [[ $return_code -eq 0 ]] && echo "All Ceph Placement groups(PG) are active+clean"
      if [[ $return_code -ne 0 ]]; then
        echo "All Ceph Placement groups(PG) are not active+clean. Please wait for PGs to become active+clean"
        kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph pg dump --format json | jq -r '.pg_map.pg_stats[] | select(.state!="active+clean") | [.pgid, .state] | @tsv'
      fi
      return "${return_code}"
    }
    # Execute the function multiple times to get updated ceph PG status
    is_ceph_pg_active_clean
    Note: If none of the affected Ceph PGs recover even after waiting for more than 30 minutes, raise a ticket with UiPath® Support.
  • If the command outputs one or more pods, you must first fix the issue affecting them:

    • If a pod is stuck in Init:0/4, it could be a PV provider (Longhorn) issue. To resolve this issue, raise a ticket with UiPath® Support.
    • If a pod is in CrashLoopBackOff, fix the issue by running the following command:
      function cleanup_crashing_osd() {
          local restart_operator="false"
          local min_required_healthy_osd=1
          local in_osd
          local up_osd
          local healthy_osd_pod_count
          local crashed_osd_deploy
          local crashed_pvc_name
      
          if ! kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail  | grep 'rook-ceph.rgw.buckets.data' | grep -q 'replicated'; then
              min_required_healthy_osd=2
          fi
          in_osd=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status   -f json  | jq -r '.osdmap.num_in_osds')
          up_osd=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status   -f json  | jq -r '.osdmap.num_up_osds')
          healthy_osd_pod_count=$(kubectl -n rook-ceph get pod -l app=rook-ceph-osd | grep 'Running' | grep -c -P '([0-9])/\1')
          if ! [[ $in_osd -ge $min_required_healthy_osd && $up_osd -ge $min_required_healthy_osd && $healthy_osd_pod_count -ge $min_required_healthy_osd ]]; then
              return
          fi
          for crashed_osd_deploy in $(kubectl -n rook-ceph get pod -l app=rook-ceph-osd  | grep 'CrashLoopBackOff' | cut -d'-' -f'1-4') ; do
              if kubectl -n rook-ceph logs "deployment/${crashed_osd_deploy}" | grep -q '/crash/'; then
                  echo "Found crashing OSD deployment: '${crashed_osd_deploy}'"
                  crashed_pvc_name=$(kubectl -n rook-ceph get deployment "${crashed_osd_deploy}" -o json | jq -r '.metadata.labels["ceph.rook.io/pvc"]')
                  echo "Removing crashing OSD deployment: '${crashed_osd_deploy}' and PVC: '${crashed_pvc_name}'"
                  timeout 60  kubectl -n rook-ceph delete deployment "${crashed_osd_deploy}" || kubectl -n rook-ceph delete deployment "${crashed_osd_deploy}" --force --grace-period=0
                  timeout 100 kubectl -n rook-ceph delete pvc "${crashed_pvc_name}" || kubectl -n rook-ceph delete pvc "${crashed_pvc_name}" --force --grace-period=0
                  restart_operator="true"
              fi
          done
          if [[ $restart_operator == "true" ]]; then
              kubectl -n rook-ceph rollout restart deployment/rook-ceph-operator
          fi
          return 0
      }
      # Execute the cleanup function
      cleanup_crashing_osd
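In cleanup_crashing_osd above, the deployment name is derived from the crashing pod's listing with `cut -d'-' -f'1-4'`, which keeps the first four dash-separated fields (`rook-ceph-osd-<id>`). A minimal sketch on a hypothetical pod listing:

```shell
# Rook OSD pods are named rook-ceph-osd-<id>-<replicaset hash>-<pod suffix>;
# the first four dash-separated fields form the owning deployment name.
# The sample line below is hypothetical.
echo 'rook-ceph-osd-3-5d9c8b7f6d-xk2lp   0/2   CrashLoopBackOff   12   1h' |
  cut -d'-' -f'1-4'
# → rook-ceph-osd-3
```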

After fixing the crashing OSD, check whether the PGs are recovering by running the following command:

is_ceph_pg_active_clean
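The is_ceph_pg_active_clean function hinges on a jq expression that compares the total PG count (pgmap.num_pgs) against the count reported under the active+clean state. A sketch of that comparison on hypothetical ceph status JSON (PG counts are made up for illustration; requires jq):

```shell
# Healthy cluster: all 128 PGs are active+clean, so the comparison yields true.
echo '{"pgmap":{"num_pgs":128,"pgs_by_state":[{"state_name":"active+clean","count":128}]}}' |
  jq '. as $root | ($root | .pgmap.num_pgs) as $total_pgs | try ( ($root | .pgmap.pgs_by_state[] | select(.state_name == "active+clean").count) // 0) as $active_pgs | if $total_pgs == $active_pgs then true else false end'
# → true

# Recovering cluster: only 96 of 128 PGs are active+clean, so it yields false.
echo '{"pgmap":{"num_pgs":128,"pgs_by_state":[{"state_name":"active+clean","count":96},{"state_name":"active+recovering","count":32}]}}' |
  jq '. as $root | ($root | .pgmap.num_pgs) as $total_pgs | try ( ($root | .pgmap.pgs_by_state[] | select(.state_name == "active+clean").count) // 0) as $active_pgs | if $total_pgs == $active_pgs then true else false end'
# → false
```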
