Automation Suite
2022.10
Automation Suite Installation Guide
Last updated April 24, 2024
Upgrade fails due to unhealthy Ceph
When you attempt to upgrade to a new Automation Suite version, you may see the following error message:
Ceph objectstore is not completely healthy at the moment. Inner exception - Timeout waiting for all PGs to become active+clean
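Before applying any fix, it can help to look at what Ceph itself reports for the cluster. A minimal sketch, assuming the rook-ceph-tools toolbox deployment used by the scripts on this page is available:
# Overall Ceph health and a PG summary, as reported from the toolbox pod
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
# PG counts per state; the upgrade expects every PG to be active+clean
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status --format json | jq '.pgmap.pgs_by_state'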
To fix this upgrade issue, verify that the OSD pods are running and healthy by running the following command:
kubectl -n rook-ceph get pod -l app=rook-ceph-osd --no-headers | grep -P '([0-9])/\1' -v
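The grep -P '([0-9])/\1' -v filter hides pods whose READY column shows all containers ready (for example 1/1 or 2/2), so the command only prints unhealthy OSD pods. For illustration only, an unhealthy pod would show up along these lines (hypothetical name and values):
rook-ceph-osd-0-6d8c5f7b9c-x2kqp   1/2   CrashLoopBackOff   12   35m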
- If the command does not output any pods, verify that the Ceph placement groups (PGs) are recovering by running the following command:
function is_ceph_pg_active_clean() {
  local return_code=1
  if kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status --format json | jq '. as $root | ($root | .pgmap.num_pgs) as $total_pgs | try ( ($root | .pgmap.pgs_by_state[] | select(.state_name == "active+clean").count) // 0) as $active_pgs | if $total_pgs == $active_pgs then true else false end' | grep -q 'true'; then
    return_code=0
  fi
  [[ $return_code -eq 0 ]] && echo "All Ceph Placement groups(PG) are active+clean"
  if [[ $return_code -ne 0 ]]; then
    echo "All Ceph Placement groups(PG) are not active+clean. Please wait for PGs to become active+clean"
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph pg dump --format json | jq -r '.pg_map.pg_stats[] | select(.state!="active+clean") | [.pgid, .state] | @tsv'
  fi
  return "${return_code}"
}
# Execute the function multiple times to get updated ceph PG status
is_ceph_pg_active_clean
Note: If none of the affected Ceph PGs recover even after waiting for more than 30 minutes, raise a ticket with UiPath® Support.
- If the command outputs any pods, you must first fix the issue affecting those pods:
- If a pod is stuck in Init:0/4, it could be a PV provider (Longhorn) issue. To debug this issue, raise a ticket with UiPath® Support (a sketch of the diagnostics worth attaching to the ticket follows this list).
- If a pod is in CrashLoopBackOff, fix the issue by running the following command (a sketch for watching the replacement OSD pods come back up follows this list):
function cleanup_crashing_osd() {
  local restart_operator="false"
  local min_required_healthy_osd=1
  local in_osd
  local up_osd
  local healthy_osd_pod_count
  local crashed_osd_deploy
  local crashed_pvc_name
  if ! kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail | grep 'rook-ceph.rgw.buckets.data' | grep -q 'replicated'; then
    min_required_healthy_osd=2
  fi
  in_osd=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status -f json | jq -r '.osdmap.num_in_osds')
  up_osd=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status -f json | jq -r '.osdmap.num_up_osds')
  healthy_osd_pod_count=$(kubectl -n rook-ceph get pod -l app=rook-ceph-osd | grep 'Running' | grep -c -P '([0-9])/\1')
  if ! [[ $in_osd -ge $min_required_healthy_osd && $up_osd -ge $min_required_healthy_osd && $healthy_osd_pod_count -ge $min_required_healthy_osd ]]; then
    return
  fi
  for crashed_osd_deploy in $(kubectl -n rook-ceph get pod -l app=rook-ceph-osd | grep 'CrashLoopBackOff' | cut -d'-' -f'1-4'); do
    if kubectl -n rook-ceph logs "deployment/${crashed_osd_deploy}" | grep -q '/crash/'; then
      echo "Found crashing OSD deployment: '${crashed_osd_deploy}'"
      crashed_pvc_name=$(kubectl -n rook-ceph get deployment "${crashed_osd_deploy}" -o json | jq -r '.metadata.labels["ceph.rook.io/pvc"]')
      echo "Removing crashing OSD deployment: '${crashed_osd_deploy}' and PVC: '${crashed_pvc_name}'"
      timeout 60 kubectl -n rook-ceph delete deployment "${crashed_osd_deploy}" || kubectl -n rook-ceph delete deployment "${crashed_osd_deploy}" --force --grace-period=0
      timeout 100 kubectl -n rook-ceph delete pvc "${crashed_pvc_name}" || kubectl -n rook-ceph delete pvc "${crashed_pvc_name}" --force --grace-period=0
      restart_operator="true"
    fi
  done
  if [[ $restart_operator == "true" ]]; then
    kubectl -n rook-ceph rollout restart deployment/rook-ceph-operator
  fi
  return 0
}
# Execute the cleanup function
cleanup_crashing_osd
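For the Init:0/4 case above, the following is a minimal sketch of the information worth collecting before raising the ticket. It assumes the default Automation Suite layout in which Longhorn runs in the longhorn-system namespace, and the pod name is a placeholder:
# Events and volume mounts of the stuck OSD pod (replace the placeholder name)
kubectl -n rook-ceph describe pod <stuck-osd-pod-name>
# PVCs used by the OSDs and the state of the Longhorn volumes backing them
kubectl -n rook-ceph get pvc
kubectl -n longhorn-system get volumes.longhorn.io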
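Once cleanup_crashing_osd has removed a crashing OSD deployment and restarted the rook-ceph-operator, the operator recreates the OSD on a new PVC. A simple way to follow that, using standard kubectl options, is sketched below:
# Watch the OSD pods being recreated by the operator; stop with Ctrl+C
kubectl -n rook-ceph get pod -l app=rook-ceph-osd -w
# Re-run the health filter from the top of this page; no output means all OSD pods are healthy again
kubectl -n rook-ceph get pod -l app=rook-ceph-osd --no-headers | grep -P '([0-9])/\1' -v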
After fixing the crashing OSDs, verify that the PGs are recovering by running the following command:
is_ceph_pg_active_clean
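PG recovery can take a while, so instead of re-running the function by hand you can poll it. A minimal sketch that reuses the is_ceph_pg_active_clean function defined earlier on this page (the 30-second interval is an arbitrary choice, and the 30-minute guidance from the note above still applies):
# Poll the PG state every 30 seconds until all PGs report active+clean
until is_ceph_pg_active_clean; do
  sleep 30
done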