Automation Suite
2022.4
Automation Suite Installation Guide
Last updated April 24, 2024

Manual: Migrating the Ceph data pool from replicated to erasure-coded

Step 1: Select a path for the Ceph objects

You must select a file system path with enough free storage to hold the Ceph objects. For example, suppose you decide to use the path /ceph-data on the Kubernetes node server0.
Important: You must configure the Ceph tools to use this host path, and all subsequent commands must be run on the same machine (in this case, server0).
export ROOK_CEPH_EXPORT_PATH="/ceph-data"

To see the storage space used by the objects in the Ceph cluster, run the following commands:

ceph_objects_bytes=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status --format json | jq -r '.pgmap.data_bytes')
numfmt --to=iec-i $ceph_objects_bytes
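
As a sanity check, you can compare that figure against the free space on the path selected in step 1. This is a minimal sketch assuming GNU coreutils (--output and -B1 are GNU df options):

df --output=avail -B1 ${ROOK_CEPH_EXPORT_PATH} | tail -n 1 | numfmt --to=iec-i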

Step 2: Prepare the Ceph tools to use the path

To prepare the Ceph tools to use the path selected in step 1, take the following steps:

  1. Disable self-heal for the rook-ceph-operator, rook-ceph-object-store, and fabric-installer ArgoCD applications:
    kubectl -n argocd patch application rook-ceph-operator --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
    kubectl -n argocd patch application rook-ceph-object-store --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
    kubectl -n argocd patch application fabric-installer --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
  2. Edit the Ceph tools deployment to mount ${ROOK_CEPH_EXPORT_PATH} on the Kubernetes node server0:
    kubectl -n rook-ceph patch deploy rook-ceph-tools --type='json' -p='[{"op": "add", "path":"/spec/template/spec/nodeName", "value": "server0"},{"op": "add", "path":"/spec/template/spec/volumes/2", "value": {"name":"ceph-export", "hostPath": {"path": "'${ROOK_CEPH_EXPORT_PATH}'", "type":"Directory"}  }}, {"op":"add", "path": "/spec/template/spec/containers/0/volumeMounts/2", "value": {"name": "ceph-export", "mountPath": "'${ROOK_CEPH_EXPORT_PATH}'"}},{"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits"}]'
    kubectl -n rook-ceph rollout status deploy rook-ceph-tools
  3. Allow the tools pod to write inside ${ROOK_CEPH_EXPORT_PATH} (a quick write test follows this list):
    chmod 777 ${ROOK_CEPH_EXPORT_PATH}
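
To confirm the tools pod can actually write to the mounted path, you can optionally run a write test from inside it; this assumes the usual coreutils are present in the rook-ceph-tools image, and the file name .write-test is purely illustrative:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- touch ${ROOK_CEPH_EXPORT_PATH}/.write-test
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rm ${ROOK_CEPH_EXPORT_PATH}/.write-test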

Step 3: Block access to the rook-ceph namespace from other namespaces

  1. Block traffic headed to the rook-ceph namespace from any namespace other than rook-ceph itself (you can confirm the policy with the check shown after this list):
    kubectl apply -f - <<EOF
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      namespace: rook-ceph
      name: block-rook-ceph-from-other-ns
    spec:
      podSelector:
        matchLabels:
      ingress:
      - from:
        - podSelector: {}
    EOF
  2. Restart the RGW deployment to close connections already established from other namespaces:
    for rgw_deploy in $(kubectl -n rook-ceph get deploy -l app=rook-ceph-rgw  -o name);do
        kubectl -n rook-ceph rollout restart "${rgw_deploy}"
        kubectl -n rook-ceph rollout status "${rgw_deploy}"
    done
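
Optionally, confirm the NetworkPolicy was created before moving on:

kubectl -n rook-ceph get networkpolicy block-rook-ceph-from-other-ns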

Step 4: View the cluster object count

To view the cluster object count, run the following commands:

BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
echo "BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}"BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
echo "BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}"

We recommend re-checking the object count after the migration to make sure that no data loss has occurred.

Step 5: Export the Ceph data pool

To export the Ceph data pool, run the following commands:

nohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data' export --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-export.log 2>&1 & 
wait $!
if [[ $? -eq 0 && -f ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool ]]; then
  echo "Export ran successfully"
else
 echo "Error while running export"
finohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data' export --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-export.log 2>&1 & 
wait $!
if [[ $? -eq 0 && -f ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool ]]; then
  echo "Export ran successfully"
else
 echo "Error while running export"
fi
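
The export runs in the background and can take a while for large pools. You can follow its progress and watch the dump file grow (both paths are the ones chosen above):

tail -f /tmp/ceph-data-pool-export.log
ls -lh ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool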

Step 6: Scale down the rook-ceph operator

To scale down the rook-ceph operator, run the following command:
kubectl -n rook-ceph scale --replicas=0 deployment/rook-ceph-operator
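
Scaling the operator down prevents it from reconciling the pool while you recreate it. You can verify that no operator pod is left running (app=rook-ceph-operator is the standard Rook label; adjust it if your deployment uses different labels):

kubectl -n rook-ceph get pods -l app=rook-ceph-operator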

Step 7: Recreate the pool as erasure-coded

To recreate the pool as erasure-coded, run the following commands:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool rm rook-ceph.rgw.buckets.data rook-ceph.rgw.buckets.data --yes-i-really-really-mean-it --yes-i-really-really-mean-it-not-faking
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd crush rule rm rook-ceph.rgw.buckets.data
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile set rook-ceph_ecprofile k=2 m=1  crush-failure-domain=host
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool create rook-ceph.rgw.buckets.data erasure rook-ceph_ecprofile
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool set rook-ceph.rgw.buckets.data compression_mode none
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool application enable rook-ceph.rgw.buckets.data rook-ceph-rgw --yes-i-really-mean-it
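
With k=2 and m=1, each object is split into two data chunks plus one coding chunk, so the pool tolerates the loss of one host while using roughly 1.5x raw storage instead of the 3x of a three-replica pool. You can verify the profile and its binding to the new pool with standard Ceph subcommands:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile get rook-ceph_ecprofile
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool get rook-ceph.rgw.buckets.data erasure_code_profile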

Step 8: Import the data into the data pool

To import the data into the data pool, run the following commands:

nohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data'  import --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-import.log 2>&1 & 
wait $!
if [[ $? -eq 0 ]]; then
  echo "Import ran successfully"
else
 echo "Error while running import"
finohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data'  import --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-import.log 2>&1 & 
wait $!
if [[ $? -eq 0 ]]; then
  echo "Import ran successfully"
else
 echo "Error while running import"
fi
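
As with the export, the import logs its progress to the file passed on the command line:

tail -f /tmp/ceph-data-pool-import.log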

Step 9: Verify the imported data

To verify the imported data, run the following script; it polls the object count every 5 seconds (up to 120 retries) until it matches the pre-migration count:

try=120
return_code=1
for index in $(seq 0 "${try}"); do
  AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
  if [[ $AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT -eq $BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT ]]; then
    return_code=0
    break
  fi
  [[ $index -eq $try ]] || sleep 5
done
if [[ $return_code -eq 0 ]]; then
  echo "Found equal object count(${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT})"
else
  echo "Found difference in object count for pool before(${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}) and after(${AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT})"
  echo "Please raise a support ticket with uipath to complete the migration"
fi

Step 10: Revert the temporary changes

To revert the temporary changes, run the following commands:

kubectl -n argocd patch application rook-ceph-operator --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n argocd patch application rook-ceph-object-store --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n argocd patch application fabric-installer --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n rook-ceph scale --replicas=1 deployment/rook-ceph-operator
kubectl -n rook-ceph patch deploy rook-ceph-tools --type='json' -p='[{"op": "remove", "path":"/spec/template/spec/nodeName"},{"op": "remove", "path":"/spec/template/spec/volumes/2"}, {"op":"remove", "path": "/spec/template/spec/containers/0/volumeMounts/2"},{"op": "add", "path": "/spec/template/spec/containers/0/resources/limits", "value": {"memory": "256Mi"}}]'
kubectl -n rook-ceph delete NetworkPolicy block-rook-ceph-from-other-ns
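
After the revert, you can wait for the operator to come back and confirm that Ceph reports a healthy cluster:

kubectl -n rook-ceph rollout status deploy rook-ceph-operator
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status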

Step 11: Update the ArgoCD configuration

You must now make sure that the configuration and the actual state stay in sync. To do that, update the ArgoCD configuration by running the following command; it adds the global.rook.dataPoolType Helm parameter to the fabric-installer application, or updates it to erasure-coded if it already exists:

kubectl -n argocd get application fabric-installer -o json | jq 'if ([.spec.source.helm.parameters[].name] | index("global.rook.dataPoolType")) == null then .spec.source.helm.parameters += [{"name": "global.rook.dataPoolType", "value": "erasure-coded"}] else (.spec.source.helm.parameters[] | select(.name == "global.rook.dataPoolType").value) |= "erasure-coded" end' | kubectl apply -f -
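
You can confirm the parameter is now present on the application:

kubectl -n argocd get application fabric-installer -o json | jq '.spec.source.helm.parameters[] | select(.name == "global.rook.dataPoolType")'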
