Automation Suite
2022.4
Automation Suite Installation Guide
Last updated April 24, 2024

Manual: Migrating the Ceph data pool from replicated to erasure-coded

Step 1: Selecting a path for Ceph objects

You must select a filesystem path that has enough free storage space to hold the Ceph objects. For example, suppose you decide to use the path /ceph-data on the Kubernetes node server0.
Important: You must configure the Ceph tools to use this host path, and all subsequent commands must be run on the same machine (in this case, server0).
export ROOK_CEPH_EXPORT_PATH="/ceph-data"

To view the storage space used by the objects in the Ceph cluster, run the following commands:

ceph_objects_bytes=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status --format json | jq -r '.pgmap.data_bytes')
numfmt --to=iec-i $ceph_objects_bytes
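Before committing to a path, you may also want to confirm that the target filesystem can actually hold the full export. A minimal sketch, assuming the ceph_objects_bytes variable from the command above is already set:

# Compare free bytes on the export path with the Ceph data size
free_bytes=$(df --output=avail -B1 "${ROOK_CEPH_EXPORT_PATH}" | tail -n 1)
if [[ ${free_bytes} -gt ${ceph_objects_bytes} ]]; then
  echo "Enough free space: $(numfmt --to=iec-i ${free_bytes}) available"
else
  echo "Insufficient space for $(numfmt --to=iec-i ${ceph_objects_bytes}) of Ceph data"
fi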

Step 2: Preparing the Ceph tools to use the path

To prepare the Ceph tools to use the path selected in Step 1, complete the following steps:

  1. Disable Argo CD self-healing for the rook-ceph-operator, rook-ceph-object-store, and fabric-installer applications:
    kubectl -n argocd patch application rook-ceph-operator --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
    kubectl -n argocd patch application rook-ceph-object-store --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
    kubectl -n argocd patch application fabric-installer --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":false}]'
  2. Edit the Ceph tools deployment to mount ${ROOK_CEPH_EXPORT_PATH} on the Kubernetes node server0:
    kubectl -n rook-ceph patch deploy rook-ceph-tools --type='json' -p='[{"op": "add", "path":"/spec/template/spec/nodeName", "value": "server0"},{"op": "add", "path":"/spec/template/spec/volumes/2", "value": {"name":"ceph-export", "hostPath": {"path": "'${ROOK_CEPH_EXPORT_PATH}'", "type":"Directory"}  }}, {"op":"add", "path": "/spec/template/spec/containers/0/volumeMounts/2", "value": {"name": "ceph-export", "mountPath": "'${ROOK_CEPH_EXPORT_PATH}'"}},{"op": "remove", "path": "/spec/template/spec/containers/0/resources/limits"}]'
    kubectl -n rook-ceph rollout status deploy rook-ceph-tools
  3. Allow the tools pod to write inside ${ROOK_CEPH_EXPORT_PATH}:
    chmod 777 ${ROOK_CEPH_EXPORT_PATH}
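To confirm that the tools pod was rescheduled on server0 and that the mount is writable before moving on, a quick check along these lines can help (the test file name is illustrative):

# The export path should exist inside the pod and accept writes
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- sh -c "touch ${ROOK_CEPH_EXPORT_PATH}/.write-test && rm ${ROOK_CEPH_EXPORT_PATH}/.write-test && echo mount is writable"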

Step 3: Blocking access to the rook-ceph namespace from other namespaces

  1. Block traffic coming into the rook-ceph namespace from any namespace other than the rook-ceph namespace itself:
    kubectl apply -f - <<EOF
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      namespace: rook-ceph
      name: block-rook-ceph-from-other-ns
    spec:
      podSelector:
        matchLabels:
      ingress:
      - from:
        - podSelector: {}
    EOF
  2. Restart the RGW deployment to close connections already established from other namespaces:
    for rgw_deploy in $(kubectl -n rook-ceph get deploy -l app=rook-ceph-rgw  -o name);do
        kubectl -n rook-ceph rollout restart "${rgw_deploy}"
        kubectl -n rook-ceph rollout status "${rgw_deploy}"
    done
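Optionally, you can confirm that the policy is in place and that the RGW pods have been recreated, for example:

kubectl -n rook-ceph get networkpolicy block-rook-ceph-from-other-ns
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw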

Step 4: Viewing the cluster object count

To view the cluster object count, run the following commands:

BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
echo "BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}"BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
echo "BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT=${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}"

We recommend re-checking the object count after the migration to make sure there is no data loss.

Step 5: Exporting the Ceph data pool

To export the Ceph data pool, run the following commands:

nohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data' export --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-export.log 2>&1 & 
wait $!
if [[ $? -eq 0 && -f ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool ]]; then
  echo "Export ran successfully"
else
 echo "Error while running export"
fi
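The export can take a long time on large pools. While it runs, you can follow its progress from a second shell on server0, for example by tailing the log and watching the export file grow (the 30-second interval is an arbitrary choice):

tail -f /tmp/ceph-data-pool-export.log
# In another terminal: watch the size of the export file
watch -n 30 ls -lh ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool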

Step 6: Scaling down the rook-ceph operator

To scale down the rook-ceph operator, run the following command:

kubectl -n rook-ceph scale --replicas=0 deployment/rook-ceph-operator
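Before deleting the pool in the next step, it is worth confirming that the operator pod is actually gone, so that it cannot recreate the pool mid-migration. A possible check, assuming the standard app=rook-ceph-operator pod label:

kubectl -n rook-ceph wait --for=delete pod -l app=rook-ceph-operator --timeout=120s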

Step 7: Recreating the pool as erasure-coded

To recreate the pool as erasure-coded, run the following commands:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool rm rook-ceph.rgw.buckets.data rook-ceph.rgw.buckets.data --yes-i-really-really-mean-it --yes-i-really-really-mean-it-not-faking
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd crush rule rm rook-ceph.rgw.buckets.data
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile set rook-ceph_ecprofile k=2 m=1  crush-failure-domain=host
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool create rook-ceph.rgw.buckets.data erasure rook-ceph_ecprofile
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool set rook-ceph.rgw.buckets.data compression_mode none
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool application enable rook-ceph.rgw.buckets.data rook-ceph-rgw  --yes-i-really-mean-it
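The profile above uses k=2 data chunks and m=1 coding chunk, so each object is split across three hosts and the pool tolerates the loss of one host. To double-check that the new pool picked up the expected profile, you can run something like:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool get rook-ceph.rgw.buckets.data erasure_code_profile
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd erasure-code-profile get rook-ceph_ecprofile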

Step 8: Importing the data into the data pool

To import the data into the data pool, run the following commands:

nohup kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados -p 'rook-ceph.rgw.buckets.data'  import --workers 5   ${ROOK_CEPH_EXPORT_PATH}/ceph-data-pool >> /tmp/ceph-data-pool-import.log 2>&1 & 
wait $!
if [[ $? -eq 0 ]]; then
  echo "Import ran successfully"
else
 echo "Error while running import"
fi

Step 9: Verifying the loaded data

To verify the loaded data, run the following commands:

try=120
return_code=1
for index in $(seq 0 "${try}"); do
  AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT=$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rados df --format json | jq -r --arg poolName "rook-ceph.rgw.buckets.data" '.pools[] | select(.name==$poolName).num_objects')
  if [[ $AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT -eq $BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT ]]; then
    return_code=0
    break
  fi
  [[ $index -eq $try ]] || sleep 5
done
if [[ $return_code -eq 0 ]]; then
  echo "Found equal object count(${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT})"
else
  echo "Found difference in object count for pool before(${BEFORE_MIGRATION_DATA_POOL_OBJECT_COUNT}) and after(${AFTER_MIGRATION_DATA_POOL_OBJECT_COUNT})"
  echo "Please raise a support ticket with uipath to complete the migration"
fi

Step 10: Reverting the temporary changes

To revert the temporary changes, run the following commands:

kubectl -n argocd patch application rook-ceph-operator --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n argocd patch application rook-ceph-object-store --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n argocd patch application fabric-installer --type=json -p '[{"op":"replace","path":"/spec/syncPolicy/automated/selfHeal","value":true}]'
kubectl -n rook-ceph scale --replicas=1 deployment/rook-ceph-operator
kubectl -n rook-ceph patch deploy rook-ceph-tools --type='json' -p='[{"op": "remove", "path":"/spec/template/spec/nodeName"},{"op": "remove", "path":"/spec/template/spec/volumes/2"}, {"op":"remove", "path": "/spec/template/spec/containers/0/volumeMounts/2"},{"op": "add", "path": "/spec/template/spec/containers/0/resources/limits", "value": {"memory": "256Mi"}}]'
kubectl -n rook-ceph delete NetworkPolicy block-rook-ceph-from-other-ns
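After reverting, you can check that the operator is back up and that the tools deployment has been rescheduled with its original spec, for instance:

kubectl -n rook-ceph rollout status deploy rook-ceph-operator
kubectl -n rook-ceph rollout status deploy rook-ceph-tools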

Step 11: Updating the ArgoCD configuration

You must now make sure that the configuration and the actual state stay in sync. To do this, update the ArgoCD configuration by running the following command:

kubectl  -n argocd get application  fabric-installer -o json | jq 'if ([.spec.source.helm.parameters[].name] | index ("global.rook.dataPoolType")) == null then .spec.source.helm.parameters +=  [{"name": "global.rook.dataPoolType" , "value": "erasure-coded"}] else (.spec.source.helm.parameters[] | select(.name == "global.rook.dataPoolType").value) |= "erasure-coded" end'  | kubectl apply  -f -
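To confirm that the parameter is now present in the application spec, a quick query such as the following can be used; it should print the global.rook.dataPoolType parameter with the value erasure-coded:

kubectl -n argocd get application fabric-installer -o json | jq '.spec.source.helm.parameters[] | select(.name=="global.rook.dataPoolType")'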
