Automation Suite
2022.10
Automation Suite Installation Guide
Last updated April 24, 2024

How to clean up old logs stored in the sf-logs bucket

A bug can cause logs to accumulate in the sf-logs object store bucket. To clean up old logs in the sf-logs bucket, follow the instructions on running the dedicated script. Make sure to follow the steps relevant to your environment type.
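The cleanup script below relies on the fact that per-day log folders named with the `%Y_%m_%d` pattern sort lexicographically in chronological order, so a plain string comparison against the cutoff date is enough to select old folders. A minimal sketch of that comparison (the folder names here are illustrative, not taken from a real bucket):

```shell
#!/bin/bash
# Cutoff: 7 days ago, formatted the same way the per-day log folders are named.
CUTOFFDATE=$(date --date="7 day ago" +%Y_%m_%d)

# Hypothetical per-day folders as they would appear under s3://sf-logs/.
for day in 2021_12_31 2022_01_15 2022_10_01; do
    # %Y_%m_%d strings compare lexicographically in date order,
    # so `<` selects every folder strictly older than the cutoff.
    if [[ ${day} < ${CUTOFFDATE} ]]; then
        echo "would delete s3://sf-logs/${day}/"
    fi
done
```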
To clean up old logs stored in the sf-logs bucket, take the following steps:
  1. Get the version of the sf-k8-utils-rhel image available in your environment:
    • in an offline environment, run the following command: podman search localhost:30071/uipath/sf-k8-utils-rhel --tls-verify=false --list-tags
    • in an online environment, run the following command: podman search registry.uipath.com/uipath/sf-k8-utils-rhel --list-tags
  2. Update line 121 in the following YAML definition to include the proper image tag:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cleanup-script
      namespace: uipath-infra
    data:
      cleanup_old_logs.sh:  |
        #!/bin/bash
    
        function parse_args() {
          CUTOFFDAY=7
          SKIPDRYRUN=0
          while getopts 'c:sh' flag "$@"; do
            case "${flag}" in
            c)
              CUTOFFDAY=${OPTARG}
              ;;
            s)
              SKIPDRYRUN=1
              ;;
            h)
              display_usage
              exit 0
              ;;
            *)
              echo "Unexpected option ${flag}"
              display_usage
              exit 1
              ;;
            esac
          done
    
          shift $((OPTIND - 1))
        }
    
        function display_usage() {
          echo "usage: $(basename "$0") -c <number> [-s]"
          echo "  -s                                                         skip dry run, Really deletes the log dirs"
          echo "  -c                                                         logs older than how many days to be deleted. Default is 7 days"
          echo "  -h                                                         help"
          echo "NOTE: Default is dry run, to really delete logs set -s"
        }
    
        function setS3CMDContext() {
            OBJECT_GATEWAY_INTERNAL_HOST=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.clusterIP}")
            OBJECT_GATEWAY_INTERNAL_PORT=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.ports[0].port}")
            AWS_ACCESS_KEY=$1
            AWS_SECRET_KEY=$2
    
            # Reference https://rook.io/docs/rook/v1.5/ceph-object.html#consume-the-object-storage
            export AWS_HOST=$OBJECT_GATEWAY_INTERNAL_HOST
            export AWS_ENDPOINT=$OBJECT_GATEWAY_INTERNAL_HOST:$OBJECT_GATEWAY_INTERNAL_PORT
            export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
            export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
        }
    
        # Set s3cmd context by passing correct AccessKey and SecretKey
        function setS3CMDContextForLogs() {
            BUCKET_NAME='sf-logs'
            AWS_ACCESS_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_ACCESSKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
            AWS_SECRET_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_SECRETKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
    
            setS3CMDContext "$AWS_ACCESS_KEY" "$AWS_SECRET_KEY"
        }
    
        function delete_old_logs() {
            local cutoffdate=$1
            days=$(s3cmd ls "s3://$BUCKET_NAME/" --host="${AWS_HOST}" --host-bucket= --no-check-certificate --no-ssl)
            days=${days//DIR}
            if [[ $SKIPDRYRUN -eq 0 ]];
            then
                echo "DRY RUN. Following log dirs are selected for deletion"
            fi
            for day in $days
            do
                day=${day#*sf-logs/}
                day=${day::-1}
                if [[ ${day} < ${cutoffdate} ]];
                then
                    if [[ $SKIPDRYRUN -eq 0 ]];
                    then
                        echo "s3://$BUCKET_NAME/$day"
                    else
                        echo "###############################################################"
                        echo "Deleting Logs for day: ${day}"
                        echo "###############################################################"
                        s3cmd del "s3://$BUCKET_NAME/$day/" --host="${AWS_HOST}" --host-bucket= --no-ssl --recursive || true
                    fi
                fi
            done
        }
    
        function main() {
            # Set S3 context by setting correct env variables
            setS3CMDContextForLogs
            echo "Bucket name is $BUCKET_NAME"
    
            CUTOFFDATE=$(date --date="${CUTOFFDAY} day ago" +%Y_%m_%d)
            echo "logs older than ${CUTOFFDATE} will be deleted"
    
            delete_old_logs "${CUTOFFDATE}"
            if [[ $SKIPDRYRUN -eq 0 ]];
            then
                echo "NOTE: For really deleting the old log directories run with -s option"
            fi
        }
    
        parse_args "$@"
        main
        exit 0
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: cleanup-old-logs
      namespace: uipath-infra
    spec:
       serviceAccountName: fluentd-logs-cleanup-sa
       containers:
       - name: cleanup
         image: localhost:30071/uipath/sf-k8-utils-rhel:0.8
         command: ["/bin/bash"]
         args: ["/scripts-dir/cleanup_old_logs.sh", "-s"]
         volumeMounts:
         - name: scripts-vol
           mountPath: /scripts-dir
         securityContext:
           privileged: false
           allowPrivilegeEscalation: false
           readOnlyRootFilesystem: true
           runAsUser: 9999
           runAsGroup: 9999
           runAsNonRoot: true
           capabilities:
             drop: ["NET_RAW"]
       volumes:
       - name: scripts-vol
         configMap:
           name: cleanup-script
  3. Copy the contents of the aforementioned YAML definition to a file called cleanup.yaml. Launch a pod to clean up the old logs:
    kubectl apply -f cleanup.yaml
  4. Get details about the progress:

    kubectl -n uipath-infra logs cleanup-old-logs -f
  5. Delete the job:

    kubectl delete -f cleanup.yaml
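The script in step 2 extracts the per-day folder names from the `s3cmd ls` listing using plain bash parameter expansion. A self-contained sketch of that parsing on a mocked listing (no object store needed; the sample lines mirror the `DIR <uri>` format that `s3cmd ls` prints for directory entries):

```shell
#!/bin/bash
# Mocked `s3cmd ls s3://sf-logs/` output: one "DIR <uri>" entry per day folder.
days='          DIR  s3://sf-logs/2022_10_01/
          DIR  s3://sf-logs/2022_10_02/'

days=${days//DIR}            # drop the DIR markers, keeping only the URIs
for day in $days; do
    day=${day#*sf-logs/}     # strip everything up to and including "sf-logs/"
    day=${day::-1}           # strip the trailing slash
    echo "$day"
done
```

Word splitting on the unquoted `$days` turns the cleaned listing into one URI per iteration, which is why the markers must be removed first.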

© 2005-2024 UiPath. All rights reserved.