Automation Suite 2023.4
Automation Suite on Linux Installation Guide
Last updated October 4, 2024

How to clean up old logs stored in the sf-logs bucket

A bug may cause logs to accumulate in the sf-logs objectstore bucket. To clean up the old logs in the sf-logs bucket, run the dedicated script as described below, making sure to follow the steps relevant to your environment type.
To clean up the old logs stored in the sf-logs bucket, take the following steps:
  1. Get the version of the sf-k8-utils-rhel image available in your environment (an optional helper sketch follows the two commands):
    • in an offline environment, run the following command: podman search localhost:30071/uipath/sf-k8-utils-rhel --tls-verify=false --list-tags
    • in an online environment, run the following command: podman search registry.uipath.com/uipath/sf-k8-utils-rhel --list-tags
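    Optionally, you can capture the newest tag into a shell variable for reuse in step 2. This is a minimal sketch, assuming podman prints a NAME/TAG table with a header row; swap in the registry.uipath.com URL for online environments:
    # Sketch: pick the highest version tag from the listing (assumes NAME/TAG columns).
    TAG=$(podman search localhost:30071/uipath/sf-k8-utils-rhel --tls-verify=false --list-tags \
      | awk 'NR>1 {print $2}' | sort -V | tail -n1)
    echo "sf-k8-utils-rhel tag to use: ${TAG}"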
  2. In the following yaml definition, update the image field of the Pod (the image: localhost:30071/uipath/sf-k8-utils-rhel:0.8 line) to use the image tag found in step 1:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cleanup-script
      namespace: uipath-infra
    data:
      cleanup_old_logs.sh:  |
        #!/bin/bash
    
        function parse_args() {
          CUTOFFDAY=7
          SKIPDRYRUN=0
          while getopts 'c:sh' flag "$@"; do
            case "${flag}" in
            c)
              CUTOFFDAY=${OPTARG}
              ;;
            s)
              SKIPDRYRUN=1
              ;;
            h)
              display_usage
              exit 0
              ;;
            *)
              echo "Unexpected option ${flag}"
              display_usage
              exit 1
              ;;
            esac
          done
    
          shift $((OPTIND - 1))
        }
    
        function display_usage() {
          echo "usage: $(basename "$0") -c <number> [-s]"
          echo "  -s                                                         skip dry run, Really deletes the log dirs"
          echo "  -c                                                         logs older than how many days to be deleted. Default is 7 days"
          echo "  -h                                                         help"
          echo "NOTE: Default is dry run, to really delete logs set -s"
        }
    
        function setS3CMDContext() {
            OBJECT_GATEWAY_INTERNAL_HOST=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.clusterIP}")
            OBJECT_GATEWAY_INTERNAL_PORT=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.ports[0].port}")
            AWS_ACCESS_KEY=$1
            AWS_SECRET_KEY=$2
    
            # Reference https://rook.io/docs/rook/v1.5/ceph-object.html#consume-the-object-storage
            export AWS_HOST=$OBJECT_GATEWAY_INTERNAL_HOST
            export AWS_ENDPOINT=$OBJECT_GATEWAY_INTERNAL_HOST:$OBJECT_GATEWAY_INTERNAL_PORT
            export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
            export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
        }
    
        # Set s3cmd context by passing correct AccessKey and SecretKey
        function setS3CMDContextForLogs() {
            BUCKET_NAME='sf-logs'
            AWS_ACCESS_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_ACCESSKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
            AWS_SECRET_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_SECRETKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
    
            setS3CMDContext "$AWS_ACCESS_KEY" "$AWS_SECRET_KEY"
        }
    
        function delete_old_logs() {
            local cutoffdate=$1
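            # List the day-named directories in the bucket; s3cmd prints "DIR <uri>" rows,
            # and the substitution below strips the literal "DIR" markers.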
            days=$(s3cmd ls s3://sf-logs/ --host="${AWS_HOST}" --host-bucket= --no-check-certificate --no-ssl)
            days=${days//DIR}
            if [[ $SKIPDRYRUN -eq 0 ]];
            then
                echo "DRY RUN. Following log dirs are selected for deletion"
            fi
            for day in $days
            do
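                # Normalize each entry: drop the "s3://sf-logs/" prefix and the
                # trailing slash, leaving a YYYY_MM_DD name comparable to the cutoff.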
                day=${day#*sf-logs/}
                day=${day::-1}
                if [[ ${day} < ${cutoffdate} ]];
                then
                    if [[ $SKIPDRYRUN -eq 0 ]];
                    then
                        echo "s3://$BUCKET_NAME/$day"
                    else
                        echo "###############################################################"
                        echo "Deleting Logs for day: {$day}"
                        echo "###############################################################"
                        s3cmd del "s3://$BUCKET_NAME/$day/" --host="${AWS_HOST}" --host-bucket= --no-ssl --recursive || true
                    fi
                fi
            done
        }
    
        function main() {
            # Set S3 context by setting correct env variables
            setS3CMDContextForLogs
            echo "Bucket name is $BUCKET_NAME"
    
            CUTOFFDATE=$(date --date="${CUTOFFDAY} day ago" +%Y_%m_%d)
            echo "logs older than ${CUTOFFDATE} will be deleted"
    
            delete_old_logs "${CUTOFFDATE}"
            if [[ $SKIPDRYRUN -eq 0 ]];
            then
                echo "NOTE: For really deleting the old log directories run with -s option"
            fi
        }
    
        parse_args "$@"
        main
        exit 0
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: cleanup-old-logs
      namespace: uipath-infra
    spec:
       serviceAccountName: fluentd-logs-cleanup-sa
       containers:
       - name: cleanup
         image: localhost:30071/uipath/sf-k8-utils-rhel:0.8
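         # replace the "0.8" tag with the sf-k8-utils-rhel version found in step 1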
         command: ["/bin/bash"]
         args: ["/scripts-dir/cleanup_old_logs.sh", "-s"]
         volumeMounts:
         - name: scripts-vol
           mountPath: /scripts-dir
         securityContext:
           privileged: false
           allowPrivilegeEscalation: false
           readOnlyRootFilesystem: true
           runAsUser: 9999
           runAsGroup: 9999
           runAsNonRoot: true
           capabilities:
             drop: ["NET_RAW"]
       volumes:
       - name: scripts-vol
         configMap:
           name: cleanup-script
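    Once you have saved the definition to cleanup.yaml (step 3) and before applying it, a substitution can stamp the tag in for you. This is a hedged sketch, assuming the TAG variable from the step 1 helper and GNU sed; the second command optionally removes the -s argument so the script performs only a dry run:
    # Hypothetical helper: set the image tag in cleanup.yaml to ${TAG}.
    sed -i "s|sf-k8-utils-rhel:.*$|sf-k8-utils-rhel:${TAG}|" cleanup.yaml
    # Optional: drop "-s" so the script only lists what it would delete.
    sed -i 's|"/scripts-dir/cleanup_old_logs.sh", "-s"|"/scripts-dir/cleanup_old_logs.sh"|' cleanup.yaml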
  3. Copy the content of the above yaml definition into a file called cleanup.yaml. Trigger a pod to clean up the old logs:
    kubectl apply -f cleanup.yaml
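    To confirm the pod was scheduled, you can watch its status before tailing the logs; this uses only standard kubectl and the names from the yaml above:
    kubectl -n uipath-infra get pod cleanup-old-logs -w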
  4. Get details on the progress:

    kubectl -n uipath-infra logs cleanup-old-logs -f
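    If you would rather block until the script finishes instead of tailing the logs, kubectl wait can poll the pod phase; note this jsonpath form requires kubectl 1.23 or newer:
    kubectl -n uipath-infra wait pod/cleanup-old-logs --for=jsonpath='{.status.phase}'=Succeeded --timeout=15m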
  5. Delete the job:

    kubectl delete -f cleanup.yaml
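    To verify the cleanup resources are gone, query the pod again; kubectl should report that it is not found:
    kubectl -n uipath-infra get pod cleanup-old-logs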
