How to clean up old logs stored in the sf-logs bucket
Automation Suite on Linux Installation Guide
Last updated December 3, 2024
A bug can cause logs to accumulate in the sf-logs object storage bucket. To clean up old logs in the sf-logs bucket, follow the instructions below for running the dedicated script, and make sure to follow the steps that apply to your environment type (offline or online).

To clean up the old logs stored in the sf-logs bucket, take the following steps:
- Get the version of the sf-k8-utils-rhel image available in your environment:
    - In offline environments, run the following command:

      ```shell
      podman search localhost:30071/uipath/sf-k8-utils-rhel --tls-verify=false --list-tags
      ```

    - In online environments, run the following command:

      ```shell
      podman search registry.uipath.com/uipath/sf-k8-utils-rhel --list-tags
      ```
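`podman search --list-tags` prints one row per available tag. When several tags are listed, you typically want the most recent one; below is a minimal sketch of extracting the highest version tag from such output, using hypothetical sample tags (0.8 and 0.10):

```shell
# Hypothetical output of `podman search ... --list-tags` (NAME and TAG columns)
sample_output='NAME                                     TAG
localhost:30071/uipath/sf-k8-utils-rhel  0.8
localhost:30071/uipath/sf-k8-utils-rhel  0.10'

# Drop the header row, keep the TAG column, and take the highest version
latest_tag=$(printf '%s\n' "$sample_output" | tail -n +2 | awk '{print $2}' | sort -V | tail -n 1)
echo "$latest_tag"   # 0.10
```

Version sort (`sort -V`) is what orders 0.10 after 0.8; a plain lexicographic sort would order these two tags incorrectly.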
- Update the image tag in the following yaml definition (the image field of the cleanup-old-logs Pod, currently localhost:30071/uipath/sf-k8-utils-rhel:0.8) so that it matches the tag available in your environment:

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cleanup-script
    namespace: uipath-infra
  data:
    cleanup_old_logs.sh: |
      #!/bin/bash

      function parse_args() {
        CUTOFFDAY=7
        SKIPDRYRUN=0
        while getopts 'c:sh' flag "$@"; do
          case "${flag}" in
          c)
            CUTOFFDAY=${OPTARG}
            ;;
          s)
            SKIPDRYRUN=1
            ;;
          h)
            display_usage
            exit 0
            ;;
          *)
            echo "Unexpected option ${flag}"
            display_usage
            exit 1
            ;;
          esac
        done
        shift $((OPTIND - 1))
      }

      function display_usage() {
        echo "usage: $(basename "$0") -c <number> [-s]"
        echo "  -s  skip dry run, Really deletes the log dirs"
        echo "  -c  logs older than how many days to be deleted. Default is 7 days"
        echo "  -h  help"
        echo "NOTE: Default is dry run, to really delete logs set -s"
      }

      function setS3CMDContext() {
        OBJECT_GATEWAY_INTERNAL_HOST=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.clusterIP}")
        OBJECT_GATEWAY_INTERNAL_PORT=$(kubectl -n rook-ceph get services/rook-ceph-rgw-rook-ceph -o jsonpath="{.spec.ports[0].port}")
        AWS_ACCESS_KEY=$1
        AWS_SECRET_KEY=$2

        # Reference https://rook.io/docs/rook/v1.5/ceph-object.html#consume-the-object-storage
        export AWS_HOST=$OBJECT_GATEWAY_INTERNAL_HOST
        export AWS_ENDPOINT=$OBJECT_GATEWAY_INTERNAL_HOST:$OBJECT_GATEWAY_INTERNAL_PORT
        export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
        export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
      }

      # Set s3cmd context by passing correct AccessKey and SecretKey
      function setS3CMDContextForLogs() {
        BUCKET_NAME='sf-logs'
        AWS_ACCESS_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_ACCESSKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
        AWS_SECRET_KEY=$(kubectl -n cattle-logging-system get secret s3-store-secret -o json | jq '.data.OBJECT_STORAGE_SECRETKEY' | sed -e 's/^"//' -e 's/"$//' | base64 -d)
        setS3CMDContext "$AWS_ACCESS_KEY" "$AWS_SECRET_KEY"
      }

      function delete_old_logs() {
        local cutoffdate=$1
        days=$(s3cmd ls s3://sf-logs/ --host="${AWS_HOST}" --host-bucket= s3://sf-logs --no-check-certificate --no-ssl)
        days=${days//DIR}

        if [[ $SKIPDRYRUN -eq 0 ]]; then
          echo "DRY RUN. Following log dirs are selected for deletion"
        fi

        for day in $days
        do
          day=${day#*sf-logs/}
          day=${day::-1}
          if [[ ${day} < ${cutoffdate} ]]; then
            if [[ $SKIPDRYRUN -eq 0 ]]; then
              echo "s3://$BUCKET_NAME/$day"
            else
              echo "###############################################################"
              echo "Deleting Logs for day: {$day}"
              echo "###############################################################"
              s3cmd del "s3://$BUCKET_NAME/$day/" --host="${AWS_HOST}" --host-bucket= --no-ssl --recursive || true
            fi
          fi
        done
      }

      function main() {
        # Set S3 context by setting correct env variables
        setS3CMDContextForLogs
        echo "Bucket name is $BUCKET_NAME"
        CUTOFFDATE=$(date --date="${CUTOFFDAY} day ago" +%Y_%m_%d)
        echo "logs older than ${CUTOFFDATE} will be deleted"
        delete_old_logs "${CUTOFFDATE}"
        if [[ $SKIPDRYRUN -eq 0 ]]; then
          echo "NOTE: For really deleting the old log directories run with -s option"
        fi
      }

      parse_args "$@"
      main
      exit 0
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: cleanup-old-logs
    namespace: uipath-infra
  spec:
    serviceAccountName: fluentd-logs-cleanup-sa
    containers:
      - name: cleanup
        image: localhost:30071/uipath/sf-k8-utils-rhel:0.8
        command: ["/bin/bash"]
        args: ["/scripts-dir/cleanup_old_logs.sh", "-s"]
        volumeMounts:
          - name: scripts-vol
            mountPath: /scripts-dir
        securityContext:
          privileged: false
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 9999
          runAsGroup: 9999
          runAsNonRoot: true
          capabilities:
            drop: ["NET_RAW"]
    volumes:
      - name: scripts-vol
        configMap:
          name: cleanup-script
  ```
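If you prefer to patch the file rather than edit the image tag by hand, a substitution works as well. The snippet below runs against a minimal stand-in file, and the tag 0.10 is a placeholder for whatever your registry actually reports:

```shell
# Minimal stand-in for the image line of cleanup.yaml (illustration only)
cat > /tmp/cleanup.yaml <<'EOF'
      image: localhost:30071/uipath/sf-k8-utils-rhel:0.8
EOF

# Swap the tag after `sf-k8-utils-rhel:` for the one found in your environment
sed -i 's|\(sf-k8-utils-rhel\):[^[:space:]]*|\1:0.10|' /tmp/cleanup.yaml
cat /tmp/cleanup.yaml   # prints the image line with tag 0.10
```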
- Copy the contents of the yaml definition above to a file named cleanup.yaml, then trigger the pod to clean up the old logs:

  ```shell
  kubectl apply -f cleanup.yaml
  ```
- Get details on the progress of the cleanup:

  ```shell
  kubectl -n uipath-infra logs cleanup-old-logs -f
  ```
- Delete the job:

  ```shell
  kubectl delete -f cleanup.yaml
  ```
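A note on how the script decides what to delete: it computes a cutoff with `date` in %Y_%m_%d format and compares each log directory name against it with a plain string comparison. That is safe because zero-padded dates in this format sort in chronological order. The check can be reproduced in isolation (the reference date 2024-12-03 is fixed here only so the result is deterministic; the script itself uses the current date):

```shell
#!/usr/bin/env bash
# Cutoff of 7 days before a fixed reference date, in the script's %Y_%m_%d format (GNU date)
cutoff=$(date --date="2024-12-03 7 days ago" +%Y_%m_%d)
echo "$cutoff"   # 2024_11_26

# Lexicographic comparison matches chronological order for zero-padded dates
day="2024_11_25"
if [[ "$day" < "$cutoff" ]]; then
  echo "delete $day"    # this branch runs: 2024_11_25 is older than the cutoff
else
  echo "keep $day"
fi
```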