Automation Suite on Linux Installation Guide

Services unhealthy after cluster restore or rollback

After a cluster restore or rollback, AI Center, Orchestrator, Platform, Document Understanding, or Task Mining may be unhealthy, and the RabbitMQ pod logs show the following error:
[root@server0 UiPathAutomationSuite]# k -n rabbitmq logs rabbitmq-server-0
2022-10-29 07:38:49.146614+00:00 [info] <0.9223.362> accepting AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672)
2022-10-29 07:38:49.147411+00:00 [info] <0.9223.362> Connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#77049094:2100
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> Error on AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:49.147644+00:00 [erro] <0.9223.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:49.147922+00:00 [info] <0.9223.362> closing AMQP connection <0.9223.362> (10.42.1.161:37524 -> 10.42.0.228:5672 - rabbitConnectionFactory#77049094:2100)
2022-10-29 07:38:55.818447+00:00 [info] <0.9533.362> accepting AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672)
2022-10-29 07:38:55.821662+00:00 [info] <0.9533.362> Connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672) has a client-provided name: rabbitConnectionFactory#2100d047:4057
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> Error on AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672, state: starting):
2022-10-29 07:38:55.822058+00:00 [erro] <0.9533.362> PLAIN login refused: user 'aicenter-service' - invalid credentials
2022-10-29 07:38:55.822447+00:00 [info] <0.9533.362> closing AMQP connection <0.9533.362> (10.42.0.198:45032 -> 10.42.0.228:5672 - rabbitConnectionFactory#2100d047:4057)
To fix the issue, check whether any of the RabbitMQ pods are stuck in the CrashLoopBackOff state.
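A quick way to see which of the two recovery paths below applies is to list the RabbitMQ pods and pick out any whose containers are waiting in CrashLoopBackOff. The following is a minimal sketch and not part of the original procedure; it only relies on kubectl and jq, which the steps below already use:

# List RabbitMQ pod states.
kubectl -n rabbitmq get pods

# Print only the pods whose containers are currently waiting in CrashLoopBackOff.
kubectl -n rabbitmq get pods -o json \
  | jq -r '.items[] | select(.status.containerStatuses[]?.state.waiting.reason == "CrashLoopBackOff") | .metadata.name'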
If all the pods are running, take the following steps:

1. Delete the users in RabbitMQ:

   kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl list_users -s --formatter json | jq '.[]|.user' | grep -v default_user | xargs -I{} kubectl -n rabbitmq exec rabbitmq-server-0 -c rabbitmq -- rabbitmqctl delete_user {}

2. Delete the RabbitMQ application secrets in the uipath namespace:

   kubectl -n uipath get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n uipath delete secret {}

3. Delete the RabbitMQ application secrets in the rabbitmq namespace:

   kubectl -n rabbitmq get secret --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -i rabbitmq-secret | xargs -I{} kubectl -n rabbitmq delete secret {}

4. Sync the sfcore application through ArgoCD and wait for the sync to complete (a CLI sketch follows this list).

5. Perform a rollout restart of all applications in the uipath namespace:

   kubectl -n uipath rollout restart deploy
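For step 4, the sync can also be triggered from the ArgoCD CLI instead of the web UI, and after the step 5 restart you can block until the deployments settle. This is a minimal sketch, assuming you are already logged in to the ArgoCD server and that the timeout values suit your environment:

# Sync the sfcore application and wait for it to report healthy (step 4).
argocd app sync sfcore
argocd app wait sfcore --health --timeout 1800

# After the rollout restart (step 5), wait for every uipath deployment to finish rolling out.
for deploy in $(kubectl -n uipath get deploy -o name); do
  kubectl -n uipath rollout status "${deploy}" --timeout=600s
done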
If some pods are in a CrashLoopBackOff state, take the following steps:

1. Identify the RabbitMQ pods stuck in the CrashLoopBackOff state and check their logs:

   kubectl -n rabbitmq get pods
   kubectl -n rabbitmq logs <CrashLoopBackOff-Pod-Name>

2. Check the output of the previous command. If the issue is related to writing Mnesia table data, you should see an error message similar to the following:

   Mnesia('rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq'): ** ERROR ** (could not write core file: eacces) ** FATAL ** Failed to merge schema: Bad cookie in table definition rabbit_user_permission: 'rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-0.rabbitmq-nodes.rabbitmq','rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667351034020261908,-576460752303416575,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{4,0},{'rabbit@rabbitmq-server-2.rabbitmq-nodes.rabbitmq',{1667,351040,418694}}}}, 'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq' = {cstruct,rabbit_user_permission,set,[],['rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'],[],[],0,read_write,false,[],[],false,user_permission,[user_vhost,permission],[],[],[],{{1667372429216834387,-576460752303417087,1},'rabbit@rabbitmq-server-1.rabbitmq-nodes.rabbitmq'},{{2,0},[]}}
3. To fix the issue, take the following steps:

   a. Find the number of RabbitMQ replicas:

      rabbitmqReplicas=$(kubectl -n rabbitmq get rabbitmqcluster rabbitmq -o json | jq -r '.spec.replicas')

   b. Scale down the RabbitMQ replicas:

      kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": 0}}" --type=merge
      kubectl -n rabbitmq scale sts rabbitmq-server --replicas=0

   c. Wait until all RabbitMQ pods are terminated (see the wait sketch after this list):

      kubectl -n rabbitmq get pod

   d. Check for pending volume snapshots:

      kubectl get volumesnapshot -n rabbitmq <crashloopbackupoff_pod_pvc_name> | grep false

   e. If any volume snapshot is pending, delete it:

      kubectl delete volumesnapshot -n rabbitmq <pending_volume_snapshot_name>

   f. Find and delete the PVC of the RabbitMQ pod stuck in the CrashLoopBackOff state:

      kubectl -n rabbitmq get pvc
      kubectl -n rabbitmq delete pvc <crashloopbackupoff_pod_pvc_name>

   g. Scale the RabbitMQ replicas back up:

      kubectl -n rabbitmq patch rabbitmqcluster rabbitmq -p "{\"spec\":{\"replicas\": $rabbitmqReplicas}}" --type=merge

   h. Check that all RabbitMQ pods are healthy:

      kubectl -n rabbitmq get pod
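Where a step waits on pod state (pods terminating in step c, healthy again in step h), kubectl can block instead of being polled by hand. This is a minimal sketch; it assumes the RabbitMQ pods carry the app.kubernetes.io/name=rabbitmq label that the RabbitMQ cluster operator normally applies, which is worth confirming in your cluster first:

# Block until the scaled-down RabbitMQ pods are gone (step c).
kubectl -n rabbitmq wait pod -l app.kubernetes.io/name=rabbitmq --for=delete --timeout=300s

# After scaling back up (step g), block until the pods report Ready (step h).
kubectl -n rabbitmq wait pod -l app.kubernetes.io/name=rabbitmq --for=condition=Ready --timeout=600s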