Automation Suite on EKS/AKS installation guide
Running the diagnostics tool
The Automation Suite diagnostics tool runs a set of checks to generate a report on the health of the cluster, which you can analyze to identify issues and their potential causes. The tool helps you find common problems, such as lost database connectivity or invalid or expired credentials.
The diagnostics tool is available in both uipathctl and uipathtools, which you can download to your management machine. For download instructions, see uipathctl and uipathtools.
uipathtools is a CLI tool that contains a subset of uipathctl capabilities, specific to health commands. The tool is backwards compatible and works with any of the supported Automation Suite versions. We recommend using uipathtools as the first step whenever you run into an issue.
The health checks and tests use a dedicated namespace called uipath-check. You must allow the creation of the uipath-check namespace or create it yourself before running the checks/tests. In addition, some checks/tests require that you allow communication between the uipath-check and uipath namespaces, or that you enable the use of hostNetwork.
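If you choose to create the namespace yourself, a minimal sketch using standard kubectl (assuming your kubeconfig already points to the target cluster):

# Create the namespace that the checks/tests run in
kubectl create namespace uipath-check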
The check and test commands provide quick insights into the state of the cluster without running a deep analysis:
- check relies on the ArgoCD health and sync status and does not modify any state in the cluster;
- test looks into the applications, deployments, or pods and temporarily alters the state of the cluster to provide you with these insights.
To run a health check, use one of the following commands, depending on the CLI tool you use:
- If you use uipathctl, run:
./uipathctl health check
- If you use uipathtools, run:
./uipathtools health check
The health check takes an optional --namespace flag when you do not provide input.json. You only need the flag if the installation is not in the uipath namespace. Without the flag, the health check retrieves diagnostics data from all namespaces.
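For example, a hypothetical invocation for an installation that lives in a custom namespace (the namespace name below is purely illustrative):

# Run the health check against a non-default installation namespace
./uipathctl health check --namespace custom-uipath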
Sample output of the generated report:
Checks run on cluster/
✔ [NOTIFICATIONSERVICE]
✔ [NOTIFICATIONSERVICE_HEALTH] Application is healthy and in sync
✔ [ACTION_CENTER]
✔ [ACTIONCENTER_HEALTH] Application is healthy and in sync
❌ [SYNC]
❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
✔ [RELOADER]
✔ [RELOADER_HEALTH] Application is healthy and in sync
❌ [POD]
✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
✔ [ISTIO]
✔ [LIST_PODS] Found 2 pods for Istio
✔ [ISTIOD_EXISTS] The Istio pods are present and running version -
✔ [ISTIOD_READY] Istio pods are healthy
✔ [AIEVENTS]
✔ [AIEVENTS_HEALTH] Application is healthy and in sync
❌ [DATASERVICE]
❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
✔ [PLATFORM]
✔ [PLATFORM_HEALTH] Application is healthy and in sync
✔ [TASK_MINING]
✔ [TASKMINING_HEALTH] Application is healthy and in sync
✔ [LOGGING]
✔ [LOGGING_HEALTH] Application is healthy and in sync
✔ [WEBHOOK]
✔ [WEBHOOK_HEALTH] Application is healthy and in sync
uipathctl health check verifies the health of all components. However, it also lets you check only the components you are interested in:
- If you want to exclude components from the run, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health check --excluded SQL. The command checks the health of all components except SQL.
- If you want to include only certain components in the run, use the --included flag. For example, if you only want to check the health of DNS and the objectstore, run uipathctl health check --included DNS,OBJECTSTORAGE.
You can find the names of the components that you can include in or exclude from the health checks here. In the example, the first word on each unindented line represents the component name, for example SQL, OBJECTSTORE, DNS, etc.
Analyzing the logs
- After running a health check, the logs show that the health check for the Data Service application failed:
❌ [DATASERVICE]
❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
- After further investigation, it becomes clear that the Data Service application failed because the dataservice-runtime-8f5bb7d56-v5krg and dataservice-taskrunner-787df76c74-98h5l pods are in a failed state. Digging deeper, you can find that the dataservice-external-storage-secret secret is missing:
❌ [POD]
✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
- To fix this issue, make sure you provided the correct credentials for the objectstore in input.json. For details, see Updating credentials.
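To confirm that the secret reported in the logs is indeed missing before you update the credentials, you can query it directly with standard kubectl (the names below are taken from the sample output above):

# Returns NotFound if the secret has not been created yet
kubectl -n uipath get secret dataservice-external-storage-secret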
To run a health test, use one of the following commands, depending on the CLI tool you use:
- If you use uipathctl, run:
./uipathctl health test
- If you use uipathtools, run:
./uipathtools health test
Sample output of the generated report:
Checks run on cluster/
✔ [GATEKEEPER]
✔ [CREATE_CONSTRAINT] Created test constraint
✔ [VERIFY] Constraint verified
✔ [CLEANUP] Cleaned up the test constraint
✔ [ACTION_CENTER]
✔ [CREATE_NAMESPACE] Created namespace prereqk6b72
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqk6b72
✔ [CREATE_NAMESPACE] Created namespace prereqbxjx8
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqbxjx8
✔ [CREATE_NAMESPACE] Created namespace prereq8zvw4
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8zvw4
✔ [DATASERVICE]
✔ [CREATE_NAMESPACE] Created namespace prereqxwlsb
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxwlsb
✔ [CREATE_NAMESPACE] Created namespace prereq5szsn
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq5szsn
✔ [APPS]
✔ [CREATE_NAMESPACE] Created namespace prereq9z6nb
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq9z6nb
✔ [CREATE_NAMESPACE] Created namespace prereq6v7lm
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6v7lm
✔ [CREATE_NAMESPACE] Created namespace prereqxxn5v
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxxn5v
✔ [AUTOMATION_HUB]
✔ [CREATE_NAMESPACE] Created namespace prereq4jkbt
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4jkbt
✔ [TEST_MANAGER]
✔ [CREATE_NAMESPACE] Created namespace prereqnvvpc
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqnvvpc
✔ [ORCHESTRATOR]
✔ [CREATE_NAMESPACE] Created namespace prereq8pf2f
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8pf2f
✔ [CREATE_NAMESPACE] Created namespace prereq4w4v4
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4w4v4
✔ [CREATE_NAMESPACE] Created namespace prereqkzwqg
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqkzwqg
✔ [INSIGHTS]
✔ [CREATE_NAMESPACE] Created namespace prereqqmgjc
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqqmgjc
✔ [CREATE_NAMESPACE] Created namespace prereq4vnjx
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4vnjx
✔ [CREATE_NAMESPACE] Created namespace prereqgtg9g
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgtg9g
✔ [AUTOMATION_OPS]
✔ [CREATE_NAMESPACE] Created namespace prereqgkkrz
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgkkrz
✔ [AICENTER]
✔ [CREATE_NAMESPACE] Created namespace prereqdls88
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqdls88
✔ [CREATE_NAMESPACE] Created namespace prereq6m7x9
✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6m7x9
uipathctl health test runs health tests on all components. However, it also lets you test only the components you are interested in:
- If you want to exclude components from the run, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health test --excluded SQL. The command tests the health of all components except SQL.
- If you want to include only certain components in the run, use the --included flag. For example, if you only want to check the health of DNS and the objectstore, run uipathctl health test --included DNS,OBJECTSTORAGE.
You can find the names of the components that you can include in or exclude from the health tests here. In the example, the first word on each unindented line represents the component name, for example SQL, OBJECTSTORE, DNS, etc.
If you compare the outputs of the check and test commands for the Data Service application, you can see that the former validates the health of the application, whereas the latter checks the routing.
Known issue
You might get an error message similar to the following example. You can ignore it, as no action is required on your side.
E0621 23:32:56.426321 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.426392 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.444420 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.446150 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.513357 24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
The diagnose command provides deep insights into the state of the cluster. It helps you identify issues at all levels, such as SQL, objectstore, node, secret, Istio, networking, etc.
- It encompasses both the check and test commands.
- It runs the prerequisite checks performed before the Automation Suite installation, to validate changes made to the environment configuration after installation that may be the potential cause of the issue.
- It runs on all nodes to collect any node-specific issues, such as resource unavailability or network interference.
To run a diagnostics check, use one of the following commands, depending on the CLI tool you use:
- If you use uipathctl, run:
./uipathctl health diagnose input.json --versions version.json
- If you use uipathtools, run:
./uipathtools health diagnose input.json --versions version.json
The diagnostics check takes an optional --namespace flag when you do not provide input.json. You only need the flag if the installation is not in the uipath namespace. Without the flag, diagnostics data is fetched from all namespaces.
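As an illustration, a hypothetical invocation for an installation that lives in a custom namespace, omitting input.json (the flag combination and the namespace name are assumptions):

./uipathctl health diagnose --versions version.json --namespace custom-uipath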
Sample output of the generated report:
Checks run on nodes/aks-pool0-27031798-vmss000001
✔ [REDIS(PORT=6380)]
✔ [CONNECTIVITY] Successfully made Redis connection on ci-asaks4011056.redis.cache.windows.net:6380
✔ [OBJECTSTORAGE(PRODUCT=ORCHESTRATOR)]
✔ [CHECK_API] Object storage test passed for orchestrator
✔ [SQL(PRODUCT=PROCESSMINING, TYPE=ADO)]
✔ [EXECUTE_NATIVE] Successfully executed command
✔ [BUILD_CLIENT] Successfully built ADO client
✔ [CONNECT] Successfully connected ADO client to DB
✔ [DB_ROLES] SQL user has the required roles to DB
✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
✔ [VALIDATE_FQDN] FQDN is valid
✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
✔ [IPS_MATCH] Subdomain resolves to top domain
✔ [DNS(FQDN=ALM.<FQDN>)]
✔ [VALIDATE_FQDN] FQDN is valid
✔ [RESOLVE_SUBDOMAIN] Resolved alm.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
✔ [IPS_MATCH] Subdomain resolves to top domain
Checks run on cluster/
✔ [NODE]
✔ [NODE_EXISTS] 12 Nodes present in the cluster
✔ [NODE_READY] All the nodes are in ready state
✔ [GATEKEEPER]
✔ [GATEKEEPER_HEALTH] Application is healthy and in sync
✔ [CREATE_CONSTRAINT] Created test constraint
✔ [VERIFY] Constraint verified
✔ [CLEANUP] Cleaned up the test constraint
✔ [LOGGING]
✔ [LOGGING_HEALTH] Application is healthy and in sync
✔ [DATASERVICE]
✔ [CREATE_NAMESPACE] Created namespace prereqctzhp
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqctzhp
✔ [ROBOTUBE]
✔ [ROBOTUBE_HEALTH] Application is healthy and in sync
✔ [AIRFLOW]
✔ [AIRFLOW_HEALTH] Application is healthy and in sync
✔ [ARGOCD]
✔ [ARGOCD_SERVER_PODS] Component argocd-server has ready Pods
✔ [ARGOCD_REPO_SERVER_PODS] Component argocd-repo-server has ready Pods
✔ [ARGOCD_APP_CONTROLLER_PODS] Component argocd-application-controller has ready Pods
✔ [ARGOCD_REDIS_PODS] Component redis-ha has ready Pods
✔ [ISTIO]
✔ [LIST_PODS] Found 2 pods for Istio
✔ [ISTIOD_EXISTS] The Istio pods are present and running version -
✔ [ISTIOD_READY] Istio pods are healthy
✔ [AICENTER]
✔ [AICENTER_HEALTH] Application is healthy and in sync
✔ [CREATE_NAMESPACE] Created namespace prereqn6sqn
✔ [CREATE_POD] Created test pod curl-pod in namespace prereqn6sqn
Checks run on local/
✔ [CONNECTIVITY]
✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-4rffj's IP 10.240.1.86 on aks-pool0-27031798-vmss000002
✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-8c6t5's IP 10.240.3.57 on aks-pool3-27031798-vmss000000
✔ [POD_TO_A] Scenario: http check between two random pods completed successfully
✔ [POD_TO_B_MULTI_NODE_CLUSTERIP] Scenario: http check between from pod to a multinode ClusterIP completed successfully
✔ [POD_TO_B_MULTI_NODE_HEADLESS] Scenario: http check between from pod to a multinode ClusterIP without a clusterIP set completed successfully
✔ [POD_TO_B_INTRA_NODE_CLUSTERIP] Scenario: http check between from two pods colocated on the same node via ClusterIP completed successfully
✔ [INGRESS]
✔ [INGRESS_GATEWAY_FOUND] Found service istio-ingressgateway in the cluster
✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on http://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com
✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on https://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com:443
✔ [OSS(COMPONENT=MONITORING)]
✔ [OSS(component=monitoring)] Check for component monitoring passed
✔ [OSS(COMPONENT=GATEKEEPER)]
✔ [OSS(component=gatekeeper)] Check for component gatekeeper passed
✔ [STORAGECLASS(NAME=STORAGE_CLASS_SINGLE_REPLICA)]
✔ [STORAGE_CLASS_EXISTS] Storage class azurefile-csi exists
✔ [LIST_NODES] Listed 12 nodes
✔ [CREATE_NAMESPACE] Created namespace prereqhcpkc
✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-5n272
✔ [LIST_PODS] Listed 1 pods on node aks-pool3-27031798-vmss000001
✔ [POD_RUNNING] Found one pod running on node aks-pool3-27031798-vmss000001
✔ [REGISTRY]
✔ [CONNECTIVITY] Successfully made Registry connection on sfbrdevhelmweacr.azurecr.io
✔ [NETWORK-POLICIES]
✔ [CREATE_NAMESPACE] Namespace prereqw4t9b created
✔ [CREATE_EGRESS_NETWORK_POLICY] Created the egress network policies allow-coredns-egress and block-external-traffic
✔ [CREATE_INGRESS_NETWORK_POLICY] Created the ingress network policy: block-echo-server-ingress
✔ [CREATE_SERVICE] Service echo-server-svc created
✔ [STORAGECLASS(NAME=STORAGE_CLASS)]
✔ [STORAGE_CLASS_EXISTS] Storage class managed-premium exists
✔ [LIST_NODES] Listed 12 nodes
✔ [CREATE_NAMESPACE] Created namespace prereqgjhcb
✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-nm9th
✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000003
✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000003
✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000001
✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000001
✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
✔ [VALIDATE_FQDN] FQDN is valid
✔ [RESOLVE_TOP_DOMAIN] Resolved ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
✔ [IPS_MATCH] Subdomain resolves to top domain
✔ [NODE(CPU >= 8, RAM >= 16GI)]
✔ [LIST_NODES] Listed 12 nodes
✔ [AT_LEAST_ONE_NODE] At least one node found
✔ [CPU_USAGE] Node aks-pool0-27031798-vmss000000 has 12.50% CPU usage
✔ [MEMORY_USAGE] Node aks-pool0-27031798-vmss000000 has 38.27% memory usage
✔ [POD_USAGE] Node aks-pool0-27031798-vmss000000 has 40.00% of pods in use. Number of pods: 40.00 max allowed: 100.00
✔ [OSS(COMPONENT=CERT-MANAGER)]
✔ [OSS(component=cert-manager)] Check for component cert-manager passed
✔ [RESOURCE]
✔ [Capacity] Automation suite already installed on cluster
✔ [OSS(COMPONENT=LOGGING)]
✔ [OSS(component=logging)] Check for component logging passed
✔ [GPU(PRODUCT=DOCUMENTUNDERSTANDING)]
✔ [BASIC_GPU_SUCCESS] Was able to start a CUDA job on a GPU node
Checks run on cluster/
❌ [DATASERVICE]
❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
❌ [ISTIO]
✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
✔ [ISTIO_SERVICEMESH_VALIDATION_GET_REGISTRY_FQDN] Successfully retrieved registry url
✔ [ISTIO_SERVICEMESH_VALIDATION_GET_CLUSTER_FQDN] Successfully retrieved cluster fqdn
✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_DEPLOYMENT] Successfully created the test deployment istio-validation-deployment
✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_SERVICE] Successfully created the test service istio-validation-service
✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_GATEWAY] Successfully created the test gateway istio-validation-gateway
✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_VIRTUALSERVICE] Successfully created the test virtual service istio-validation-vs
✔ [ISTIO_SERVICEMESH_VALIDATION_URL_ACCESS] Success exposing the service via servicemesh
❌ [POD]
✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/ah-tenant-service-sync-insights-data-job-28122960-p6rzg cannot mount volume: MountVolume.SetUp failed for volume "ah-insights-secrets" : failed to sync secret cache: timed out waiting for the condition
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [POD_UNHEALTHY] Latest event for pod uipath/du-documentmanager-dm-maintenance-cron-28122960-4sm5z: Error: failed to sync configmap cache: timed out waiting for the condition
❌ [SYNC]
❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
The diagnose command runs at multiple levels, such as infrastructure, networking, storage, pods, DNS, etc.
Analyzing the logs
There are two potential issues you can notice in the previous logs:
- Istio has an incorrect configuration, which may cause issues when accessing the Document Understanding platform:
❌ [ISTIO]
✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
- Data Service is unavailable. See Ceph in the code example:
❌ [DATASERVICE]
❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
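To dig into the Istio misconfiguration, you can inspect the VirtualService referenced in the report with standard kubectl (the resource names are taken from the sample output above):

# Show the VirtualService whose referenced host:port could not be resolved
kubectl -n uipath get virtualservice du-platform-vs -o yaml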
Known issues
You might get an error message similar to the following example. You can ignore it, as no action is required on your side.
I0622 01:31:28.917107 28815 request.go:601] Waited for 1.017599292s due to client-side throttling, not priority and fairness, request: GET:https://ci-asaks4011056-fwwpyxm7.hcp.westeurope.azmk8s.io:443/apis/networking.istio.io/v1alpha3
All three commands (check, test, and diagnose) support additional filtering and output formats.
Filtering
Filter | Description | Usage
---|---|---
--included | Comma-separated list of the services to include in the validation | This command runs the diagnosis only against Istio and Insights.
--excluded | Comma-separated list of the services to exclude from the validation | This command runs the test on the whole cluster, except for Istio and Insights.
Output format
The supported output formats are json, yaml, text, and junit. You can pass any of these values to any command via the --output flag. These output formats come in handy when you want to leverage these tools to build your own troubleshooting framework on top of them.
Example usages
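For illustration only, the following sketches combine the filtering and output-format flags described above; the exact report contents depend on your cluster:

# Emit the health check report as JSON, for consumption by your own tooling
./uipathctl health check --output json

# Diagnose only Istio and Insights and produce a JUnit-style report
./uipathctl health diagnose input.json --included ISTIO,INSIGHTS --output junit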