Automation Suite on EKS/AKS Installation Guide
Last updated: November 21, 2024

Running the diagnostics tool

The Automation Suite diagnostics tool runs a set of checks to generate a report on the health of the cluster, which you can analyze to identify issues and their potential root causes. The tool helps you find common issues, such as lost database connectivity or invalid or expired credentials.

The Automation Suite diagnostics tool is available in both uipathctl and uipathtools, which you can download to your management machine.

For download instructions, see uipathctl and uipathtools.

uipathtools is a CLI tool that contains a subset of uipathctl capabilities specific to the health commands. The tool is backward compatible and works with any of the supported Automation Suite versions. We recommend using uipathtools as a first step if you run into an issue.
The prerequisite and health checks/tests run in the uipath-check namespace. You must either allow the creation of the uipath-check namespace or create it yourself before running the checks/tests. In addition, some checks/tests require that you allow communication between the uipath-check and uipath namespaces, or that you enable the use of hostNetwork.
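If you prefer to create the namespace ahead of time, a minimal sketch using kubectl looks like the following (assuming you have sufficient permissions on the cluster; any network policies or hostNetwork settings depend on your own cluster configuration):

    # Create the namespace in which the prerequisite and health checks/tests run
    kubectl create namespace uipath-check

    # Confirm it exists before launching the checks/tests
    kubectl get namespace uipath-check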

Quick validation

The check and test commands provide quick insights into the state of the cluster without running a deep analysis.
  • check relies on the ArgoCD health and sync status and does not modify any state in the cluster;
  • test looks into the applications, deployments, or pods, and temporarily mutates the state of the cluster to provide you with these insights.

Health check

To run a health check, use one of the following commands, depending on the CLI tool you are using:

  • If you use uipathctl, run:
    ./uipathctl health check
  • If you use uipathtools, run:
    ./uipathtools health check
Note:
Use the optional --namespace flag if you do not provide input.json. You only need the flag if the installation does not reside in the uipath namespace. Without the flag, the health check retrieves diagnostics data from all namespaces.
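For instance, if your installation does not reside in the uipath namespace, you can scope the check with the flag described above (the placeholder stands for your actual namespace):

    # Run the health check against a specific namespace instead of all namespaces
    ./uipathctl health check --namespace <your-uipath-namespace>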

Sample output of the generated report:

Checks run on cluster/
 ✔ [NOTIFICATIONSERVICE]
    ✔ [NOTIFICATIONSERVICE_HEALTH] Application is healthy and in sync
 ✔ [ACTION_CENTER]
    ✔ [ACTIONCENTER_HEALTH] Application is healthy and in sync
 ❌ [SYNC]
    ❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
 ✔ [RELOADER]
    ✔ [RELOADER_HEALTH] Application is healthy and in sync
 ❌ [POD]
    ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
 ✔ [ISTIO]
    ✔ [LIST_PODS] Found 2 pods for Istio
    ✔ [ISTIOD_EXISTS] The Istio pods are present and running version -
    ✔ [ISTIOD_READY] Istio pods are healthy
 ✔ [AIEVENTS]
    ✔ [AIEVENTS_HEALTH] Application is healthy and in sync
 ❌ [DATASERVICE]
    ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
 ✔ [PLATFORM]
    ✔ [PLATFORM_HEALTH] Application is healthy and in sync
 ✔ [TASK_MINING]
    ✔ [TASKMINING_HEALTH] Application is healthy and in sync
 ✔ [LOGGING]
    ✔ [LOGGING_HEALTH] Application is healthy and in sync
 ✔ [WEBHOOK]
    ✔ [WEBHOOK_HEALTH] Application is healthy and in sync
By default, the uipathctl health check command checks the health of all the components. However, it also allows you to check only the components you are interested in (see the sketch after this list):
  • If you want to exclude components from the run, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health check --excluded SQL. The command checks the health of all components except SQL.
  • If you want to include only certain components in the run, use the --included flag. For example, if you only want to check the health of DNS and the objectstore, run uipathctl health check --included DNS,OBJECTSTORAGE.
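For reference, the two filtering flags from the list above look like this when run from the management machine:

    # Check the health of every component except SQL
    ./uipathctl health check --excluded SQL

    # Check only the health of DNS and the objectstore
    ./uipathctl health check --included DNS,OBJECTSTORAGE
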
Note:
You can find the names of the components that you can include in or exclude from the health checks here. In the example, the first word of each indented line represents the component name. For example: SQL, OBJECTSTORE, DNS, etc.

Analyzing the logs

  1. After running a health check, the logs show that the health check for the Data Service application failed.
    ❌ [DATASERVICE]
        ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
  2. After further investigation, it becomes clear that the Data Service application failed because the dataservice-runtime-8f5bb7d56-v5krg and dataservice-taskrunner-787df76c74-98h5l pods are in a failed state. If you analyze further, you can find that the dataservice-external-storage-secret secret is missing.
    ❌ [POD]
        ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
  3. To fix the issue, make sure you provided the correct credentials for the objectstore in input.json. You can confirm the fix with the verification sketch right after this procedure.

    For more details, see the Updating credentials section.
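Once the credentials are updated, a minimal way to confirm the fix, assuming kubectl access and that the installation resides in the uipath namespace:

    # The secret reported as missing should now exist
    kubectl get secret dataservice-external-storage-secret -n uipath

    # Re-run the health check scoped to Data Service only
    ./uipathctl health check --included DATASERVICE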

Health test

To run a health test, use one of the following commands, depending on the CLI tool you are using:

  • If you use uipathctl, run:
    ./uipathctl health test
  • If you use uipathtools, run:
    ./uipathtools health test

Sample output of the generated report:

Checks run on cluster/
 ✔ [GATEKEEPER]
    ✔ [CREATE_CONSTRAINT] Created test constraint
    ✔ [VERIFY] Constraint verified
    ✔ [CLEANUP] Cleaned up the test constraint
 ✔ [ACTION_CENTER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqk6b72
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqk6b72
    ✔ [CREATE_NAMESPACE] Created namespace prereqbxjx8
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqbxjx8
    ✔ [CREATE_NAMESPACE] Created namespace prereq8zvw4
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8zvw4
 ✔ [DATASERVICE]
    ✔ [CREATE_NAMESPACE] Created namespace prereqxwlsb
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxwlsb
    ✔ [CREATE_NAMESPACE] Created namespace prereq5szsn
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq5szsn
 ✔ [APPS]
    ✔ [CREATE_NAMESPACE] Created namespace prereq9z6nb
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq9z6nb
    ✔ [CREATE_NAMESPACE] Created namespace prereq6v7lm
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6v7lm
    ✔ [CREATE_NAMESPACE] Created namespace prereqxxn5v
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxxn5v
 ✔ [AUTOMATION_HUB]
    ✔ [CREATE_NAMESPACE] Created namespace prereq4jkbt
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4jkbt
 ✔ [TEST_MANAGER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqnvvpc
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqnvvpc
 ✔ [ORCHESTRATOR]
    ✔ [CREATE_NAMESPACE] Created namespace prereq8pf2f
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8pf2f
    ✔ [CREATE_NAMESPACE] Created namespace prereq4w4v4
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4w4v4
    ✔ [CREATE_NAMESPACE] Created namespace prereqkzwqg
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqkzwqg
 ✔ [INSIGHTS]
    ✔ [CREATE_NAMESPACE] Created namespace prereqqmgjc
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqqmgjc
    ✔ [CREATE_NAMESPACE] Created namespace prereq4vnjx
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4vnjx
    ✔ [CREATE_NAMESPACE] Created namespace prereqgtg9g
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgtg9g
 ✔ [AUTOMATION_OPS]
    ✔ [CREATE_NAMESPACE] Created namespace prereqgkkrz
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgkkrz
 ✔ [AICENTER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqdls88
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqdls88
    ✔ [CREATE_NAMESPACE] Created namespace prereq6m7x9
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6m7x9
By default, the uipathctl health test command runs health tests on all the components. However, it also allows you to test only the components you are interested in (see the sketch after this list):
  • If you want to exclude components from the run, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health test --excluded SQL. The command checks the health of all components except SQL.
  • If you want to include only certain components in the run, use the --included flag. For example, if you only want to check the health of DNS and the objectstore, run uipathctl health test --included DNS,OBJECTSTORAGE.
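For reference, the same filtering flags apply to the test command; the commands below simply restate the examples from the list above:

    # Test every component except SQL
    ./uipathctl health test --excluded SQL

    # Test only DNS and the objectstore
    ./uipathctl health test --included DNS,OBJECTSTORAGE
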
Note:
You can find the names of the components that you can include in or exclude from the health tests here. In the example, the first word of each indented line represents the component name. For example: SQL, OBJECTSTORE, DNS, etc.

Note:
If you compare the output of the check and test commands for the Data Service application, you can see that the former validates the health of the application, whereas the latter checks the routing.

Known issue

You might get an error message similar to the following example. You can ignore it, as no action is required on your part.

E0621 23:32:56.426321   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.426392   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.444420   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.446150   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.513357   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded

Deep validation

The diagnose command provides detailed insights into the state of the cluster. It helps you identify issues at all levels, such as SQL, objectstore, node, secret, Istio, networking, etc.
  • It covers both the check and test commands.
  • It runs the prerequisite checks performed before the Automation Suite installation, to validate any changes made to the environment configuration after the installation that could be the potential cause of the issue.
  • It runs on all the nodes to gather any node-specific issues, such as resource unavailability, network interference, etc.

To run a diagnostics check, use one of the following commands, depending on the CLI tool you are using:

  • If you use uipathctl, run:
    ./uipathctl health diagnose input.json --versions version.json
  • If you use uipathtools, run:
    ./uipathtools health diagnose input.json --versions version.json
Note:
Use the optional --namespace flag if you do not provide input.json. You only need the flag if the installation does not reside in the uipath namespace. Without the flag, diagnostics data is retrieved from all namespaces.
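For instance, if you do not provide input.json and the installation does not reside in the uipath namespace, you can scope the diagnostics run with the flag described above (the placeholder stands for your actual namespace):

    # Run the diagnostics against a specific namespace, without input.json
    ./uipathctl health diagnose --namespace <your-uipath-namespace>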

Sample output of the generated report:

Checks run on nodes/aks-pool0-27031798-vmss000001
 ✔ [REDIS(PORT=6380)]
    ✔ [CONNECTIVITY] Successfully made Redis connection on ci-asaks4011056.redis.cache.windows.net:6380
 ✔ [OBJECTSTORAGE(PRODUCT=ORCHESTRATOR)]
    ✔ [CHECK_API] Object storage test passed for orchestrator
 ✔ [SQL(PRODUCT=PROCESSMINING, TYPE=ADO)]
    ✔ [EXECUTE_NATIVE] Successfully executed command
    ✔ [BUILD_CLIENT] Successfully built ADO client
    ✔ [CONNECT] Successfully connected ADO client to DB
    ✔ [DB_ROLES] SQL user has the required roles to DB
 ✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
 ✔ [DNS(FQDN=ALM.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_SUBDOMAIN] Resolved alm.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
Checks run on cluster/
 ✔ [NODE]
    ✔ [NODE_EXISTS] 12 Nodes present in the cluster
    ✔ [NODE_READY] All the nodes are in ready state
 ✔ [GATEKEEPER]
    ✔ [GATEKEEPER_HEALTH] Application is healthy and in sync
    ✔ [CREATE_CONSTRAINT] Created test constraint
    ✔ [VERIFY] Constraint verified
    ✔ [CLEANUP] Cleaned up the test constraint
 ✔ [LOGGING]
    ✔ [LOGGING_HEALTH] Application is healthy and in sync
 ✔ [DATASERVICE]
    ✔ [CREATE_NAMESPACE] Created namespace prereqctzhp
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqctzhp
 ✔ [ROBOTUBE]
    ✔ [ROBOTUBE_HEALTH] Application is healthy and in sync
 ✔ [AIRFLOW]
    ✔ [AIRFLOW_HEALTH] Application is healthy and in sync
 ✔ [ARGOCD]
    ✔ [ARGOCD_SERVER_PODS] Component argocd-server has ready Pods
    ✔ [ARGOCD_REPO_SERVER_PODS] Component argocd-repo-server has ready Pods
    ✔ [ARGOCD_APP_CONTROLLER_PODS] Component argocd-application-controller has ready Pods
    ✔ [ARGOCD_REDIS_PODS] Component redis-ha has ready Pods
 ✔ [ISTIO]
    ✔ [LIST_PODS] Found 2 pods for Istio
    ✔ [ISTIOD_EXISTS] The Istio pods are present and running version -
    ✔ [ISTIOD_READY] Istio pods are healthy
 ✔ [AICENTER]
    ✔ [AICENTER_HEALTH] Application is healthy and in sync
    ✔ [CREATE_NAMESPACE] Created namespace prereqn6sqn
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqn6sqn
Checks run on local/
 ✔ [CONNECTIVITY]
    ✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-4rffj's IP 10.240.1.86 on aks-pool0-27031798-vmss000002
    ✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-8c6t5's IP 10.240.3.57 on aks-pool3-27031798-vmss000000
    ✔ [POD_TO_A] Scenario: http check between two random pods completed successfully
    ✔ [POD_TO_B_MULTI_NODE_CLUSTERIP] Scenario: http check between from pod to a multinode ClusterIP completed successfully
    ✔ [POD_TO_B_MULTI_NODE_HEADLESS] Scenario: http check between from pod to a multinode ClusterIP without a clusterIP set completed successfully
    ✔ [POD_TO_B_INTRA_NODE_CLUSTERIP] Scenario: http check between from two pods colocated on the same node via ClusterIP completed successfully
 ✔ [INGRESS]
    ✔ [INGRESS_GATEWAY_FOUND] Found service istio-ingressgateway in the cluster
    ✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on http://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com
    ✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on https://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com:443
 ✔ [OSS(COMPONENT=MONITORING)]
    ✔ [OSS(component=monitoring)] Check for component monitoring passed
 ✔ [OSS(COMPONENT=GATEKEEPER)]
    ✔ [OSS(component=gatekeeper)] Check for component gatekeeper passed
 ✔ [STORAGECLASS(NAME=STORAGE_CLASS_SINGLE_REPLICA)]
    ✔ [STORAGE_CLASS_EXISTS] Storage class azurefile-csi exists
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [CREATE_NAMESPACE] Created namespace prereqhcpkc
    ✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-5n272
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool3-27031798-vmss000001
    ✔ [POD_RUNNING] Found one pod running on node aks-pool3-27031798-vmss000001
 ✔ [REGISTRY]
    ✔ [CONNECTIVITY] Successfully made Registry connection on sfbrdevhelmweacr.azurecr.io
 ✔ [NETWORK-POLICIES]
    ✔ [CREATE_NAMESPACE] Namespace prereqw4t9b created
    ✔ [CREATE_EGRESS_NETWORK_POLICY] Created the egress network policies allow-coredns-egress and block-external-traffic
    ✔ [CREATE_INGRESS_NETWORK_POLICY] Created the ingress network policy: block-echo-server-ingress
    ✔ [CREATE_SERVICE] Service echo-server-svc created
 ✔ [STORAGECLASS(NAME=STORAGE_CLASS)]
    ✔ [STORAGE_CLASS_EXISTS] Storage class managed-premium exists
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [CREATE_NAMESPACE] Created namespace prereqgjhcb
    ✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-nm9th
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000003
    ✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000003
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000001
    ✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000001
 ✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_TOP_DOMAIN] Resolved ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
 ✔ [NODE(CPU >= 8, RAM >= 16GI)]
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [AT_LEAST_ONE_NODE] At least one node found
    ✔ [CPU_USAGE] Node aks-pool0-27031798-vmss000000 has 12.50% CPU usage
    ✔ [MEMORY_USAGE] Node aks-pool0-27031798-vmss000000 has 38.27% memory usage
    ✔ [POD_USAGE] Node aks-pool0-27031798-vmss000000 has 40.00% of pods in use. Number of pods: 40.00 max allowed: 100.00
 ✔ [OSS(COMPONENT=CERT-MANAGER)]
    ✔ [OSS(component=cert-manager)] Check for component cert-manager passed
 ✔ [RESOURCE]
    ✔ [Capacity] Automation suite already installed on cluster
 ✔ [OSS(COMPONENT=LOGGING)]
    ✔ [OSS(component=logging)] Check for component logging passed
 ✔ [GPU(PRODUCT=DOCUMENTUNDERSTANDING)]
    ✔ [BASIC_GPU_SUCCESS] Was able to start a CUDA job on a GPU node
Checks run on cluster/
 ❌ [DATASERVICE]
    ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
 ❌ [ISTIO]
    ✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
    ❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
    ✔ [ISTIO_SERVICEMESH_VALIDATION_GET_REGISTRY_FQDN] Successfully retrieved registry url
    ✔ [ISTIO_SERVICEMESH_VALIDATION_GET_CLUSTER_FQDN] Successfully retrieved cluster fqdn
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_DEPLOYMENT] Successfully created the test deployment istio-validation-deployment
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_SERVICE] Successfully created the test service istio-validation-service
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_GATEWAY] Successfully created the test gateway istio-validation-gateway
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_VIRTUALSERVICE] Successfully created the test virtual service istio-validation-vs
    ✔ [ISTIO_SERVICEMESH_VALIDATION_URL_ACCESS] Success exposing the service via servicemesh
 ❌ [POD]
    ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/ah-tenant-service-sync-insights-data-job-28122960-p6rzg cannot mount volume: MountVolume.SetUp failed for volume "ah-insights-secrets" : failed to sync secret cache: timed out waiting for the condition
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [POD_UNHEALTHY] Latest event for pod uipath/du-documentmanager-dm-maintenance-cron-28122960-4sm5z: Error: failed to sync configmap cache: timed out waiting for the condition
 ❌ [SYNC]
    ❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
Note:
The sample above is trimmed down; the actual logs contain more information. You can notice that the diagnose command runs at multiple levels, such as infrastructure, networking, storage, pods, DNS, etc.

Analyzing the logs

You can notice two potential issues in the previous logs:

  • Istio has a misconfiguration, which may cause issues accessing the Document Understanding platform (you can inspect the flagged VirtualService yourself, as shown in the sketch after this list):
    ❌ [ISTIO]
        ✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
        ❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
  • Data Service is unavailable. See Ceph in the code example.
    ❌ [DATASERVICE]
        ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
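To dig into the Istio misconfiguration flagged above, a minimal sketch using standard kubectl and istioctl commands (these tools are separate from the diagnostics tool, and istioctl must be installed on your management machine):

    # Inspect the VirtualService referenced in the IST0101 error
    kubectl get virtualservice du-platform-vs -n uipath -o yaml

    # Ask Istio to re-analyze the configuration of the uipath namespace
    istioctl analyze -n uipath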

Known issues

You might get an error message similar to the following example. You can ignore it, as no action is required on your part.

I0622 01:31:28.917107   28815 request.go:601] Waited for 1.017599292s due to client-side throttling, not priority and fairness, request: GET:https://ci-asaks4011056-fwwpyxm7.hcp.westeurope.azmk8s.io:443/apis/networking.istio.io/v1alpha3

Additional utilities

All Automation Suite diagnostics tool commands (check, test, and diagnose) support additional filtering and output formats.

Filtering

--included
Description: Comma-separated list of services to include in the validation.
Usage: ./uipathctl health diagnose input.json --versions version.json --included ISTIO,INSIGHTS
This command runs the diagnostics only on Istio and Insights.

--excluded
Description: Comma-separated list of services to exclude from the validation.
Usage: ./uipathctl health test --excluded ISTIO,INSIGHTS
This command runs the test across the whole cluster, except for Istio and Insights.

Output format

The Automation Suite diagnostics tool can generate reports in multiple formats: json, yaml, text, and junit. You can pass these values to any command via the --output flag. These output formats come in handy when you want to leverage the tool to build your own troubleshooting framework.
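For example, a minimal sketch of how the JSON report could feed your own tooling, assuming jq is installed and relying on the report structure shown in the usage examples below (it prints the names of all failed checks):

    # List the names of all failed checks from the JSON report
    ./uipathctl health check --output json | jq -r '.[] | to_entries[] | .value[] | select(.status == "failed") | .name'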

Usage examples

Usage:
    ./uipathctl health check --included DATASERVICE --output json
    ./uipathtools health check --included DATASERVICE --output json
Sample output:
    { "cluster/": { "DATASERVICE": [ { "name": "DATASERVICE_HEALTH", "description": "Application health check failed: health status is Progressing and sync status is Synced", "status": "failed" } ] } }

Usage:
    ./uipathctl health check --included DATASERVICE --output yaml
    ./uipathtools health check --included DATASERVICE --output yaml
Sample output:
    ? locationType: cluster : DATASERVICE: - name: DATASERVICE_HEALTH description: 'Application health check failed: health status is Progressing and sync status is Synced' status: failed

Usage:
    ./uipathctl health check --included DATASERVICE --output text
    ./uipathtools health check --included DATASERVICE --output text
Sample output:
    Checks run on cluster/
     ❌ [DATASERVICE]
        ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced

Usage:
    ./uipathctl health check --included DATASERVICE --output junit
    ./uipathtools health check --included DATASERVICE --output junit
Sample output:
    <testsuite name="Health" tests="1" errors="0" failures="1" time="0" timestamp="2023-06-22T01:59:08.313362+05:30" hostname=""> <testcase name="DATASERVICE_HEALTH" classname="" time="0"> <failure message="Application health check failed: health status is Progressing and sync status is Synced" type=""> </failure> </testcase> </testsuite>
