Automation Suite
2023.10
Automation Suite on Linux installation guide
Last updated April 19, 2024

Running the diagnostics tool

The Automation Suite diagnostics tool runs a set of checks to generate a report on the cluster health, which you can analyze to identify issues and their potential root causes. The tool helps you find common issues, such as lost database connectivity or invalid or expired credentials.

The Automation Suite diagnostics tool is available in both uipathctl and uipathtools, which you can download to your management machine. For download instructions, see uipathtools.
uipathtools is a CLI tool that contains a subset of the uipathctl capabilities, specific to the health commands. The tool is backward compatible and works with any supported Automation Suite version. We recommend using uipathtools as a first step if you encounter an issue.

Quick validation

The check and test commands provide quick insights into the state of the cluster without running a deep analysis.
  • check relies on the ArgoCD health and sync status and does not modify any state in the cluster;
  • test looks into the applications, deployments, or pods, and temporarily mutates the state of the cluster to provide you with these insights.

Health check

To run a health check, use one of the following commands, depending on the CLI tool you use:

  • If you use uipathctl, run:
    ./uipathctl health check
  • If you use uipathtools, run:
    ./uipathtools health check

Sample output of the generated report:

Checks run on cluster/
 ✔ [NOTIFICATIONSERVICE]
    ✔ [NOTIFICATIONSERVICE_HEALTH] Application is healthy and in sync
 ✔ [ACTION_CENTER]
    ✔ [ACTIONCENTER_HEALTH] Application is healthy and in sync
 ❌ [SYNC]
    ❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
 ✔ [RELOADER]
    ✔ [RELOADER_HEALTH] Application is healthy and in sync
 ❌ [POD]
    ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
 ✔ [ISTIO]
    ✔ [LIST_PODS] Found 2 pods for Istio
    ✔ [ISTIOD_EXISTS] The Istio pods are present and running version - 
    ✔ [ISTIOD_READY] Istio pods are healthy
 ✔ [AIEVENTS]
    ✔ [AIEVENTS_HEALTH] Application is healthy and in sync
 ❌ [DATASERVICE]
    ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
 ✔ [PLATFORM]
    ✔ [PLATFORM_HEALTH] Application is healthy and in sync
 ✔ [TASK_MINING]
    ✔ [TASKMINING_HEALTH] Application is healthy and in sync
 ✔ [LOGGING]
    ✔ [LOGGING_HEALTH] Application is healthy and in sync
 ✔ [WEBHOOK]
    ✔ [WEBHOOK_HEALTH] Application is healthy and in sync

By default, the command checks the health of all components. However, it also allows you to check only the components you are interested in:
  • If you want to exclude components from the execution, use the --excluded flag. For example, if you do not want to check the health of SQL, run uipathctl health check --excluded SQL. The command checks the health of all components except SQL.
  • If you want to include only certain components in the execution, use the --included flag. For example, if you only want to check the health of DNS and the objectstore, run uipathctl health check --included DNS,OBJECTSTORAGE.
Note:

You can find the names of the components that you can include in or exclude from the health check in the sample output. In the example, the first word of each indented line represents the component name, for example SQL, OBJECTSTORE, or DNS.

Log analysis

  1. After running a health check, the logs show that the health check for the Data Service application failed:
    ❌ [DATASERVICE]
        ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
  2. Upon further investigation, it becomes clear that the Data Service application is failing because the dataservice-runtime-8f5bb7d56-v5krg and dataservice-taskrunner-787df76c74-98h5l pods are in a failed state. If you analyze further, you can find that the dataservice-external-storage-secret secret is missing:
    ❌ [POD]
        ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
  3. To fix this issue, make sure you provided the correct credentials for the objectstore in the input.json file; for more details, see the documentation on configuring input.json. You can also verify the missing secret directly, as shown in the sketch after this list.
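To confirm that the missing secret is indeed the root cause, and to verify the fix afterwards, you can query the cluster directly. This is a minimal sketch, assuming you have kubectl access to the cluster; the namespace, secret, and pod names are taken from the logs above:

    # Check whether the secret referenced by the failing pods exists
    kubectl -n uipath get secret dataservice-external-storage-secret

    # Inspect the volume mount events reported for one of the failing pods
    kubectl -n uipath describe pod dataservice-runtime-8f5bb7d56-v5krg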

Health test

To run a health test, use one of the following commands, depending on the CLI tool you use:

  • If you use uipathctl, run:
    ./uipathctl health test
  • If you use uipathtools, run:
    ./uipathtools health test

Sample output of the generated report:

Checks run on cluster/
 ✔ [GATEKEEPER]
    ✔ [CREATE_CONSTRAINT] Created test constraint
    ✔ [VERIFY] Constraint verified
    ✔ [CLEANUP] Cleaned up the test constraint
 ✔ [ACTION_CENTER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqk6b72
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqk6b72
    ✔ [CREATE_NAMESPACE] Created namespace prereqbxjx8
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqbxjx8
    ✔ [CREATE_NAMESPACE] Created namespace prereq8zvw4
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8zvw4
 ✔ [DATASERVICE]
    ✔ [CREATE_NAMESPACE] Created namespace prereqxwlsb
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxwlsb
    ✔ [CREATE_NAMESPACE] Created namespace prereq5szsn
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq5szsn
 ✔ [APPS]
    ✔ [CREATE_NAMESPACE] Created namespace prereq9z6nb
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq9z6nb
    ✔ [CREATE_NAMESPACE] Created namespace prereq6v7lm
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6v7lm
    ✔ [CREATE_NAMESPACE] Created namespace prereqxxn5v
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqxxn5v
 ✔ [AUTOMATION_HUB]
    ✔ [CREATE_NAMESPACE] Created namespace prereq4jkbt
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4jkbt
 ✔ [TEST_MANAGER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqnvvpc
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqnvvpc
 ✔ [ORCHESTRATOR]
    ✔ [CREATE_NAMESPACE] Created namespace prereq8pf2f
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq8pf2f
    ✔ [CREATE_NAMESPACE] Created namespace prereq4w4v4
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4w4v4
    ✔ [CREATE_NAMESPACE] Created namespace prereqkzwqg
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqkzwqg
 ✔ [INSIGHTS]
    ✔ [CREATE_NAMESPACE] Created namespace prereqqmgjc
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqqmgjc
    ✔ [CREATE_NAMESPACE] Created namespace prereq4vnjx
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq4vnjx
    ✔ [CREATE_NAMESPACE] Created namespace prereqgtg9g
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgtg9g
 ✔ [AUTOMATION_OPS]
    ✔ [CREATE_NAMESPACE] Created namespace prereqgkkrz
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqgkkrz
 ✔ [AICENTER]
    ✔ [CREATE_NAMESPACE] Created namespace prereqdls88
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqdls88
    ✔ [CREATE_NAMESPACE] Created namespace prereq6m7x9
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereq6m7x9
By default, the command runs health tests on all components. However, it also allows you to test only the components you are interested in:
  • If you want to exclude components from the execution, use the --excluded flag. For example, if you do not want to test the health of SQL, run uipathctl health test --excluded SQL. The command tests the health of all components except SQL.
  • If you want to include only certain components in the execution, use the --included flag. For example, if you only want to test the health of DNS and the objectstore, run uipathctl health test --included DNS,OBJECTSTORAGE.
Note:

You can find the names of the components that you can include in or exclude from the health test in the sample output. In the example, the first word of each indented line represents the component name, for example SQL, OBJECTSTORE, or DNS.

Note:
If you compare the output of the check and test commands for the Data Service application, you can see that the former validates the health of the application, whereas the latter checks the routing.

Known issue

You may get an error message similar to the following sample. You can ignore it, as no action is required on your side.

E0621 23:32:56.426321   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.426392   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.444420   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.446150   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded
E0621 23:32:56.513357   24470 reflector.go:138] external/io_k8s_client_go/tools/cache/reflector.go:167: Failed to watch *v1.Pod: context deadline exceeded

Deep validation

The diagnose command provides deep insights into the state of the cluster. It helps you identify issues at all levels, such as SQL, objectstore, node, secret, Istio, networking, and so on.
  • It covers both the check and test commands.
  • It runs the prerequisite checks performed before the Automation Suite installation, to validate changes made to the environment configuration after installation that may be the potential cause of the issue.
  • It runs on all the nodes to gather any node-specific issues, such as resource unavailability, network interference, and so on.

To run a diagnostics check, use one of the following commands, depending on the CLI tool you use:

  • If you use uipathctl, run:
    ./uipathctl health diagnose input.json --versions version.json
  • If you use uipathtools, run:
    ./uipathtools health diagnose input.json --versions version.json

Sample output of the generated report:

Checks run on nodes/aks-pool0-27031798-vmss000001
 ✔ [REDIS(PORT=6380)]
    ✔ [CONNECTIVITY] Successfully made Redis connection on ci-asaks4011056.redis.cache.windows.net:6380
 ✔ [OBJECTSTORAGE(PRODUCT=ORCHESTRATOR)]
    ✔ [CHECK_API] Object storage test passed for orchestrator
 ✔ [SQL(PRODUCT=PROCESSMINING, TYPE=ADO)]
    ✔ [EXECUTE_NATIVE] Successfully executed command
    ✔ [BUILD_CLIENT] Successfully built ADO client
    ✔ [CONNECT] Successfully connected ADO client to DB
    ✔ [DB_ROLES] SQL user has the required roles to DB
 ✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
 ✔ [DNS(FQDN=ALM.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_SUBDOMAIN] Resolved alm.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
 Checks run on cluster/
 ✔ [NODE]
    ✔ [NODE_EXISTS] 12 Nodes present in the cluster
    ✔ [NODE_READY] All the nodes are in ready state
 ✔ [GATEKEEPER]
    ✔ [GATEKEEPER_HEALTH] Application is healthy and in sync
    ✔ [CREATE_CONSTRAINT] Created test constraint
    ✔ [VERIFY] Constraint verified
    ✔ [CLEANUP] Cleaned up the test constraint
 ✔ [LOGGING]
    ✔ [LOGGING_HEALTH] Application is healthy and in sync
 ✔ [DATASERVICE]
    ✔ [CREATE_NAMESPACE] Created namespace prereqctzhp
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqctzhp
 ✔ [ROBOTUBE]
    ✔ [ROBOTUBE_HEALTH] Application is healthy and in sync
 ✔ [AIRFLOW]
    ✔ [AIRFLOW_HEALTH] Application is healthy and in sync
 ✔ [ARGOCD]
    ✔ [ARGOCD_SERVER_PODS] Component argocd-server has ready Pods
    ✔ [ARGOCD_REPO_SERVER_PODS] Component argocd-repo-server has ready Pods
    ✔ [ARGOCD_APP_CONTROLLER_PODS] Component argocd-application-controller has ready Pods
    ✔ [ARGOCD_REDIS_PODS] Component redis-ha has ready Pods
 ✔ [ISTIO]
    ✔ [LIST_PODS] Found 2 pods for Istio
    ✔ [ISTIOD_EXISTS] The Istio pods are present and running version - 
    ✔ [ISTIOD_READY] Istio pods are healthy
 ✔ [AICENTER]
    ✔ [AICENTER_HEALTH] Application is healthy and in sync
    ✔ [CREATE_NAMESPACE] Created namespace prereqn6sqn
    ✔ [CREATE_POD] Created test pod curl-pod in namespace prereqn6sqn
Checks run on local/
 ✔ [CONNECTIVITY]
    ✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-4rffj's IP 10.240.1.86 on aks-pool0-27031798-vmss000002
    ✔ [OVERLAY_CONNECTIVITY_TEST] echo-a-4rffj on aks-pool0-27031798-vmss000002 can reach echo-a-8c6t5's IP 10.240.3.57 on aks-pool3-27031798-vmss000000
    ✔ [POD_TO_A] Scenario: http check between two random pods completed successfully
    ✔ [POD_TO_B_MULTI_NODE_CLUSTERIP] Scenario: http check between from pod to a multinode ClusterIP completed successfully
    ✔ [POD_TO_B_MULTI_NODE_HEADLESS] Scenario: http check between from pod to a multinode ClusterIP without a clusterIP set completed successfully
    ✔ [POD_TO_B_INTRA_NODE_CLUSTERIP] Scenario: http check between from two pods colocated on the same node via ClusterIP completed successfully
 ✔ [INGRESS]
    ✔ [INGRESS_GATEWAY_FOUND] Found service istio-ingressgateway in the cluster
    ✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on http://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com
    ✔ [INGRESS_GATEWAY_PORT_CHECK] Service istio-ingressgateway is configured to allow traffic on https://ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com:443
 ✔ [OSS(COMPONENT=MONITORING)]
    ✔ [OSS(component=monitoring)] Check for component monitoring passed
 ✔ [OSS(COMPONENT=GATEKEEPER)]
    ✔ [OSS(component=gatekeeper)] Check for component gatekeeper passed
 ✔ [STORAGECLASS(NAME=STORAGE_CLASS_SINGLE_REPLICA)]
    ✔ [STORAGE_CLASS_EXISTS] Storage class azurefile-csi exists
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [CREATE_NAMESPACE] Created namespace prereqhcpkc
    ✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-5n272
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool3-27031798-vmss000001
    ✔ [POD_RUNNING] Found one pod running on node aks-pool3-27031798-vmss000001
 ✔ [REGISTRY]
    ✔ [CONNECTIVITY] Successfully made Registry connection on sfbrdevhelmweacr.azurecr.io
 ✔ [NETWORK-POLICIES]
    ✔ [CREATE_NAMESPACE] Namespace prereqw4t9b created
    ✔ [CREATE_EGRESS_NETWORK_POLICY] Created the egress network policies allow-coredns-egress and block-external-traffic
    ✔ [CREATE_INGRESS_NETWORK_POLICY] Created the ingress network policy: block-echo-server-ingress
    ✔ [CREATE_SERVICE] Service echo-server-svc created
 ✔ [STORAGECLASS(NAME=STORAGE_CLASS)]
    ✔ [STORAGE_CLASS_EXISTS] Storage class managed-premium exists
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [CREATE_NAMESPACE] Created namespace prereqgjhcb
    ✔ [CREATE_STATEFULSET] Created statefulset storage-class-check-nm9th
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000003
    ✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000003
    ✔ [LIST_PODS] Listed 1 pods on node aks-pool0-27031798-vmss000001
    ✔ [POD_RUNNING] Found one pod running on node aks-pool0-27031798-vmss000001
 ✔ [DNS(FQDN=INSIGHTS.<FQDN>)]
    ✔ [VALIDATE_FQDN] FQDN is valid
    ✔ [RESOLVE_TOP_DOMAIN] Resolved ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [RESOLVE_SUBDOMAIN] Resolved insights.ci-asaks4011056.infra-sf-ea.infra.uipath-dev.com to [{20.71.155.129 }]
    ✔ [IPS_MATCH] Subdomain resolves to top domain
 ✔ [NODE(CPU >= 8, RAM >= 16GI)]
    ✔ [LIST_NODES] Listed 12 nodes
    ✔ [AT_LEAST_ONE_NODE] At least one node found
    ✔ [CPU_USAGE] Node aks-pool0-27031798-vmss000000 has 12.50% CPU usage
    ✔ [MEMORY_USAGE] Node aks-pool0-27031798-vmss000000 has 38.27% memory usage
    ✔ [POD_USAGE] Node aks-pool0-27031798-vmss000000 has 40.00% of pods in use. Number of pods: 40.00 max allowed: 100.00
 ✔ [OSS(COMPONENT=CERT-MANAGER)]
    ✔ [OSS(component=cert-manager)] Check for component cert-manager passed
 ✔ [RESOURCE]
    ✔ [Capacity] Automation suite already installed on cluster
 ✔ [OSS(COMPONENT=LOGGING)]
    ✔ [OSS(component=logging)] Check for component logging passed
 ✔ [GPU(PRODUCT=DOCUMENTUNDERSTANDING)]
    ✔ [BASIC_GPU_SUCCESS] Was able to start a CUDA job on a GPU node
Checks run on cluster/
 ❌ [DATASERVICE]
    ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
 ❌ [ISTIO]
    ✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
    ❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
    ✔ [ISTIO_SERVICEMESH_VALIDATION_GET_REGISTRY_FQDN] Successfully retrieved registry url
    ✔ [ISTIO_SERVICEMESH_VALIDATION_GET_CLUSTER_FQDN] Successfully retrieved cluster fqdn
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_DEPLOYMENT] Successfully created the test deployment istio-validation-deployment
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_SERVICE] Successfully created the test service istio-validation-service
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_GATEWAY] Successfully created the test gateway istio-validation-gateway
    ✔ [ISTIO_SERVICEMESH_VALIDATION_CREATE_TEST_VIRTUALSERVICE] Successfully created the test virtual service istio-validation-vs
    ✔ [ISTIO_SERVICEMESH_VALIDATION_URL_ACCESS] Success exposing the service via servicemesh
 ❌ [POD]
    ✔ [LIST_NAMESPACES] Retrieved 25 namespaces to check pod health
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/ah-tenant-service-sync-insights-data-job-28122960-p6rzg cannot mount volume: MountVolume.SetUp failed for volume "ah-insights-secrets" : failed to sync secret cache: timed out waiting for the condition
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
    ❌ [POD_UNHEALTHY] Latest event for pod uipath/du-documentmanager-dm-maintenance-cron-28122960-4sm5z: Error: failed to sync configmap cache: timed out waiting for the condition
 ❌ [SYNC]
    ❌ [namespace:"argocd" | kind:"Application" | name:"dataservice"] Application health check failed: health status is Progressing and sync status is Synced
Note:
The preceding sample is trimmed down; the actual logs contain more information. You may notice that the diagnose command runs at multiple levels, such as infrastructure, networking, storage, pods, DNS, and so on.

Log analysis

You can notice two potential issues in the previous logs:

  • Istio has a misconfiguration, which may cause issues in accessing the Document Understanding platform:
    ❌ [ISTIO]
        ✔ [ISTIO_SYNC_STATUS] Istio sync is up-to-date
        ❌ [ISTIO_ENVOY_CONFIG_STATUS] Istio Envoy configs are not healthy: Error [IST0101] (VirtualService uipath/du-platform-vs) Referenced host:port not found: "aistorage:5000"
  • Data Service is not available. See Ceph in the code sample:
    ❌ [DATASERVICE]
        ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-v5krg cannot mount volume: (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[external-storage-creds], unattached volumes=[workload-socket is-secrets openssl istio-podinfo temp-location cert-location istio-data external-storage-creds workload-certs istio-envoy java domain-cert-config edk2 credential-socket tmp additional-ca-cert-config pem istiod-ca-cert istio-token app-secrets ceph-storage-creds]: timed out waiting for the condition
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-runtime-8f5bb7d56-xs9t5 cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found
        ❌ [CANNOT_MOUNT_VOLUME] Pod uipath/dataservice-taskrunner-787df76c74-98h5l cannot mount volume: MountVolume.SetUp failed for volume "external-storage-creds" : secret "dataservice-external-storage-secret" not found

Known issues

You may get an error message similar to the following sample. You can ignore it, as no action is required on your side.

I0622 01:31:28.917107   28815 request.go:601] Waited for 1.017599292s due to client-side throttling, not priority and fairness, request: GET:https://ci-asaks4011056-fwwpyxm7.hcp.westeurope.azmk8s.io:443/apis/networking.istio.io/v1alpha3

Additional utilities

All the Automation Suite diagnostics tool commands (check, test, and diagnose) support additional filtering and output formats.

Filtering

--included
Description: Comma-separated list of the services to include in the validation.
Usage: ./uipathctl health diagnose input.json --versions version.json --included ISTIO,INSIGHTS
This command runs the diagnostics only on Istio and Insights.

--excluded
Description: Comma-separated list of the services to exclude from the validation.
Usage: ./uipathctl health test --excluded ISTIO,INSIGHTS
This command runs the test on the whole cluster, except on Istio and Insights.

Output format

The Automation Suite diagnostics tool can generate reports in multiple formats: json, yaml, text, and junit. You can pass these values to any command via the --output flag. These output formats come in handy when you want to leverage these tools to build your own troubleshooting framework.

Usage examples

Usage:
./uipathctl health check --included DATASERVICE --output json
./uipathtools health check --included DATASERVICE --output json

Sample output:
{ "cluster/": { "DATASERVICE": [ { "name": "DATASERVICE_HEALTH", "description": "Application health check failed: health status is Progressing and sync status is Synced", "status": "failed" } ] } }

Usage:
./uipathctl health check --included DATASERVICE --output yaml
./uipathtools health check --included DATASERVICE --output yaml

Sample output:
? locationType: cluster
: DATASERVICE:
    - name: DATASERVICE_HEALTH
      description: 'Application health check failed: health status is Progressing and sync status is Synced'
      status: failed

Usage:
./uipathctl health check --included DATASERVICE --output text
./uipathtools health check --included DATASERVICE --output text

Sample output:
Checks run on cluster/
 ❌ [DATASERVICE]
    ❌ [DATASERVICE_HEALTH] Application health check failed: health status is Progressing and sync status is Synced

Usage:
./uipathctl health check --included DATASERVICE --output junit
./uipathtools health check --included DATASERVICE --output junit

Sample output:
<testsuite name="Health" tests="1" errors="0" failures="1" time="0" timestamp="2023-06-22T01:59:08.313362+05:30" hostname="">
    <testcase name="DATASERVICE_HEALTH" classname="" time="0">
        <failure message="Application health check failed: health status is Progressing and sync status is Synced" type="">
        </failure>
    </testcase>
</testsuite>

Reading the diagnostics reports

INFO logs

The INFO logs displayed in green indicate that the required checks passed. However, you should still double-check the disk and memory usage to avoid hidden errors.

WARN messages

Even though these messages do not flag a high risk, you may need to rectify them, as they can affect some services in certain scenarios.

ERROR messages

You must fix the issues these messages describe, as they affect some services in the cluster.

Rke2-server or rke2-agent service stopped

If either of these services is down, the node is down. Try restarting the service using systemctl restart <service-name>, as this should fix the issue.
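A minimal sketch of how you can inspect and restart the service on the affected node (rke2-server runs on server nodes; use rke2-agent on agent nodes):

    systemctl status rke2-server                     # current state of the service
    journalctl -u rke2-server --since "1 hour ago"   # recent logs, to understand why it stopped
    sudo systemctl restart rke2-server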

Size of the directory mounted at /var/lib

The report displays the size of the directory mounted at /var/lib because Kubernetes uses it to store its data. If the directory is full, various issues can occur. To prevent these issues, make sure to increase its size.
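You can check the current usage on each node with standard Linux tools; a quick sketch (rke2 stores its data under /var/lib/rancher):

    df -h /var/lib                 # free space on the filesystem backing /var/lib
    sudo du -sh /var/lib/rancher   # space consumed by rke2/Kubernetes data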

Rke2 version

The report displays the rke2 version for reference.

Disk pressure or memory pressure

For all the nodes, the report specifies whether they are under disk pressure or memory pressure. If this happens, workloads on these nodes may start experiencing issues. Check whether any other processes are running on these nodes and consuming resources, and remove them if that is the case.
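You can confirm the pressure conditions the report flags directly from Kubernetes; a minimal sketch, with <node-name> as a placeholder:

    kubectl get nodes                                                           # overall node status
    kubectl describe node <node-name> | grep -iE 'DiskPressure|MemoryPressure'  # node pressure conditions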

Ceph services status

We use Ceph as S3 object storage to store the logs and files of different applications. You can see the status of its services in the report. If they are down, you may need to restart them. Also make sure to check whether the Ceph disk usage is full.
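A sketch of how to inspect Ceph from inside the cluster, assuming the object store is deployed through Rook in the rook-ceph namespace with the standard rook-ceph-tools toolbox (names may differ in your environment):

    kubectl -n rook-ceph get pods                                   # status of the Ceph services
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph df     # check whether the Ceph storage is full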

Ports 443 and 31443

Ports 443 and 31443 must be open for the provided hostname. The report indicates if they are not accessible. If that happens, make sure to open the appropriate ports.
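You can verify reachability from a client machine with standard tools; a quick sketch, with <fqdn> standing in for your cluster hostname:

    nc -zv <fqdn> 443
    nc -zv <fqdn> 31443
    curl -skI https://<fqdn>/    # expect an HTTP response if the ingress is reachable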

Certificate validity

The tool checks whether the uploaded certificate is valid for the given hostname and has not expired. If the certificate does not meet these criteria, errors occur. To avoid this, make sure to check your uploaded certificate and replace it if needed.
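A minimal sketch for inspecting the certificate actually served on the hostname, using openssl (<fqdn> is a placeholder):

    echo | openssl s_client -connect <fqdn>:443 -servername <fqdn> 2>/dev/null \
      | openssl x509 -noout -subject -dates    # the subject must match the hostname and notAfter must be in the future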

GPU

Since some services require a GPU on some of the cluster nodes, the diagnostics tool checks whether there are any GPU nodes and prints their count. If you expect GPU nodes to be present and they do not show up here, something went wrong during the GPU setup.
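You can list which nodes advertise GPU capacity to Kubernetes; a sketch assuming the NVIDIA device plugin exposes the standard nvidia.com/gpu resource:

    kubectl get nodes -o custom-columns="NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"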

RabbitMQ and DockerRegistry

RabbitMQ and DockerRegistry are two important components that some services use. If either of them is down, you need to investigate the issue and restart the component.
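A sketch for checking both components, assuming the default rabbitmq and docker-registry namespaces (verify the namespace names in your deployment):

    kubectl -n rabbitmq get pods
    kubectl -n docker-registry get pods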

ArgoCD services down

ArgoCD is our application lifecycle management (ALM) tool. If any of its services are down, other applications may become outdated or run into other issues. Recovering these services is important and may require further debugging.
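To see which ArgoCD components are affected, you can list the workloads in the argocd namespace; the component names match those in the diagnostics report, such as argocd-server and argocd-repo-server:

    kubectl -n argocd get pods
    kubectl -n argocd get deployments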

Missing or degraded ArgoCD applications

The Automation Suite diagnostics tool indicates whether any ArgoCD applications are missing or degraded; you can also list them from the command line, as shown in the sketch after this list.

  • If any applications are missing, go to the ArgoCD UI and sync them.
  • If any applications are degraded, further debugging is necessary to investigate the errors that ArgoCD throws.
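A sketch for listing the applications and their health and sync status from the command line, using the Application custom resource in the argocd namespace (the same namespace/kind that appears in the report's SYNC check):

    kubectl -n argocd get applications
    kubectl -n argocd get application dataservice -o jsonpath='{.status.health.status} {.status.sync.status}{"\n"}'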
