Automation Suite on Linux Installation Guide
Last updated Nov 14, 2024

Dashboards and metrics

We provide pre-built component-specific dashboards, which you can access in Grafana. For details on the components you can monitor, see Automation Suite architecture.

Some alerts are pre-configured on important metrics. You can find these configurations in the Alerts section in the Prometheus UI. It is your responsibility to configure alert receivers.

Accessing Grafana dashboard

To access Grafana dashboards, you must retrieve your credentials and use them to log in:

  • Username:

    kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-user}" | base64 -d; echo
  • Password:

    kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-password}" | base64 -d; echo
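If you prefer to capture both values in one go, a small shell sketch like the following works. The secret name and keys are taken from the commands above; the sample value at the end is a self-contained illustration of the base64 decoding step, not a real credential:

```shell
# Read the Grafana credentials into shell variables (requires access to the
# cluster; this part is skipped when kubectl cannot reach one).
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  GRAFANA_USER=$(kubectl -n monitoring get secrets/grafana-creds \
    -o "jsonpath={.data.admin-user}" | base64 -d)
  GRAFANA_PASS=$(kubectl -n monitoring get secrets/grafana-creds \
    -o "jsonpath={.data.admin-password}" | base64 -d)
  echo "user: $GRAFANA_USER"
fi

# The values in the secret are base64-encoded, which is why each command
# pipes through `base64 -d`; the trailing `echo` just adds a newline.
# Illustration with a sample value:
SAMPLE="YWRtaW4="                 # base64 encoding of "admin"
DECODED=$(echo "$SAMPLE" | base64 -d)
echo "$DECODED"                   # prints "admin"
```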

Automation Suite component dashboards

Monitoring the network

You can monitor the Automation Suite network via the following Grafana dashboards: Istio Mesh and Istio Workload. For details on how to access Grafana, see Accessing the monitoring tools.

The Istio-related dashboards are disabled by default. To enable the dashboards, take the following steps:

  1. Log into the ArgoCD UI and go to Monitoring App. For details on how to access the ArgoCD UI, see Accessing ArgoCD.

  2. Select Details, and then choose Parameters.

  3. Set the global.monitoringConfigure.enableEnhancedMonitoring.istio.enabled parameter to true.
Note:

If you reinstall or perform an upgrade, the configuration that enables the Istio dashboards is removed. Consequently, you must re-enable it to access the Istio dashboards.
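If you manage cluster configuration from scripts, the same parameter can in principle be set with the argocd CLI instead of the UI. This is a hedged sketch: the application name monitoring and the follow-up sync are assumptions, so confirm the actual application name with argocd app list in your cluster first:

```shell
# Assumed ArgoCD application name; verify with `argocd app list` first.
APP="monitoring"
PARAM="global.monitoringConfigure.enableEnhancedMonitoring.istio.enabled"

# Only attempt the change when the argocd CLI is present and logged in.
if command -v argocd >/dev/null 2>&1 && argocd app list >/dev/null 2>&1; then
  argocd app set "$APP" -p "$PARAM=true"   # set the override parameter
  argocd app sync "$APP"                   # apply the change
else
  echo "argocd CLI unavailable; set $PARAM=true in the ArgoCD UI instead"
fi
```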

Istio Mesh dashboard

This dashboard shows the overall request volume, as well as 400 and 500 error rates, across the entire service mesh, for the time period selected in the upper-right corner of the window.

It also shows the immediate Success Rate over the past minute for each individual service. Note that a Success Rate of NaN indicates the service is not currently serving traffic.
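The NaN comes from the arithmetic: the success rate is successful requests divided by total requests over the window, and with no traffic both counts are zero, making the ratio 0/0 undefined. A quick awk illustration with sample numbers (not real mesh data):

```shell
# Success rate = successful requests / total requests over the window.
awk 'BEGIN { ok=95; total=100; printf "serving traffic: %.1f%%\n", 100*ok/total }'
# With no traffic, both counts are 0 and the ratio 0/0 is undefined (NaN).
awk 'BEGIN { ok=0; total=0; print "no traffic: " (total ? 100*ok/total : "NaN") }'
```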

Istio Workload dashboard

This dashboard shows the traffic metrics over the time range selected in the upper-right corner of the window.

Use the selectors at the top of the dashboard to drill into specific workloads. Of particular interest is the uipath namespace.

The top section shows overall metrics, the Inbound Workloads section separates out traffic based on origin, and the Outbound Services section separates out traffic based on destination.

Monitoring storage

Monitoring Persistent Volumes

You can monitor persistent volumes via the Kubernetes / Persistent Volumes dashboard. You can keep track of the free and used space for each volume.

You can also check the status of each volume by clicking the PersistentVolumes item within the Storage menu of the Cluster Explorer.
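The same volume status and capacity information can also be pulled on the command line. A sketch with standard kubectl commands (requires access to the cluster):

```shell
# List volume capacity, status, and the claim each volume is bound to,
# when a cluster is reachable; otherwise fall back to a hint.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl get pv                     # capacity, status, bound claim per volume
  kubectl get pvc --all-namespaces   # claims and their requested sizes
  STATUS="listed"
else
  STATUS="no-cluster"
  echo "kubectl cannot reach a cluster; use the Kubernetes / Persistent Volumes dashboard instead"
fi
```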

Ceph cluster dashboard

Ceph is an open-source storage provider that exposes Amazon S3-compliant object/blob storage on top of persistent volumes created by Longhorn.

Monitoring hardware utilization

To check the hardware utilization per node, you can use the Nodes dashboard. Data on the CPU, Memory, Disk, and Network is available.

You can monitor the hardware utilization for specific workloads using the Kubernetes / Compute Resources / Namespace (Workloads) dashboard. Select the uipath namespace to get the needed data.
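For a quick command-line check of the same numbers, kubectl top works when the cluster's metrics API is available. A sketch (requires cluster access):

```shell
# Per-node CPU and memory usage (mirrors the Nodes dashboard), then per-pod
# usage in the uipath namespace (mirrors the Compute Resources dashboard).
# `kubectl top` needs the metrics API, so tolerate its absence.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl top nodes || echo "metrics API unavailable"
  kubectl top pods -n uipath || echo "metrics API unavailable"
  CHECKED="cluster"
else
  CHECKED="none"
  echo "kubectl cannot reach a cluster; use the Grafana dashboards instead"
fi
```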

Monitoring Kubernetes resource status

To see the status of pods, deployments, statefulsets, etc., you can use the Cluster Explorer UI. This is the same landing page as accessed after logging into the rancher-server endpoint. The homepage shows a summary, with drill downs into specific details for each resource type on the left. Note the namespace selector at the top of the page. This dashboard may also be replaced with the Lens tool.

Creating shareable visual snapshot of a Grafana chart

  1. Click the menu button next to the chart title, and then select Share.
  2. Click the Snapshot tab, and set the Snapshot name, Expire, and Timeout.
  3. Click Publish to snapshot.raintank.io.

For more details, see the Grafana documentation on sharing dashboards.

Note: This snapshot is viewable on the public Internet by anyone with the link.

Creating custom persistent Grafana dashboards

For details on how to create custom persistent Grafana dashboards, see the Grafana documentation.

Admin access to Grafana

Admin access to Grafana is not typically needed in Automation Suite clusters: dashboards are readable by anonymous users by default, and custom persistent dashboards must be created using the Kubernetes-native instructions linked above in this document.

Nonetheless, admin access to Grafana is possible with the instructions below.

The default username and password for Grafana admin access can be retrieved as follows:

kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-user}" | base64 -d; echo
kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-password}" | base64 -d; echo

Note that High Availability Automation Suite clusters run multiple Grafana pods, both to keep read access uninterrupted in case of node failure and to handle a higher volume of read queries. This is incompatible with admin access because the pods do not share session state, which logging in requires. To work around this, temporarily scale the number of Grafana replicas down to 1 for as long as admin access is needed, then scale it back up:

# scale down
kubectl scale -n monitoring deployment/monitoring-grafana --replicas=1
# scale up
kubectl scale -n monitoring deployment/monitoring-grafana --replicas=2
