Dashboards and metrics
We provide pre-built component-specific dashboards, which you can access in Grafana. For details on the components you can monitor, see Automation Suite architecture.
Some alerts are pre-configured on important metrics. You can find these configurations in the Alerts section in the Prometheus UI. It is your responsibility to configure alert receivers.
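If the monitoring endpoints are not yet reachable through your load balancer, you can also reach Grafana from a workstation by port-forwarding its service with kubectl. This is only a sketch: the service name monitoring-grafana and its port are assumptions (they mirror the deployment name used later on this page), so list the services first and adjust as needed.
# List the services in the monitoring namespace to confirm the Grafana service name (assumed: monitoring-grafana)
kubectl -n monitoring get svc
# Forward local port 3000 to the Grafana service, then open http://localhost:3000 in a browser
kubectl -n monitoring port-forward svc/monitoring-grafana 3000:80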
To access Grafana dashboards, you must retrieve your credentials and use them to log in:
- Username:
kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-user}" | base64 -d; echo
- Password:
kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-password}" | base64 -d; echo
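If you plan to script against Grafana, you can capture the same credentials in shell variables; a small convenience sketch using the commands above:
# Store the Grafana credentials in shell variables for scripted use
GRAFANA_USER=$(kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-user}" | base64 -d)
GRAFANA_PASSWORD=$(kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-password}" | base64 -d)
echo "Grafana user: $GRAFANA_USER"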
You can monitor the Automation Suite network via the following Grafana dashboards: Istio Mesh and Istio Workload. For details on how to access Grafana, see Accessing the monitoring tools.
The Istio-related dashboards are disabled by default. To enable them, take the following steps:
- Log in to the ArgoCD UI and go to the Monitoring App. For details on how to access the ArgoCD UI, see Accessing ArgoCD.
- Select Details, and then choose Parameters.
- Set the global.monitoringConfigure.enableEnhancedMonitoring.istio.enabled parameter to true.
If you reinstall or perform an upgrade, the configuration that enables the Istio dashboards is removed. Consequently, you must re-enable it to access the Istio dashboards.
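As an alternative to the ArgoCD UI, the same parameter can be set from the argocd CLI. This is a sketch only: it assumes you are already logged in to the argocd CLI and that the monitoring application is named monitoring; verify the name with argocd app list.
# Set the parameter on the monitoring application (application name is an assumption; confirm with: argocd app list)
argocd app set monitoring -p global.monitoringConfigure.enableEnhancedMonitoring.istio.enabled=true
# Sync the application so the change takes effect
argocd app sync monitoring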
Istio Mesh dashboard
This dashboard shows the overall request volume, as well as 400 and 500 error rates, across the entire service mesh for the time period selected in the upper-right corner of the window.
It also shows the immediate Success Rate over the past minute for each individual service. Note that a Success Rate of NaN indicates the service is not currently serving traffic.
Istio Workload dashboard
This dashboard shows the traffic metrics over the time range selected in the upper-right corner of the window.
Use the selectors at the top of the dashboard to drill into specific workloads. Of particular interest is the uipath namespace.
The top section shows overall metrics, the Inbound Workloads section separates out traffic based on origin, and the Outbound Services section separates out traffic based on destination.
Monitoring Persistent Volumes
You can monitor persistent volumes via the Kubernetes / Persistent Volumes dashboard. You can keep track of the free and used space for each volume.
You can also check the status of each volume by clicking the PersistentVolumes item within the Storage menu of the Cluster Explorer.
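If you prefer the command line to the Cluster Explorer, the standard kubectl commands return the same status information; a quick-check sketch:
# List all persistent volumes and their status (Bound, Released, and so on)
kubectl get pv
# List the persistent volume claims in the uipath namespace
kubectl -n uipath get pvc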
Ceph cluster dashboard
Ceph is an open-source storage provider that exposes Amazon S3-compliant object/blob storage on top of persistent volumes created by Longhorn.
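Besides the dashboard, you can query Ceph health directly from inside the cluster. The sketch below assumes the Rook toolbox deployment rook-ceph-tools exists in the rook-ceph namespace, which is the usual Rook convention; adjust the names if your cluster differs.
# Check the overall Ceph cluster health (assumes the rook-ceph-tools toolbox deployment)
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
# Show capacity usage per pool
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph df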
Monitoring hardware utilization
To check the hardware utilization per node, you can use the Nodes dashboard. Data on the CPU, Memory, Disk, and Network is available.
You can monitor the hardware utilization for specific workloads using the Kubernetes / Compute Resources / Namespace (Workloads) dashboard. Select the uipath namespace to get the needed data.
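For a quick command-line view of the same utilization data, you can use kubectl top, provided the metrics API is available in the cluster:
# Per-node CPU and memory usage
kubectl top nodes
# Per-pod usage for the UiPath workloads
kubectl top pods -n uipath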
Monitoring Kubernetes resource status
To see the status of pods, deployments, stateful sets, and other resources, you can use the Cluster Explorer UI. This is the same landing page that you reach after logging in to the rancher-server endpoint. The homepage shows a summary, with drill-downs into specific details for each resource type on the left. Note the namespace selector at the top of the page. This dashboard may also be replaced with the Lens tool.
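The same status information is also available from kubectl if you do not want to use the Cluster Explorer; a minimal sketch for the uipath namespace:
# Summarize pods, deployments, and stateful sets in the uipath namespace
kubectl -n uipath get pods,deployments,statefulsets
# Show only pods that are not in the Running phase
kubectl -n uipath get pods --field-selector=status.phase!=Running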
Creating a shareable visual snapshot of a Grafana chart
To create a shareable visual snapshot of a Grafana chart, take the following steps:
- Click the menu button next to the chart title, and then select Share.
- Click the Snapshot tab, and set the Snapshot name, Expire, and Timeout.
- Click Publish to snapshot.raintank.io.
For more details, see the Grafana documentation on sharing dashboards.
Creating custom persistent Grafana dashboards
For details on how to create custom persistent Grafana dashboards, see the Grafana documentation.
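As an illustration only: in kube-prometheus-stack-style deployments, persistent dashboards are typically provisioned by creating a ConfigMap labeled grafana_dashboard so that the Grafana sidecar picks it up. The label, the namespace, and the assumption that the sidecar is enabled must be verified against your cluster, and my-dashboard.json is a hypothetical dashboard file exported from Grafana.
# Wrap an exported dashboard JSON file (hypothetical: my-dashboard.json) in a ConfigMap
kubectl -n monitoring create configmap my-custom-dashboard --from-file=my-dashboard.json
# Label it so the Grafana dashboard sidecar (if enabled) loads it automatically
kubectl -n monitoring label configmap my-custom-dashboard grafana_dashboard=1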
Admin access to Grafana
Admin access to Grafana is not typically needed in Automation Suite clusters: dashboards are readable by anonymous users by default, and custom persistent dashboards must be created using the Kubernetes-native instructions linked above.
Nonetheless, you can get admin access to Grafana by following the instructions below.
The default username and password for Grafana admin access can be retrieved as follows:
kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-user}" | base64 -d; echo
kubectl -n monitoring get secrets/grafana-creds -o "jsonpath={.data.admin-password}" | base64 -d; echo
Note that High Availability Automation Suite clusters run multiple Grafana pods to provide uninterrupted read access in case of node failure and to handle a higher volume of read queries. This setup is incompatible with admin access because the pods do not share session state, which logging in requires. To work around this, temporarily scale the number of Grafana replicas down to 1 while admin access is needed. See below for instructions on how to scale the number of Grafana replicas:
# scale down
kubectl scale -n monitoring deployment/monitoring-grafana --replicas=1
# scale up
kubectl scale -n monitoring deployment/monitoring-grafana --replicas=2
Available metrics
You can search for available metrics in the Prometheus UI. For documentation on individual metrics, refer to the upstream documentation of the components that expose them.
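If you want to query metrics from the command line rather than the Prometheus UI, you can port-forward the Prometheus service and use its HTTP API. The service name varies by installation, so the sketch first discovers it; the istio_requests_total query is only illustrative.
# Find the Prometheus service name in the monitoring namespace (names vary by installation)
kubectl -n monitoring get svc | grep -i prometheus
# Forward the Prometheus API port locally; replace <prometheus-service> with the name found above
kubectl -n monitoring port-forward svc/<prometheus-service> 9090:9090
# List all metric names through the HTTP API
curl -s "http://localhost:9090/api/v1/label/__name__/values"
# Run an instant query, for example against the Istio request counter
curl -s "http://localhost:9090/api/v1/query" --data-urlencode 'query=sum(rate(istio_requests_total[5m]))'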