Automation Suite Installation Guide
Setting up Elasticsearch and Kibana
The EFK (Elasticsearch, Fluentd, Kibana) stack is a centralized logging solution that allows you to search, analyze, and visualize log data. Fluentd collects the logs and sends them to Elasticsearch; Kibana retrieves the logs and lets you visualize and analyze the data.
Automation Suite supports Elasticsearch version 7.x. Version 8.x is also supported, but only for advanced configurations.
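If you are unsure which version your instance runs, you can query the Elasticsearch root endpoint, which returns version.number in its JSON response. This is a quick sketch; add -u elastic:<password> if your instance has security enabled:
# Query the Elasticsearch root endpoint; the response includes version.number
curl -sk <http or https>://<elasticsearch host>:<elasticsearch port>/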
If your Elasticsearch instance requires credentials, create a secret with its password in the cluster.
kubectl -n cattle-logging-system create secret generic elastic-user --from-literal=password=<password>
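To confirm the secret was stored correctly, you can read it back and decode it. This is a quick sanity check using standard kubectl and base64; nothing here is specific to Automation Suite:
# Read the secret back and decode the base64-encoded password
kubectl -n cattle-logging-system get secret elastic-user -o jsonpath='{.data.password}' | base64 -d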
Run the following command to apply the ClusterOutput to Elasticsearch. Replace the following attributes with the values of your Elasticsearch configuration:
- <elasticsearch host> - the network host of your Elasticsearch instance;
- <elasticsearch port> - the Elasticsearch port for client communication;
- <secret key> - the secret with the Elasticsearch password;
- the timekey value in elasticsearch.buffer - the output frequency, i.e. how often you want to push logs;
- elasticsearch.scheme - the URL scheme. Valid values are: http or https.
kubectl -n cattle-logging-system apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: es-output
spec:
  elasticsearch:
    host: <elasticsearch host>
    port: <elasticsearch port>
    scheme: <http or https>
    ssl_verify: false
    ssl_version: TLSv1_2
    user: elastic
    password:
      valueFrom:
        secretKeyRef:
          name: elastic-user
          key: <secret key>
    buffer:
      timekey: 10m
      timekey_wait: 30s
      timekey_use_utc: true
EOF
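Once applied, you can check that the logging operator accepted the resource. The get and describe verbs below are standard kubectl; the exact status columns shown for a ClusterOutput depend on your logging operator version:
# Confirm the ClusterOutput exists and inspect its status for problems
kubectl -n cattle-logging-system get clusteroutput es-output
kubectl -n cattle-logging-system describe clusteroutput es-output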
Run the following command to apply the ClusterFlow in Fluentd:
kubectl -n cattle-logging-system apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: es-flow
spec:
  filters:
    - tag_normaliser:
        format: ${namespace_name}/${pod_name}.${container_name}
  globalOutputRefs:
    - es-output
  match:
    - select:
        container_names:
          - istio-proxy
        namespaces:
          - istio-system
    - exclude:
        container_names:
          - istio-proxy
          - istio-init
          - aicenter-hit-count-update
    - exclude:
        namespaces:
          - fleet-system
          - cattle-gatekeeper-system
          - default
    - exclude:
        labels:
          app: csi-snapshotter
    - exclude:
        labels:
          longhorn.io/job-task: backup
    - exclude:
        labels:
          app: csi-resizer
    - select: {}
EOF
Logs from the cluster are collected and logged to the ClusterOutput.
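To verify end to end that logs reach Elasticsearch, check the ClusterFlow status, watch Fluentd for delivery errors, and list the indices on the Elasticsearch side. The Fluentd label selector and the credentials flag below are assumptions to adapt to your deployment:
# Check that the ClusterFlow is active
kubectl -n cattle-logging-system get clusterflow es-flow
# Tail the Fluentd pods for delivery errors (label selector may vary by operator version)
kubectl -n cattle-logging-system logs -l app.kubernetes.io/name=fluentd --tail=50
# List indices on the Elasticsearch side; newly created Fluentd indices should appear
curl -sk -u elastic:<password> '<http or https>://<elasticsearch host>:<elasticsearch port>/_cat/indices?v'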