Automation Suite on EKS/AKS Installation Guide
How to forward application logs to Splunk
Note: This section covers exporting pod logs. For exporting robot logs, see Orchestrator - About Logs.
Note: Splunk is an external tool, and UiPath® does not have an opinion on how you should configure your Splunk settings. For more details about the HTTP Event Collector, see the official Splunk documentation.
The Splunk-Fluentd stack is a centralized logging solution that allows you to search, analyze, and visualize log data. Fluentd collects and sends the logs to Splunk. Splunk retrieves the logs and lets you visualize and analyze the data.
Create a Kubernetes secret with the HTTP Event Collector (HEC) token generated in the Splunk UI. This token is used for authentication between Automation Suite and Splunk.
kubectl -n logging create secret generic splunk-hec-token --from-literal=splunk_hec_token=<splunk_hec_token>
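Optionally, you can confirm that the secret exists before continuing. This is a standard kubectl check, not part of the documented procedure:
# Verify the secret created above is present in the logging namespace
kubectl -n logging get secret splunk-hec-token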
A ClusterOutput defines where your logs are sent and describes the configuration and authentication details. To configure the ClusterOutput for Splunk, run the following command:
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: splunk-output
spec:
  splunkHec:
    buffer:
      tags: '[]'
      timekey: <splunk_hec_timekey>
      timekey_use_utc: true
      timekey_wait: 10s
      type: file
    hec_host: <splunk_hec_host>
    hec_port: <splunk_hec_port>
    hec_token:
      valueFrom:
        secretKeyRef:
          key: splunk_hec_token
          name: splunk-hec-token
    index: <splunk_hec_index>
    insecure_ssl: true
    protocol: <splunk_hec_protocol>
    source: <splunk_hec_source>
    sourcetype: <splunk_hec_source_type>
EOF
Replace the attributes between angle brackets (< >) with the corresponding values used in your Splunk configuration. For details, see the following table:
Attribute | Description
---|---
splunk_hec_host | The network host of your Splunk instance. This is usually the IP address or FQDN of Splunk.
splunk_hec_port | The Splunk port for client communication. This port usually differs from the port on which you launch the Splunk dashboard. The conventional HEC port for Splunk is 8088.
splunk_hec_token | The secret key of the Splunk token. This is the name of the key in the secret you created in the previous step, which holds the Splunk HEC token. The presented manifest already contains the key splunk_hec_token; if you have not altered the command used to create the secret, you do not need to change this value.
splunk_hec_timekey value in splunkHec.buffer | The output frequency, or how often you want to push logs. We recommend using a 30-second (30s) interval.
splunk_hec_protocol | The URL protocol. Valid values are http and https. You must use the https protocol if you have SSL communication enabled on Splunk.
splunk_hec_index | The identifier for the Splunk index, used to index events.
splunk_hec_source | The source field for events.
splunk_hec_source_type | The source type field for events.
The following example is based on the configuration presented on this page.
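A minimal sketch of such a filled-in ClusterOutput is shown below, assuming a hypothetical Splunk HEC endpoint at splunk.example.com:8088 over HTTPS and an index named main; all of these values are illustrative and must be replaced with your own Splunk configuration:
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: splunk-output
spec:
  splunkHec:
    buffer:
      tags: '[]'
      timekey: 30s                  # recommended output frequency
      timekey_use_utc: true
      timekey_wait: 10s
      type: file
    hec_host: splunk.example.com    # illustrative FQDN of the Splunk instance
    hec_port: 8088                  # conventional HEC port
    hec_token:
      valueFrom:
        secretKeyRef:
          key: splunk_hec_token
          name: splunk-hec-token
    index: main                     # illustrative Splunk index
    insecure_ssl: true
    protocol: https
    source: automation-suite        # illustrative source field
    sourcetype: kube-logs           # illustrative source type field
EOF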
Create a ClusterFlow to define:
- the logs you want to collect and filter;
- the ClusterOutput to send the logs to.
To configure ClusterFlow in Fluentd, run the following command:
kubectl -n logging apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: splunk-flow
  namespace: logging
spec:
  filters:
    - tag_normaliser:
        format: ${namespace_name}/${pod_name}.${container_name}
  globalOutputRefs:
    - splunk-output
  match:
    - select:
        container_names:
          - istio-proxy
        namespaces:
          - istio-system
    - exclude:
        container_names:
          - istio-proxy
          - istio-init
          - aicenter-hit-count-update
          - istio-configure-executor
          - on-prem-tenant-license-update
          - curl
          - recovery
          - aicenter-oob-scheduler
          - cert-trustor
    - exclude:
        namespaces:
          - default
    - exclude:
        labels:
          app: csi-snapshotter
    - exclude:
        labels:
          app: csi-resizer
    - select: {}
EOF
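After applying both manifests, you can optionally confirm that the logging operator has registered them. These are standard kubectl queries against the logging.banzaicloud.io CRDs used in the manifests above, not part of the documented procedure:
# ClusterOutput and ClusterFlow are cluster-scoped resources
kubectl get clusteroutputs.logging.banzaicloud.io
kubectl get clusterflows.logging.banzaicloud.io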
If, for some reason, the application logs are not pushed to Splunk, take the following steps:
- Change the Fluentd log level to debug:
  kubectl patch logging -n logging logging-operator-logging --type=json -p '[{"op":"add","path":"/spec/fluentd/logLevel","value":"debug"}]'
- Query the Fluentd pod:
  kubectl -n logging exec -it sts/logging-operator-logging-fluentd -- cat /fluentd/log/out
  Note: The Fluentd logs should indicate the cause of data not being pushed to Splunk.
- After fixing the issue, restore the Fluentd log level:
  kubectl patch logging -n logging logging-operator-logging --type=json -p '[{"op":"remove","path":"/spec/fluentd/logLevel"}]'
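If Fluentd looks healthy but events still do not arrive, it can also help to rule out connectivity or token problems by calling Splunk's HEC endpoints directly from a machine that can reach Splunk. The paths below are the standard Splunk HEC health and event endpoints; the placeholders are the same ones used earlier in this section:
# Check that the HEC endpoint is reachable and healthy
curl -k <splunk_hec_protocol>://<splunk_hec_host>:<splunk_hec_port>/services/collector/health
# Send a single test event using the HEC token stored in the Kubernetes secret
curl -k <splunk_hec_protocol>://<splunk_hec_host>:<splunk_hec_port>/services/collector/event \
  -H "Authorization: Splunk <splunk_hec_token>" \
  -d '{"event": "test event from Automation Suite"}'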