Automation Suite on Linux Installation Guide
Last updated Dec 3, 2024

Forwarding application logs to Splunk

Note:
  • This section covers exporting pod logs. For exporting robot logs, see Orchestrator - About Logs.

  • Splunk is an external tool, and UiPath® does not provide guidance on how you should configure your Splunk settings. For more details about the HTTP Event Collector, see the official Splunk documentation.

The Splunk-Fluentd stack is a centralized logging solution that allows you to search, analyze, and visualize log data. Fluentd collects and sends the logs to Splunk. Splunk retrieves the logs and lets you visualize and analyze the data.

To configure Splunk, take the following steps:

  1. Click Settings in the top navigation bar, and then select Indexes.


  2. Click New Index and then Create an index.




  3. Click Settings in the top navigation bar, and then select Data inputs.


  4. Click HTTP Event Collector.


  5. To enable new token creation, click Global Settings.


  6. Enable and save the Global Settings.


  7. To create the token, click New Token.


  8. Enter a name for the HTTP Event Collector and click Next.


  9. Click New and enter Source Type details.


  10. Scroll down and select Index from the available list of indexes, and click Next in the top navigation bar.


  11. Verify the data and click Submit.


  12. Once created, fetch the details of the Token ID, Index, Source, and Source Type. You need these values to set up the ClusterOutput.
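
Optionally, before configuring Automation Suite, you can send a test event to the collector to confirm that the token works. This is a minimal sketch, assuming HEC listens on the conventional port 8088; replace the placeholders with your own values, and drop -k if Splunk presents a trusted certificate:

curl -k "https://<splunk_hec_host>:8088/services/collector/event" -H "Authorization: Splunk <splunk_hec_token>" -d '{"event": "HEC connectivity test", "index": "<splunk_hec_index>"}'

A successful call returns a JSON response containing "text":"Success" and "code":0.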


Creating a secret with a token

Create a Kubernetes secret with the HTTP Event Collector (HEC) token generated in the Splunk UI. This token is used for the authentication between Automation Suite and Splunk.

kubectl -n cattle-logging-system create secret generic splunk-hec-token --from-literal=splunk_hec_token=<splunk_hec_token>
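
To confirm that the secret exists and decodes to the token you expect, you can run the following optional check (not part of the official steps):

kubectl -n cattle-logging-system get secret splunk-hec-token -o jsonpath='{.data.splunk_hec_token}' | base64 -d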

ClusterOutput to Splunk

A ClusterOutput defines where your logs are sent and describes the configuration and authentication details.

To configure the ClusterOutput for Splunk, run the following command:

kubectl -n cattle-logging-system apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: splunk-output
spec:
  splunkHec:
    buffer:
      tags: '[]'
      timekey: <splunk_hec_timekey>
      timekey_use_utc: true
      timekey_wait: 10s
      type: file
    hec_host: <splunk_hec_host>
    hec_port: <splunk_hec_port>
    hec_token:
      valueFrom:
        secretKeyRef:
          key: splunk_hec_token
          name: splunk-hec-token
    index: <splunk_hec_index>
    insecure_ssl: true
    protocol: <splunk_hec_protocol>
    source: <splunk_hec_source>
    sourcetype: <splunk_hec_source_type>
EOF
Note: Replace the attributes between angle brackets < > with the corresponding values used in your Splunk configuration. For details, see the following list:

  • splunk_hec_host: The network host of your Splunk instance. This is usually the IP address or FQDN of Splunk.

  • splunk_hec_port: The Splunk port for client communication. This port usually differs from the port on which you launch the Splunk dashboard. The conventional HEC port for Splunk is 8088.

  • secret_key: The secret key of the Splunk token. This is the name of the key in the secret you created in the previous step, which holds the Splunk HEC token. The presented manifest already contains the key splunk_hec_token; if you have not altered the command that creates the secret, you do not need to change this value.

  • splunk_hec_timekey (value in splunkHec.buffer): The output frequency, or how often you want to push logs. We recommend using a 30-second (30s) interval.

  • protocol: The URL protocol. Valid values are http and https. You must use https if you have SSL communication enabled on Splunk.

  • splunk_hec_index: The identifier for the Splunk index. Used to index events.

  • splunk_hec_source: The source field for events.

  • splunk_hec_source_type: The source type field for events.

The following example is based on the configuration presented on this page.
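The host, index, source, and source type values below are illustrative placeholders (assumptions for this sketch, not required settings); the buffer uses the recommended 30s interval and the conventional HEC port 8088:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: splunk-output
spec:
  splunkHec:
    buffer:
      tags: '[]'
      timekey: 30s
      timekey_use_utc: true
      timekey_wait: 10s
      type: file
    hec_host: splunk.example.com
    hec_port: 8088
    hec_token:
      valueFrom:
        secretKeyRef:
          key: splunk_hec_token
          name: splunk-hec-token
    index: automation-suite
    insecure_ssl: true
    protocol: https
    source: automation-suite
    sourcetype: automation-suite-pods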



ClusterFlow in Fluentd

Use the ClusterFlow to define:
  • the logs you want to collect and filter;
  • the ClusterOutput to send the logs to.

To configure ClusterFlow in Fluentd, run the following command:

kubectl -n cattle-logging-system apply -f - <<"EOF"
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: splunk-flow
  namespace: cattle-logging-system
spec:
  filters:
  - tag_normaliser:
      format: ${namespace_name}/${pod_name}.${container_name}
  globalOutputRefs:
  - splunk-output
  match:
  - select:
      container_names:
      - istio-proxy
      namespaces:
      - istio-system
  - exclude:
      container_names:
      - istio-proxy
      - istio-init
      - aicenter-hit-count-update
      - istio-configure-executor
      - on-prem-tenant-license-update
      - curl
      - recovery
      - aicenter-oob-scheduler
      - cert-trustor
  - exclude:
      namespaces:
      - fleet-system
      - cattle-gatekeeper-system
      - default
  - exclude:
      labels:
        app: csi-snapshotter
  - exclude:
      labels:
        app: csi-resizer
  - select: {}
EOF
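
After applying both resources, you can quickly confirm that the logging operator has picked them up and that the Fluentd StatefulSet is running. This is a suggested sanity check, not part of the official procedure:

kubectl -n cattle-logging-system get clusteroutputs.logging.banzaicloud.io,clusterflows.logging.banzaicloud.io
kubectl -n cattle-logging-system get sts rancher-logging-root-fluentd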

Searching in Splunk

  1. Click Search & Reporting.



  2. Search based on Source, Index, and SourceType.
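    For example, a search scoped to the values configured in the ClusterOutput might look like the following, with the placeholders standing for your own values:
    index="<splunk_hec_index>" source="<splunk_hec_source>" sourcetype="<splunk_hec_source_type>"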





Troubleshooting

If the application logs are not pushed to Splunk, take the following steps:

  1. Change the Fluentd log level to debug:
    kubectl patch loggings.logging.banzaicloud.io rancher-logging-root --type=json -p '[{"op":"add","path":"/spec/fluentd/logLevel","value":"debug"}]'
  2. Query the Fluentd pod logs:
    kubectl -n cattle-logging-system exec -it sts/rancher-logging-root-fluentd cat /fluentd/log/out
    Note: The Fluentd logs should indicate the cause of data not being pushed to Splunk.
  3. After fixing the issue, restore the Fluentd log level:
    kubectl patch loggings.logging.banzaicloud.io rancher-logging-root --type=json -p '[{"op":"remove","path":"/spec/fluentd/logLevel"}]'
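
If the Fluentd logs point to connection errors, you can also verify that the HEC endpoint is reachable from inside the cluster. The following is a minimal sketch using a temporary pod; the curlimages/curl image is an illustrative choice, and the placeholders stand for your own Splunk values:

kubectl -n cattle-logging-system run hec-test --rm -it --restart=Never --image=curlimages/curl --command -- curl -k "https://<splunk_hec_host>:<splunk_hec_port>/services/collector/event" -H "Authorization: Splunk <splunk_hec_token>" -d '{"event": "connectivity test"}'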
