Sending logs to Splunk using fluent-plugin-splunk-hec Fluentd output plugin

Anuja Arosha
3 min readMar 20, 2021

In my last post, I explained how to send logs from Fluent Bit to Splunk. In today’s post I will explain how to send logs to Splunk using Fluentd. Both Fluentd and Fluent Bit were started at Treasure Data, and Fluentd was the first product to launch; Fluent Bit is a lightweight version of Fluentd. Today, though, we are going to use Fluentd, which is the more popular of the two in the community.

As usual, I’ll start by describing the environment in which I tested this setup.

  • Splunk Enterprise : Version 8.0.1
  • Fluentd : Version 1.11.5
  • Fluentd running OS version : Ubuntu 20.04.1 LTS

In addition to those, I used the fluent-plugin-splunk-hec Fluentd output plugin to accomplish the task. As mentioned in the plugin’s GitHub README file, I updated my Fluentd Gemfile, which is located in the root of the Fluentd source directory.
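If you are not building Fluentd from source, the plugin can also be installed directly with the fluent-gem command instead of editing the Gemfile; which binary you have (fluent-gem or td-agent-gem) depends on how Fluentd was installed:

```shell
# Install the Splunk HEC output plugin into an existing Fluentd installation
fluent-gem install fluent-plugin-splunk-hec

# Or, when managing plugins through the Gemfile as in this post,
# add this line to the Gemfile and run `bundle install`:
#   gem "fluent-plugin-splunk-hec"
```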

Once the plugin is added to your environment, you can modify your fluent.conf file to match the logs that your source generates. Below is the relevant match clause of the configuration file.

<match debug.access>
  @type splunk_hec
  @log_level trace
  hec_host <your splunk server IP goes here>
  hec_port 8088
  hec_token <your splunk HEC token goes here>
  index aap_index
  insecure_ssl true
  <format>
    @type json
  </format>
  <buffer>
    @type file
    path /media/psf/Home/Fluentd/FluentConf/5_fluent_tail_splunk
    flush_interval 30s
  </buffer>
</match>
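Note that the match clause above only handles events tagged debug.access, so it assumes a source section elsewhere in fluent.conf that emits events with that tag. As a minimal sketch (the log file path here is a hypothetical placeholder, not from my setup), a tail source could look like this:

```
<source>
  @type tail
  path /var/log/myapp/access.log            # hypothetical path to your log file
  pos_file /var/log/fluentd/access.log.pos  # tracks how far the file has been read
  tag debug.access                          # must match the <match> clause above
  <parse>
    @type none                              # forward each line as-is
  </parse>
</source>
```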

Most of the parameters I have declared are self-explanatory, and some are explained in the plugin README as well. To briefly go over the optional parameters I used: I enabled the trace log level to get a clear picture of what is happening once the Fluentd process starts. I also used a file buffer, which writes the events generated by the source to a file on disk and flushes each chunk after a period of 30 seconds. As in my previous post, I am sending to a particular Splunk index (aap_index) that I created on the Splunk end.

In the happy path, once you run Fluentd with the configuration file as below,

fluentd  -c fluent.conf

you will be able to see the logs in Splunk with the following query.

index="aap_index"
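Before involving Fluentd at all, it can help to verify the HEC endpoint itself with curl using Splunk’s standard event collector endpoint; replace the host and token placeholders with your own values:

```shell
# Send a test event straight to the HEC endpoint (-k skips certificate
# verification, matching the insecure_ssl true setting above)
curl -k "https://<your splunk server IP>:8088/services/collector/event" \
  -H "Authorization: Splunk <your splunk HEC token>" \
  -d '{"event": "hello from curl", "index": "aap_index"}'

# A healthy endpoint replies with:
#   {"text":"Success","code":0}
```

If this curl succeeds but Fluentd does not, the problem is on the Fluentd side; if it fails, start by fixing the Splunk/HEC configuration.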

Like I said, that is all for the happy path scenario. Now I’ll walk through two error scenarios that you may encounter while doing this for the first time. This is where your trace logging comes in handy.

certificate verify failed (self signed certificate in certificate chain)

Your full error will look like this:

...error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"

This is a scenario where your Splunk server is serving a secure connection with a certificate that your Fluentd host does not trust (for example, a self-signed certificate). What you have to do is add the necessary certificates to your host’s certificate chain. For an Ubuntu environment, you can follow this post.
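On Ubuntu the usual fix is to add the Splunk server’s CA certificate to the system trust store; the certificate filename below is just a placeholder for your own file:

```shell
# Copy the CA certificate (it must have a .crt extension) into the
# system-wide certificate directory, then rebuild the trust store
sudo cp splunk-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```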

read server hello A: wrong version number

This is another error you may encounter. The full error line will look like this:

...error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=SSLv3 read server hello A: wrong version number"

If you get this error, one of the main configuration items to check is the Enable SSL check box in the Splunk Global Settings. Once you log in to the Splunk web interface, navigate to Settings -> Data Inputs -> HTTP Event Collector -> Global Settings. You will see a pop-up like the one below; make sure the Enable SSL check box is ticked.
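This error means the plugin is speaking TLS to an endpoint that is answering in plaintext. If enabling SSL on the Splunk side is not an option, the plugin’s README also documents a protocol parameter (https by default) that can be switched to plain HTTP so both ends agree; the relevant fragment of the match clause would then look like:

```
<match debug.access>
  @type splunk_hec
  protocol http        # talk plain HTTP to HEC instead of HTTPS
  hec_host <your splunk server IP goes here>
  hec_port 8088
  hec_token <your splunk HEC token goes here>
</match>
```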

That’s all I have to say in this post, and we’ll meet again soon :)
