I have been playing around with Splunk recently, so I can understand what it is and why my customers may choose to use it. For those that don’t know, Splunk (the product) captures, indexes and correlates real-time data in a searchable repository, from which it can generate graphs, reports, alerts, dashboards and visualizations. In essence Splunk is a really cool and smart way to look at and analyse your data.
Because Splunk can ingest data from almost any source, we can quite easily start pulling data out of an IBM Storwize or SVC product and then investigate it with Splunk. I couldn’t find anything on Google on this subject, so here is a post that will help you along.
A common way to get data into Splunk is to use syslog. Since Storwize can send events to syslog, all we need to do on the Storwize side is configure where the Splunk server is.
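On the Storwize CLI this amounts to a single command. Here is a sketch, assuming a Splunk server at 172.24.1.120 (a hypothetical address; substitute your own, and check the `mksyslogserver` parameters on your firmware level — the GUI under Settings can do the same thing):

```
mksyslogserver -ip 172.24.1.120 -error on -warning on -info on
```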
In this example I have chosen syslog level 7 (which is detailed output) and to send all events.
Then on the Splunk side, ensure Splunk is listening for syslog events. Storwize always sends to UDP port 514:
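In inputs.conf terms (or via Settings > Data inputs > UDP in the GUI), the listener looks something like this — a sketch assuming the default syslog sourcetype:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[udp://514]
sourcetype = syslog
connection_host = ip
```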
However, this really only captures events. There are lots of other pieces of information we may want to pull out of our Storwize products and graph in Splunk, so let's teach Splunk how to fetch them using the CLI over SSH.
First we need to supply Splunk with a user ID so it can log in to our Storwize and grab data. I created a new user on my Storwize V3700 called Splunk and placed it in the Monitor group (so anyone with the Splunk user ID and password can look but not touch). I then supplied a public SSH key, since I don’t want to store a password in any text file and SSH keys make things nice and easy. In this case I am using the id_rsa.pub file for the root user of my Splunk server, since in my setup Splunk runs all scripts as root.
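The key setup can be sketched like this (the user name and group are from this example; the key path is an assumption, and you should check the `mkuser` parameters on your firmware level — the GUI can do the same thing when creating the user):

```
# On the Splunk server, as root: generate a key pair if one does not exist yet
ssh-keygen -t rsa

# On the Storwize CLI: create the restricted user with the public key attached
# (copy /root/.ssh/id_rsa.pub onto the Storwize first, or paste it in the GUI)
mkuser -name Splunk -usergrp Monitor -keyfile /tmp/id_rsa.pub
```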
Now from my root command prompt on the Splunk server (called av-linux) I test that access works to my V3700 (on IP address 172.24.1.121) using the lsmdiskgrp command. It’s all looking good.
[root@av-linux ~]# ssh email@example.com "lsmdiskgrp -delim ,"
So I am now set up to write scripts that Splunk can fire on a regular basis to pull data from my Storwize device using SSH CLI commands.
Now here are two important things to realize about using SSH commands to pull data from Storwize and ingest them into Splunk:
- For historical data like logs, it is very easy to pull the same data twice. For instance, if I grab the contents of the lseventlog command using an SSH script, I will get every event in the log, which is fine. But if I grab it again the next day, most of the same events will be ingested again, so if I am trying to see how often a particular event occurs, I will count the same event many times. Ideally the Storwize CLI would let me filter the event log by date, but that functionality is not available.
- Real-time display commands don’t insert a date into the output, but Splunk logs the date and time at which each piece of data was collected.
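One way to work around the duplicate-ingestion problem is to remember the last event we saw between polls. Here is a minimal sketch, assuming the first CSV column of `lseventlog -delim ,` is a monotonically increasing sequence_number (check the header on your firmware level); the state file path is arbitrary:

```shell
#!/bin/sh
# filter_new_events STATEFILE < lseventlog_csv
# Prints only CSV rows whose first field (assumed to be sequence_number) is
# greater than the number stored in STATEFILE, then updates STATEFILE so the
# next poll skips everything we have already ingested.
filter_new_events() {
  state=$1
  last=0
  [ -f "$state" ] && last=$(cat "$state")
  awk -F, -v last="$last" -v state="$state" '
    NR == 1 { next }                              # skip the CSV header row
    $1 + 0 > last + 0 { print; if ($1 + 0 > max + 0) max = $1 + 0 }
    END { if (max + 0 > last + 0) print max > state }'
}

# In the real script the input would come from SSH, e.g.:
#   ssh email@example.com "lseventlog -delim ," | \
#     filter_new_events /opt/splunk/bin/scripts/.v3700_last_seq
```

Each poll then only emits events newer than the last one already sent to Splunk, so event counts stay honest.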
Let's take the output of lsmdiskgrp as shown above. If we run this once per day we can track the space consumption of each pool over time. Sounds good, right? So on my Splunk server I create a script like this. Notice I get the output in bytes; this is important because otherwise the capacity fields could come back in MB, GB or TB depending on their size, which makes them hard to compare.
#!/bin/sh
ssh firstname.lastname@example.org "lsmdiskgrp -delim , -bytes"
I put the script into the /opt/splunk/bin/scripts folder and call it v37001pools.
I make it executable and give it a test run:
[root@av-linux scripts]# pwd
[root@av-linux scripts]# chmod 755 v37001pools
[root@av-linux scripts]# ./v37001pools
So now I tell Splunk I have a new input using a script:
Input the location of the script, the interval, and the fact that this is CSV (because we are using -delim with a comma). Note that my interval is crazy: every 60 seconds is way too often, and even every 3600 seconds is probably too often. I used it to get lots of samples quickly.
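If you prefer configuration files to the GUI, the equivalent scripted input can be declared in inputs.conf. Here is a sketch, assuming a 3600-second interval and Splunk's built-in csv sourcetype (the index is a placeholder to adjust):

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[script:///opt/splunk/bin/scripts/v37001pools]
interval = 3600
sourcetype = csv
index = main
disabled = 0
```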
I now confirm I have new data I can search:
And the data itself is time stamped with all fields identified and has all the data like pool names.
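Once the fields are extracted, a search along these lines can chart pool usage over time (the field names used_capacity and name come from the lsmdiskgrp CSV header; the conversion to GiB and the span are illustrative assumptions):

```
source="/opt/splunk/bin/scripts/v37001pools"
| eval used_GiB = used_capacity / 1073741824
| timechart span=1h max(used_GiB) by name
```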
Now I can start graphing this data. What I find with Splunk is that life is much easier if someone publishes the dashboard XML. So I created an empty dashboard called Storwize Pools and then immediately selected Edit Source.
Now replace the default source (delete any text already there), changing the heading and script name to your own, and the pool name to one of your pools. If you have more than one pool, add an additional chart for every pool (copy the whole chart section and just make a new chart).
In the attached word document you will find the required XML. For some reason WordPress kept fighting me and changing my quotes so I have attached the XML as a doc.
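If the attachment isn't handy, a minimal Simple XML dashboard source looks something like this sketch — the pool name Pool0 and the search are illustrative assumptions, and the attached XML remains the reference:

```xml
<dashboard>
  <label>Storwize Pools</label>
  <row>
    <panel>
      <chart>
        <title>Pool0 capacity (bytes)</title>
        <search>
          <query>source="/opt/splunk/bin/scripts/v37001pools" name="Pool0" | timechart max(free_capacity), max(capacity)</query>
          <earliest>-7d</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</dashboard>
```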
And we get a lovely Dashboard that looks like this. Because the script runs every 60 seconds, I am getting 60 second stats.
We could run it once per day, or better, use a cron-style interval so it runs at the same time every day, which makes more sense. For example, to run it daily at 1am, set the interval to a cron value like this: 0 01 * * *
So hopefully that will help you get started with monitoring your SVC or Storwize product with Splunk.
If you would like some more examples, just leave a comment!