February 16, 2014: 9:41 pm: Programming, Splunk

A big thanks to the members of the @SplunkDev team who were helpful and patient with my questions while I pulled this together. Thanks, guys: @gblock, @damiendallimore, and David Noble.

In Splunk circles, you often hear about the holy grail of using Splunk to actively control other systems. It can be hard to find details or good examples on HOW to do it. I am always working on something new that deepens my technical skills; I had not previously dealt with REST APIs or Splunk alert scripts, and this post is the result. Used well, this approach can replace manual daily operations tasks, changing Splunk from a tool into a team member.

We will cover a working example of using Splunk alert results to update a Google Spreadsheet via the Drive Python SDK. Once you understand how it works, you can build your own controls for any system that supports REST API calls, such as telling an Intrusion Prevention System to block a list of IP addresses from a scheduled Splunk alert.
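Before digging in, here is a minimal sketch of the general pattern such an alert script follows: read the gzipped CSV of results Splunk hands the script, then push each row to a REST endpoint. The endpoint URL and token below are hypothetical placeholders, not a real IPS API; the Google Spreadsheet version is what the rest of this post builds.

import csv
import gzip
import json
import os

import requests

# Gzipped CSV of alert results, passed by Splunk as argument 8
results_file = os.environ['SPLUNK_ARG_8']

reader = csv.reader(gzip.open(results_file, 'rb'))
next(reader)  # skip the CSV header row

for row in reader:
    # POST each offending source IP to a hypothetical block-list endpoint
    requests.post(
        'https://ips.example.com/api/blocklist',
        data=json.dumps({'ip': row[0]}),
        headers={'Authorization': 'Bearer EXAMPLE-TOKEN',
                 'Content-Type': 'application/json'},
    )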

We will leverage a Splunk blog post on saving credentials in a Splunk app to avoid leaving our Google credentials hard coded and exposed in the alert script. It turns out alert scripts can use the same mechanism, but it is not well documented. I built a Python class for retrieving those credentials from Splunk so you can reuse the code across many alert scripts. The scripts can all be found in the supporting GitHub repo. You will be able to use these as a framework for your own alert scripts to drive actions in other systems. I will not be stepping through the code itself as it is fairly well commented. There are plenty of moving parts to this, so you need to be an experienced Splunk administrator to get it working. The benefit is that once you get one working you can make new variants with little effort.
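To give a feel for what that credential lookup involves, here is a rough sketch of the idea using Splunk's storage/passwords REST endpoint. This is not the actual class from the repo; the app name, realm, and username are placeholders.

import requests

SPLUNKD = 'https://localhost:8089'

def get_session_key(user, password):
    """Log in to splunkd and return a session key."""
    resp = requests.post(
        SPLUNKD + '/services/auth/login',
        data={'username': user, 'password': password, 'output_mode': 'json'},
        verify=False,  # lab instance with a self-signed certificate
    )
    resp.raise_for_status()
    return resp.json()['sessionKey']

def get_clear_password(session_key, app, realm, username):
    """Return the stored clear-text credential matching realm and username."""
    resp = requests.get(
        '%s/servicesNS/nobody/%s/storage/passwords' % (SPLUNKD, app),
        headers={'Authorization': 'Splunk ' + session_key},
        params={'output_mode': 'json'},
        verify=False,
    )
    resp.raise_for_status()
    for entry in resp.json()['entry']:
        content = entry['content']
        if content.get('realm') == realm and content.get('username') == username:
            return content['clear_password']
    return None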

[Image: Google Spreadsheet]


January 19, 2014: 11:30 am: Programming, Splunk

Splunk is a great tool for digging into data and presenting the results. Sometimes you just want a status board of results that comes to you, without having to log into a web application. A wonderful app for this is Status Board, an iPad app by Panic.

You could always create a panel on your status board that links to the URL of a file for presentation. However, this means your data is not protected by authentication. Panic added Dropbox support, so you can now make a panel that pulls from a CSV or JSON file. You can also AirPlay to an Apple TV, or connect the iPad directly to a TV, to present the dashboard on a large display.

In this post I will cover how I combined a Splunk alert script in Python, Dropbox, and Status Board to get the result below. I am displaying the number of failed login attempts against my WordPress blog by country code for the previous 7 days. Keep in mind this is a Splunk instance running on my laptop with minimally sensitive information; I would never run Dropbox directly on a work-related production Splunk server. An alternative method would be a scheduled script that pulls the results out of Splunk via the REST API and writes them to a CSV in the Dropbox folder. I will cover that version in a future post.
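As a preview of how small the moving part really is: Splunk hands the alert script a gzipped CSV of the search results, and the script only has to decompress it into the Dropbox folder the Status Board panel points at. A minimal sketch (the folder and file names are placeholders of my own):

import gzip
import os

# Gzipped CSV of alert results, passed by Splunk as argument 8
results_file = os.environ['SPLUNK_ARG_8']

# Hypothetical Dropbox path the Status Board panel is configured to read
out_path = os.path.expanduser('~/Dropbox/statusboard/wp_failed_logins.csv')

with open(out_path, 'wb') as dst:
    dst.write(gzip.open(results_file, 'rb').read())  # contents are already CSV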

\"WordPress


January 18, 2014: 6:30 pm: Programming, Splunk

I want to start making some custom alert scripts. As usual, I like to practice on a live example. I have SSH remote access and Apache enabled on my laptop. When at work, I keep a map up in Splunk on my laptop showing the source IP location of any attempts to connect to it. If you start beating on my laptop, it results in an instant ban hammer in the network IPS.

I sometimes miss seeing the map updates when busy. If I had a quickly accessible alert history, it would be easier to handle the scanning systems. I decided on an alert, run every 15 minutes, that checks the hits on Apache. These logs just happen to go into an index called os_osx, and I tagged the access_combined source type as "web".

index=os_osx tag=web | stats count by clientip
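For reference, that tag takes only a small tags.conf stanza; a sketch, assuming the standard Apache access_combined sourcetype:

[sourcetype=access_combined]
web = enabled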

Now the fun part. I am working on my Python skills, so I wrote the alert script in Python. This required calling the OS X shell command osascript in order to execute the AppleScript that generates the actual Notification Center message. It took a minute of experimentation to get the right combination of escaped quotes to build the AppleScript command.

We get a result like this:

\"AlertSample\"

And here is the alert script, saved as osx-alert.py in the /Applications/splunk/bin/scripts folder on my laptop. That is the script I call from the search above when it is saved as an alert.

import os
import csv
import gzip
from subprocess import call

if __name__ == "__main__":

    # Obtain the path to the compressed alert events file (Splunk arg 8)
    alertEventsFile = os.environ['SPLUNK_ARG_8']

    # Handle to the csv contents of the compressed alert events file
    eventContents = csv.reader(gzip.open(alertEventsFile, 'rb'))

    # Assign the contents to a list iterator and skip the header line of the table.
    alert_iterator = iter(eventContents)
    next(alert_iterator)

    # Send a notification for each source ip in the alert results table. We grab
    # the IP and count from the columns in each row of the stats count csv
    # output from Splunk.
    for line in alert_iterator:
        message = "ALERT: " + line[1] + " connections from ip: " + line[0] + " in past 15 minutes."
        call(["osascript", "-e",
              'display notification "' + message + '" with title "Splunk"'])
January 2, 2014: 10:45 pm: Splunk

I like to take more than traditional IT and security logs into Splunk. You can enhance your production data in creative ways, and I am a firm believer the best way to learn is to practice on something out of the norm. The game Minecraft is a fun source of log data once you figure out how to extract the information. I am a bit of a closet Minecraft Let's Play video fan. At the last Splunk user conference the gaming room was set up with a local Minecraft server logging to Splunk. That was the public debut of the Splunk Minecraft App, and it was fun to see live information about what types of resources had been collected, and so on.

The Splunk Minecraft App relies on a plugin for a variant build of Minecraft called Bukkit, which makes it easy to run Minecraft with modifications. The problem is that the Log To Splunk plugin has not been updated to keep up with Java versions. Yeah, Minecraft is written in Java. Over the holiday I wanted to play with some Minecraft logs in Splunk v6, so I had to find another solution. After all, it is a good way to practice parsing logs, event typing, and tagging. There is an old blog post, predating the Splunk Minecraft App, that explains how to do this with a Minecraft plugin called PlayerLogger. You can find the original post over on Robert Jordan's blog.


January 1, 2014: 8:01 am: Review

I have been using a pretty well made shoulder laptop bag. It has lots of good pockets and the stitching is not flimsy. However, it is still a shoulder bag, and that gets uncomfortable during a 15-minute walk to and from work.

So I asked for the Cocoon slim backpack for Christmas. It is, for now, an exclusive through the Apple Store. This is the same company that brings us the Grid-It products.

I asked for the backpack for two reasons. First, it has a built-in Grid-It section. Second, the slim profile places the weight down the length of my spine when carrying the backpack, way better than a traditional shoulder bag.

The bag has a very solid feel. The zippers are well made and do not feel like they will separate, as a lot of bags do. The sections unzip to the point that you can lay the bag open fully flat, which completely exposes the built-in Grid-It section. I was able to organize the items I want to carry, but do not always use, onto the Grid-It platform. I found my small Grid-It board still fits flat lying on top of the built-in platform. This makes it easy to routinely pull out the charging cables I need often without opening the bag all the way.

The laptop section has a soft sleeve area for an iPad or other tablet, with an adjoining pocket for up to a 15" laptop.

There is an external zippered pocket on the front of the backpack for slim materials, and one in the front cover within the laptop compartment. That's it. No other pockets, as you would expect, to keep it slim. Perfect for a back-and-forth commuting backpack.

I\’m very satisfied with the backpack. An excellent well made bargain for the price.

December 30, 2013: 7:00 am: Splunk

I see a lot of folks new to Splunk have to work hard to mature their deployments because they did not tackle indexes early on. Indexes are how you control access to data and its retention period.

Consider a \”traditional\” starting splunk deployment by a security group. You get the IT group to install the universal forwarder sending you logs. Up front they aren\’t interested in more than making you go away so they can work the next support ticket. Later, they find out how much access to their own logs in splunk can help operations succeed. Everything is all mixed together; your IDS, mail logs and web logs. Maybe a lot they don\’t need to see.

Splunk will put data into the index named "main" by default. Everyone with a login to Splunk can see this index, and once data is in an index there is no simple move command to shift it into a new one.

It gets to be a bigger mess when you start installing apps. Some, like the *nix app, put everything into an index called "os".

Naming Convention

You should set up different indexes as early as possible in a new deployment. Above all, use a naming convention. Sticking with the default retention period is OK; it's six years, so you have time to shrink it later.

I follow this naming convention.
* os_windows_groupname
* os_linux_groupname
* os_windows_groupname_secondgroupname

  1. I use underscores in index names.
  2. This type of index is for OS-related logs, so it starts with os_.
  3. The first, and often only, groupname is the IT or organizational group that owns the systems and provides the logs.
  4. Optionally, if developers and IT admins need to share log access to a system, I add _secondgroupname and send events for just those systems to that index.
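Putting the convention into practice, here is a minimal indexes.conf sketch for one such index. The paths and group name are placeholders, and the retention value shown is Splunk's default of roughly six years:

[os_linux_unixteam]
homePath = $SPLUNK_DB/os_linux_unixteam/db
coldPath = $SPLUNK_DB/os_linux_unixteam/colddb
thawedPath = $SPLUNK_DB/os_linux_unixteam/thaweddb
frozenTimePeriodInSecs = 188697600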

Why do I follow this convention?

As mentioned, indexes in Splunk are the control mechanism for both access control and data retention. Access is granted per index for each user role, and retention periods are likewise set per index.

Searching with wildcards: using this scheme you can set up a dashboard that leverages searches like

index=os_linux_* sudo

If you save that search or build it into a dashboard, then one group with access to the dashboard sees only their own matching logs. The next group sees only theirs with the same dashboard. As the security staff, you get to see ALL events if you have permissions to all the indexes. This also works well for eventtyping: since eventtypes are defined by searches, you can ensure an eventtype for certain Windows events runs only across those indexes, but across ALL of them via the wildcard.
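The access-control half of this lives in authorize.conf, where each role is limited to its own indexes; a sketch with hypothetical role and index names:

[role_unixteam]
srchIndexesAllowed = os_linux_unixteam
srchIndexesDefault = os_linux_unixteam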

The downside shows up when you are not using the default index and you are new to Splunk. There is a tendency to install some given Splunk app and expect it to just show data. Often these apps are coded to search only the default indexes or their own. You will have to dig into their code and replace the app searches with your wildcard naming scheme to get them wired up. It is still worth the effort and saves you a lot of pain as your deployment matures.

For more about indexing, be sure to read through the Splunk manual on Managing Indexes and Clusters.

December 28, 2013: 8:30 am: Splunk

This will only help you if you are using Deployment Server. This is an Enterprise server role so it won’t work if you are on the free license.

Sure, you can install the Deployment Monitor application. In fact, I recommend that app if you use Deployment Server (DS). But we want to be able to quickly see if any new Splunk forwarders have been set up by our IT admins without them telling us. So we will add some panels to our personal admin app and dashboard.

I like to assign log collection configs by whitelisting apps manually to the forwarders. However, I make all systems that phone home to my deployment server pick up the output app for my organization. This app just tells the forwarders how to talk to the indexers. It has nothing to do with what logs are picked up or what indexes they are sent to.

Keep naming schemes in mind. You should start your application names with your org name to make them easy to spot.

In my serverclass.conf I have a stanza to assign an application called "org_all_forwarder" to all forwarders (excluding my indexers) that talk to the DS. This app tells the Splunk Universal Forwarders how to send to the indexers. Nothing else is in this app.

We also assign a second app, "org_all_deploymentclient", which contains the configuration for reporting to the DS. We won't get into what is in these apps; this post is about a dashboard of which forwarders are pulling down applications.

[serverClass:org_all_forwarder]
whitelist.0=*
blacklist.1=indexer1
blacklist.2=indexer2
blacklist.3=indexer3

[serverClass:org_all_forwarder:app:org_all_forwarder]
[serverClass:org_all_forwarder:app:org_all_deploymentclient]

So to detect new forwarders, I just need to see systems that pick up the all-forwarder app and nothing else. That means I have not yet assigned any other applications to them.

We made a personal admin dashboard in the previous blog post on license summarization. Let's add two panels to it that we will review daily. The data lives in the _internal index for 28 days, the default retention period for that index; that doesn't matter since we are only watching a week back.

Go into the application, MY-ADMIN, and the dashboard, My-Daily-Admin.

Create the Unassigned Forwarders Panel

  1. Click Edit->Edit Panels
  2. Click Add Panel
  3. Choose a title of "Unassigned Forwarders (past 7 days)"
  4. Paste the following into the search field

    index=_internal sourcetype=splunkd DeployedApplication Downloaded | rex "deployment\?name=(.+?):(?<ds_class>.+?):(?<ds_app>.+?)\s" | table _time, host, ds_class, ds_app | lookup dnsLookup hostname AS host | transaction host | search ds_class=org_all_forwarder | eval classCount=mvcount(ds_class) | where classCount=1 | table _time, host, ds_class, ds_app

  5. Change the time range to last 7 days and click Add Panel to save it.
  6. I like to leave this one as a statistics table visualization.

Create the Recent Forwarders Panel

  1. Click Edit->Edit Panels
  2. Click Add Panel
  3. Choose a title of "Recent Forwarders (past 7 days)"
  4. Paste the following into the search field

    index=_internal sourcetype=splunkd DeployedApplication Downloaded | rex "deployment\?name=(.+?):(?<ds_class>.+?):(?<ds_app>.+?)\s" | table _time, host, ds_class, ds_app | lookup dnsLookup hostname AS host | transaction host | search ds_class=org_all_forwarder | eval classCount=mvcount(ds_class) | where classCount>1 | table _time, host, ds_class, ds_app

  5. Change the time range to last 7 days and click Add Panel to save it.
  6. I like to leave this one as a statistics table visualization.

There you go. Two more panels for your daily admin review.

December 27, 2013: 7:45 pm: Splunk, Training

Splunk updated their entire product certification process for those who need to manage and administer Splunk. Previously, getting certified in Splunk was a game of collecting the Pokemon cards of each training course's certificate of completion. That had a major downside for those of us experienced in Splunk: we could never get our employers to fund classes covering material we already knew well.

The process now involves an actual online exam. It is FREE. The courses can give you a very good foundation in the topics and prepare you for the exam. As with most certification exams, the training and self-study will cover the skill sets much deeper than the exam material alone. I always recommend training when you can swing it, as you never know what you do not know about a topic.

Splunk Certified Knowledge Manager

This certification covers operating and managing the various knowledge objects within the Splunk application. It is about helping the users have a solid, consistent experience in using Splunk, not about the back end administration of the servers themselves.

The courses behind this certification are:
* Using Splunk
* Searching and Reporting
* Creating Splunk Knowledge Objects

Splunk Certified Admin

This certification is all about the technical administration of all aspects of Splunk: everything from licensing to deployment management to indexing. This is for you if you want to be the wizard behind the curtain.

There is just one course behind this certification. It is the combination of the old Admin and Advanced Admin courses. It does require that you have passed the Certified Knowledge Manager exam as a prerequisite.
* Splunk Administration

Taking the Exams

Most of the folks I know have some experience with Splunk. For those people, I recommend you take the outlines for the courses behind each track and highlight the agenda areas you know you are weak in. Set up a v6 Splunk instance to practice those areas. Watch the tutorial videos from the Splunk intro page when you log into Splunk. Last, be sure to read ALL the documentation related to the course material at least once.

Then you just email certification@splunk.com to request registration for the exam. They will send you a personalized exam link with details on the number of questions you have to pass for the particular exam, and how long you have to take it once you start. You can take the exam as many times as you need to pass, but you have to wait two hours between attempts.

Good luck!

