Duane Waddle gave a great updated version of the SSL configuration talk from .conf 2014, now with Splunk 6.2 content.
**Update Oct 15, 2014 – POODLE SSLv3 Issue**
The talk was given the week before the SSLv3 issue was disclosed. Please remove all references to supportSSLV3Only = true from the configs when you use them. You can also find more from Splunk on the SSLv3 issue and how to mitigate it at http://www.splunk.com/view/SP-CAAANKE
.conf 2014 was a great time this year. Duane and I enjoyed giving the talk “Avoid the SSLippery Slope of Default SSL” with great questions from the audience. I was surprised at the solid turnout for a Thursday 9am talk. My talk was “From Tool to Team Member: Controlling Systems with Splunk Alert Scripts”.
Here are the PDF copies of the slides for both talks:
- Avoid the SSLippery Slope of Default SSL (PDF)
- Duane Waddle and George Starcher

Increasingly, production security requires more than using default SSL certificates. This session will cover best practices for implementing your own SSL certificates on all Splunk channels. The right configuration and steps can provide both the encryption and authentication needed for today’s due diligence requirements.
- From Tool to Team Member: Controlling Systems with Splunk Alert Scripts (PDF)
- George Starcher
- Code: George’s git repo
We will go in depth into setting up alert scripts that can make web services calls to other devices such as intrusion prevention systems. This gives Splunk the ability to actively control such systems. Code samples will be provided, including how to save login credentials encrypted within Splunk. Using alert scripts we can change Splunk from just a tool into an IT team member taking actions on your behalf!
Damien Dallimore of Splunk wrote a great Modular Input for SNMP on Splunkbase. It is written in such a way that you install it on your Splunk server (hopefully that is unix based). Then you set up an inputs.conf in the app like this (the name after snmp:// in the stanza is whatever you want to call the input):
[snmp://traps]
communitystring = mysecretstring
do_bulk_get = 0
index = network_snmp_traps
ipv6 = 0
snmp_mode = traps
snmp_version = 2C
sourcetype = snmp_ta
split_bulk_output = 0
trap_host = myserverip
trap_port = 162
What if you don’t want traps going directly to your Splunk server?
Why, yes you can indeed use the snmp_ta on a Universal Forwarder. It needs to have pysnmp installed for the system Python, so usually you are going to be OK on most Linux systems.
You just have to make a couple of changes to snmp_ta/bin/snmp.py:
1. You absolutely must change the hashbang at the top of the file. The existing path points to the Splunk Python instance, which the Universal Forwarder does not ship. You might need to change it to something like the following, depending on your system.
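For example, pointing it at the system Python (the exact path varies by distribution; running which python will tell you where yours lives):

#!/usr/bin/python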
2. If you do as I do and make copies of TAs using a naming convention such as TA_snmp_cal01, then you have to edit two other lines in the snmp.py file. Change the paths assigned to egg_dir and mib_egg_dir to something like:
egg_dir = SPLUNK_HOME + "/etc/apps/TA_snmp_cal01/bin/"
mib_egg_dir = SPLUNK_HOME + "/etc/apps/TA_snmp_cal01/bin/mibs"
That should do the trick. Now the Universal Forwarder you put the app onto should start listening on UDP 162 for SNMP traps. Just be sure to change the community string and the trap_host to your settings. The trap_host should be the IP of the forwarder you are putting this onto.
Do keep in mind that the parsing of the traps happens at the time they are received and indexed. So you need to install the right MIBs into the app’s bin/mibs folder. It is a painful process that will honestly drive you to drink. You can read more on that process in a two-part series on SNMP polling using the Modular Input.
Host Field and SNMP Traps:
The way the snmp_ta works, the host field ends up being the IP address of the system that sent the trap. I prefer my host field to be FQDNs that complement my earlier post on auto lookup of location by host. I modified the TA’s code to allow a new inputs.conf option in the stanza, called trap_rdns. I should be submitting a pull request to Damien soon to contribute the feature back to him. Be watching for the updated app. Keep in mind that if you use this feature you will generate a reverse DNS lookup against your infrastructure for each trap event that comes in, so you may need to consider whether that will impact the DNS servers that system uses.
A fun crazy experiment:
Some weekends I just pick a couple of lego blocks of technology and click them together to see what happens. I was thinking over the concept of TOR hidden services. It turns out you can run a Splunk Universal Forwarder (UF) with an outputs.conf pointing to your indexer while it listens for inputs from other UFs as a TOR hidden service. You can then make a UF running on something like a Raspberry Pi send its logs back over TOR like a dynamic VPN.
Why would you want to? Because it was neat to do. Here is how to repeat the proof of concept.
How do we make it work?
The Universal Forwarder TOR to Indexer Relay:
- Install the Splunk Universal Forwarder
- Install TOR: sudo apt-get install tor
- Setup TOR to listen on 9997 as a hidden service by editing the /etc/tor/torrc file
torrc hidden service config:

HiddenServiceDir /var/lib/tor/other_hidden_service/
HiddenServicePort 9997 127.0.0.1:9997
- Restart TOR: sudo service tor restart
- Get the server’s .onion address: sudo vi /var/lib/tor/other_hidden_service/hostname
- Setup $SPLUNK_HOME/etc/system/local/inputs.conf to listen on 9997
Gateway Forwarder inputs.conf:

[splunktcp://9997]
disabled = 0
- Setup $SPLUNK_HOME/etc/system/local/outputs.conf to send data to your existing Splunk Indexer. The example below is set up for SSL, so replace with what yours uses.
Gateway Forwarder outputs.conf:

[tcpout]
defaultGroup = myIndexer

[tcpout:myIndexer]
compressed = true
maxQueueSize = 128MB
server = 10.0.1.50:9998
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslVerifyServerCert = false
useACK = true
The Remote Forwarding Log Source:
- Install the Splunk Universal Forwarder
- Install TOR: sudo apt-get install tor
- Install socat: sudo apt-get install socat
- Setup $SPLUNK_HOME/etc/system/local/outputs.conf to send logs to localhost:9998
UF outputs.conf:

[tcpout]
defaultGroup = torRelay

[tcpout:torRelay]
server = localhost:9998
- Ensure socat is running to bounce 9998 to 9997. This is how we torrify the Splunk forwarder-to-Indexer traffic: socat tunnels the Splunk TCP stream through TOR’s local SOCKS proxy. You will want to work out how to make it auto-start on reboot and run in the background, but here is the command you can run manually to test it. Note that in this command you have to know the .onion address of the UF we are using as our TOR to Splunk indexer gateway on the receiving end.
sudo socat TCP4-LISTEN:9998,bind=localhost,fork SOCKS4A:localhost:h5copg6ecll6cqbr.onion:9997,socksport=9050
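If you want it to survive a reboot, one quick and dirty option (a sketch, assuming a Debian-style system that still runs /etc/rc.local at boot; a proper init script is better) is to background the same command there:

# in /etc/rc.local, before the final exit 0
socat TCP4-LISTEN:9998,bind=localhost,fork SOCKS4A:localhost:h5copg6ecll6cqbr.onion:9997,socksport=9050 &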
- Set Splunk to pick up logs etc. via the normal inputs.conf methods.
That is it: you have torrified Splunk forwarder-to-Indexer traffic. It lets you collect data from remote sources without exposing the actual destination address of your Indexing system to them.
Keep in mind that TOR itself encrypts the traffic so you could stick with the unencrypted “9997” outputs.conf style setup. Or you could still go all out and generate a new SSL Certificate Authority with ECC certificates and do all the normal certificate root and name validation that you should when setting up SSL for Splunk. If you want to learn more on how to do that come see a talk I am giving with a friend at Splunk .conf 2014 this year.
You might be lucky enough that all your log reporting hosts properly resolve to fully qualified domain names (FQDNs) (e.g. splunk.cal01.georgestarcher.com). If you are really lucky, part of your FQDN is a location code (e.g. cal01 = San Francisco). This can be useful if the location code is in the hostname of your wifi gear logs. You can use an auto lookup against a location table to match wifi MAC address activity to lat/lon based on the equipment’s site code in its hostname.
First, you need to create a location CSV file for the lookup to use. In our example we will use the following gs-location-lookup.csv. As a disclaimer, I do not have anything located at Splunk HQ; it is just a public address to use to demonstrate this example. We will place this file in $SPLUNK_HOME/etc/system/local/lookups, though you could place this sort of lookup in an app that you distribute to all your search heads. If you only know the address locations of your organization sites, just use Google Maps to find the lat/lon for the address. Note that the file needs a header row whose field names match what we reference in transforms.conf below (the lookup never references the ZIP column, so the siteZip name is just my choice):
siteCode,siteDomainName,siteFacility,siteAddress1,siteAddress2,siteCity,siteRegion,siteZip,siteCountry,siteLat,siteLon
cal01,cal01.georgestarcher.com,Splunk HQ,250 Brannan Street,1st Floor,San Francisco,California,94107,United States,37.783031,-122.391049
Next, we define both the lookup table and the host field site code extraction in transforms.conf. We do make the assumption our site location is the component of the FQDN just before our domain name.
[gs_site_code]
SOURCE_KEY = host
REGEX = (?P<hostSiteCode>[^\.]+)\.georgestarcher\.com$

[siteLookup]
filename = gs-location-lookup.csv
case_sensitive_match = false
Last, we add the automatic lookup in our props.conf, applied to any host with a value ending in georgestarcher.com. You probably noticed that I made the lookup command output fields that all start with host. This is because we might do other lookups against the site code, and this way we know this location information is specifically tied to the host name, not to a siteCode value that might come up in our logged data that we also wish to look up. After all, a syslog.cal01.georgestarcher.com might collect logs that contain a site code like cal02. Now you can search for logs based on their site location.
[host::*georgestarcher.com]
REPORT-gs-extractSite = gs_site_code
LOOKUP-gs-siteLookup = siteLookup siteCode AS hostSiteCode OUTPUT siteCity AS hostSiteCity, siteCountry AS hostSiteCountry, siteFacility AS hostSiteFacility, siteDomainName AS hostSiteDomainName, siteAddress1 AS hostSiteAddress1, siteAddress2 AS hostSiteAddress2, siteRegion AS hostSiteRegion, siteLat AS hostSiteLat, siteLon AS hostSiteLon
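For example, once the lookup is in place, a search like this works (the city value comes from our sample CSV):

hostSiteCity="San Francisco" | stats count by host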
Here is a bonus. If you wanted to map the events based on the host site location just add this geostats command to your searches:
| geostats latfield=hostSiteLat longfield=hostSiteLon count
Let’s follow up on our DNS theme of the last post. I have used my alert scripting to block attackers in the past such as those scanning heavily against SSH. Now I want to start considering emulating the complaint notification one can get from using fail2ban. So let’s start with just adding a simple external command lookup for getting the abuse contact for a given IP address. We will actually use the method found in the fail2ban complain module. So big thanks to them!
We want to have a search like this:
tag=authentication action=failure | stats count values(user) by src_ip | lookup abuseLookup ip AS src_ip
Once you add the transforms and python script below, the command should work in Splunk. Keep in mind that, like the dnsLookup, this has to be set up on any search head that will need it. I also have not yet worked on making this handle IPv6, which abusix.com can do with the lookups. The new abuseLookup will return a field to your events called abusecontact, which you can then use however you want in reporting events.
First edit your transforms.conf to add this stanza:
[abuseLookup]
external_cmd = abuseLookup.py ip abusecontact
fields_list = ip, abusecontact
Now create the python script abuseLookup.py in $SPLUNK_HOME/etc/system/bin/
#!/usr/bin/env python
# This uses the https://abusix.com/contactdb.html to lookup abuse contacts based on how fail2ban operates.
# This requires the dig command

import csv
import subprocess
import sys

def getAbuse(ip):
    # Reverse the IP octets and query the abusix.org contact database via a DNS TXT lookup
    ipOctets = ip.split('.')
    address = '.'.join(reversed(ipOctets)) + '.abuse-contacts.abusix.org'
    cmd = 'dig +short -t txt -q ' + address
    abuseemail = subprocess.check_output(cmd, shell=True).strip()
    # dig wraps the TXT record in quotes, so strip them off
    abuseemail = abuseemail[1:-1]
    return abuseemail

if len(sys.argv) != 3:
    print "Usage: python abuseLookup.py [ip field] [abuse email]"
    sys.exit(1)

ipfield = sys.argv[1]
abusecontact = sys.argv[2]
infile = sys.stdin
outfile = sys.stdout

# Splunk external lookups pass a CSV on stdin and expect a CSV with the same header on stdout
r = csv.DictReader(infile)
header = r.fieldnames
w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
w.writeheader()
for result in r:
    result[abusecontact] = getAbuse(result[ipfield])
    w.writerow(result)
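You can sanity check the script from the command line before wiring it into Splunk. Here is a hypothetical one-row test (192.0.2.1 is just a placeholder address):

printf "ip,abusecontact\n192.0.2.1,\n" | python abuseLookup.py ip abusecontact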
I use dnsmasq, a lightweight DNS caching server, at home on a Raspberry Pi to log DNS traffic when testing things (just uncomment the log-queries option and pull the logs into Splunk). But what about helping the performance of some DNS related activity with Splunk itself?
It is very common to do a LOT of DNS lookups when using Splunk for security purposes. This can create a metric ton of lookup requests to the DNS servers your Splunk server normally points at. That traffic load can cause unforeseen issues at times. I like to setup dnsmasq for local DNS caching on my Splunk search head to help reduce that load when IP lookups are repetitive.
Let’s say your normal network DNS server is 10.0.0.1. Rather than have your Splunk server query it or go directly to the root servers, we will set up dnsmasq on the server and point it to the normal server you use. This will let Splunk resolve DNS requests locally when they have been recently cached. We will also lock it down to only answer the local server, so other systems on your network do not try to use your Splunk server as a DNS server. In my testing, dnsmasq seems to re-forward requests in fairly short order despite a large cache value (number of host names), but this still should provide some protection if you run a poorly placed dnsLookup command in a Splunk search. Just think of how many times the same IP can come up in searches with large numbers of events. This has to be done on each of the Splunk search heads where the DNS lookups occur.
This is an example of a search that will potentially generate repetitive lookups for the same ip address:
tag=authentication action=failure | lookup dnsLookup ip AS src_ip
This is a better placement of the lookup so you only get one lookup per ip address:
tag=authentication action=failure | stats count by src_ip | lookup dnsLookup ip AS src_ip
Let’s walk through adding dnsmasq to help reduce the traffic caused by the first search and lookup example.
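Here is a minimal /etc/dnsmasq.conf sketch for this setup, assuming your upstream DNS server is 10.0.0.1 as above (the option names are standard dnsmasq, but check them against your distribution’s defaults):

# only answer queries from this host
listen-address=127.0.0.1
# do not read upstream servers from /etc/resolv.conf
no-resolv
# forward cache misses to the normal network DNS server
server=10.0.0.1
# number of host names to cache
cache-size=10000

Then make 127.0.0.1 the first nameserver entry in the search head’s /etc/resolv.conf so Splunk’s lookups hit the local cache first.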
In the “old days” we had to install the Google Maps App for Splunk to get IP geolocation lookups. Splunk added the built-in iplocation command in v6. The free MaxMind database is used by both the Maps app and Splunk natively.
It is very convenient and fun to make searches like:
tag=authentication action=failure | stats count values(user) by src_ip | iplocation src_ip
The issue we run into is that IP information changes often. Splunk does not provide any automatic direct update for the database. You only seem to get a new copy when you install a version release (e.g. upgrading v6 to v6.1.2). The documentation does not even detail where the database is located within Splunk. Lastly, you might have good reason not to upgrade to a release the moment it comes out just to have more current IP location information; you might not want to risk breaking something in your deployment until you can test it.
Here is how you can replace the database manually. You can use the free one that Maxmind updates monthly or you might pay for the commercial copy.
- Download the current database from http://dev.maxmind.com/geoip/geoip2/geolite2/ You will want the city binary gzipped version.
- Copy it to your Splunk search head server.
- Expand the gzipped file to get the file GeoLite2-City.mmdb
- Overwrite the copy in $SPLUNK_HOME/share/
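On the search head that boils down to something like the following (the exact download URL comes from the MaxMind page above and may change over time, so treat it as an assumption):

# download, unpack, and drop the fresh database over Splunk's copy
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
gunzip GeoLite2-City.mmdb.gz
cp GeoLite2-City.mmdb $SPLUNK_HOME/share/GeoLite2-City.mmdb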
That is it. You have replaced the existing copy with the currently available one. You should update it monthly, and again after you patch Splunk, since the upgrade will overwrite the copy in that location.