Splunk Stored Encrypted Credentials

I wrote about automating control of other systems from Splunk back in 2014. Things are very different now in the framework and SDK support Splunk provides. I have been looking to update some of the existing stuff in my git repo using the Splunk Add-on Builder. It handles a lot of the work for you when integrating with Splunk.

We now have modular alerts, which are the evolution of the alert script stuff we were doing in 2014. Splunk also now has modular inputs, the old style custom search commands, and the new style custom search commands. In all cases, you may want to use credentials for a system that you do not want hard coded or left unencrypted in the code.

The Storage Passwords REST Endpoint

You will typically find two blog posts when you look into storing passwords in Splunk: mine from 2014 and the one from Splunk in 2011, which I referenced in my detailed post with code. Both posts mention a critical point: the access permissions needed to make it work.

Knowledge objects in Splunk run as the user that owns them. I am talking about the Splunk application user context, not the OS account you start Splunk under. If I run a search, save it as an alert, and then attach an alert action, the code that executes in the alert action runs with my Splunk user permissions, as the owner of the search that triggered it at the time.

This is a critical point because you used to have to have a user capability known as ‘admin_all_objects’. Yes, that is as godlike as it sounds. It is normally assigned to the admin user role. That changed recently with Splunk 6.5.0. There is a new capability you can assign to a Splunk user role called ‘list_storage_passwords’. This lets your user account fetch from the storage passwords endpoint without being full admin over Splunk. It still suffers one downside: it is all or nothing access. If you have this permission you can pull ALL encrypted stored passwords. Still, it is an improvement. Yes, it can be misused by Splunk users with the permission if they figure out how to directly pull the entire storage. You have to decide who your adversary is: the known Splunk user who could pull it out, or an attacker or red team person who finds credentials stored in scripts, either directly on the system or in a code repository. I vote for using the storage as the better of the two choices.

Stored Credentials:

Where are they actually stored? On that point I am not going to bother with old versions of Splunk. You should be life-cycle maintaining your deployment, so I am going to just talk about 6.5.0+.

You need to have a username, the password, a realm, and which app context you want to put it in. Realm? Yeah, that is a fancy name for what the credential is for, because you might actually have five different accounts named admin. How do you know which is the admin you want for a given use? Let’s say I have the username gstarcher on the service adafruit.io. I want to store that credential so I can send IoT data to my account there. I also have an account named gstarcher on another service, and I want Splunk to be able to talk to both services using different alerts or inputs or whatever. So I use the realm to say adafruitio, gstarcher, password to define that credential. I might have the other be ifttt, gstarcher, apikey. I can tell them apart because of the realm.
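Under the hood, each stored credential is keyed by the (realm, username) pair, which is what lets two accounts with the same username coexist. A tiny sketch of how that key is formed (the `credential:<realm>:<username>:` stanza-name convention, trailing colon included, is how Splunk names these entries in passwords.conf; `stanza_name` is just an illustrative helper):

```python
def stanza_name(realm, username):
    """Build the stanza name Splunk uses for a stored credential:
    'credential:<realm>:<username>:' (note the trailing colon)."""
    return "credential:%s:%s:" % (realm, username)

# Two accounts sharing a username stay distinct because of the realm.
print(stanza_name("adafruitio", "gstarcher"))  # credential:adafruitio:gstarcher:
print(stanza_name("ifttt", "gstarcher"))       # credential:ifttt:gstarcher:
```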

Wait, what about app context? If you have been around Splunk long you know that all configurations and knowledge objects exist within “applications”, aka their app context. If you make a new credential via the API and do not tell the command what application you want it stored under, then it will use the one your user defaults to. That is most often the Searching and Reporting app, aka search. That means if you look in $SPLUNK_HOME/etc/apps/search/local/passwords.conf you will find the credentials you stored.

Example passwords.conf entry:
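A representative stanza looks like the following. The values are fabricated for illustration; the realm and username form the stanza name, and the encrypted value in this era of Splunk starts with a `$1$` prefix:

```ini
[credential:adafruitio:gstarcher:]
password = $1$mXSrIeOXJUxVj2BWEaaY3ZWB
```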

Do you notice it is encrypted? Yeah, it will be encrypted ONLY if you add the password using the API calls. If you do it by hand in the .conf text file then it will remain unencrypted, even after a Splunk restart. This is odd behavior considering Splunk uses splunk.secret to auto-encrypt passwords in files like server.conf on a restart. So don’t do that.

How is it encrypted? It is encrypted using the splunk.secret private key for the Splunk install on that particular system. You can find that in $SPLUNK_HOME/etc/auth. That is why you tightly control who has access to your Splunk system at the OS level. Audit it, make alerts on SSH into it, etc. This file is needed because the software must have a way to know its own private key to decrypt things. Duane and I once wrote something in 30 minutes on a Saturday to decrypt passwords if you have the splunk.secret file and conf files with encrypted passwords. So protect the private key.

Let me say this again. The app context ONLY matters for where the password lands from a passwords.conf perspective. The actual storage_passwords REST endpoint has no care in the world about app permissions for the user. It only checks whether you have the capability list_storage_passwords. It will happily return every stored password to a GET call. It will ONLY filter results if you set the app name when you make the API connection back to the Splunk REST interface. If you don’t specify the app as a filter it will return ALL credentials stored. Other than that, it is up to you to use username and realm to grab just the credential you need in your code. Don’t like that? Then please, please log a Splunk support ticket of type Enhancement Request against the core Splunk product asking for it to be updated to be more granular and respect app context permissions. Be sure to give a nice paragraph on your particular use case. That helps their developer stories.
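A minimal sketch of that filter-it-yourself pattern, assuming splunklib from the Splunk Python SDK has been copied into your app's bin folder; `find_credential` and `fetch_from_splunk` are illustrative names:

```python
def find_credential(storage_passwords, realm, username):
    """Walk everything the storage/passwords endpoint returned and keep
    only the entry matching our realm and username."""
    for cred in storage_passwords:
        if cred.realm == realm and cred.username == username:
            return cred.clear_password
    return None

def fetch_from_splunk(session_key, realm, username):
    # splunklib comes from the Splunk Python SDK, copied into the app's bin folder
    import splunklib.client as client
    # Passing app= narrows results to that app context; omit it and the
    # endpoint returns every credential your capability allows.
    service = client.connect(token=session_key, owner="nobody", app="search")
    return find_credential(service.storage_passwords, realm, username)
```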

Splunk Add-on Builder:

There are two ways the Splunk Add-on Builder handles “password” fields. First, if you place a password field in the Alert Actions Inputs panel for your alert, the Splunk GUI will obscure the password. The problem is that it is NOT encrypted. Let’s say you made this alert action and attach it to a search. The password gets stored unencrypted in savedsearches.conf of the app where the search is saved.

The Add-on Builder provides an alternative solution that does encrypt credentials. You have to use the Add-on Setup Parameters panel and check the Add Account box. This lets you build a setup page you can enter credentials into for the TA. Those credentials will be stored in passwords.conf in the TA’s app context. There is one other issue. Currently the Add-on Builder internal libraries hard code the realm to be the app name. That is not great if you are making an Adaptive Response for Splunk Enterprise Security and want to reference credentials stored using the ES Credential Manager GUI. If you are making a TA that will never have multiple credentials sharing the same username, then this is still ok.

Patterns for Retrieval:

This is where everyone has the hardest time: finding code examples on actually getting your credential back out. And it varies based on what you are making. So I am going to show an example for each type. Adapting it is up to you.

Splunklib Python SDK:

You will need to include the splunklib folder from the Splunk Python SDK in your App’s bin folder for the newer non-Intersplunk style patterns. Yeah I know, why should you have to keep putting copies of the SDK in an app on a full install of Splunk that already should have it? Well, there are reasons. I don’t get them all, but it has to do with design decisions and issues around paths, static vs. dynamic linking concepts, etc. All best left to the Splunk dev teams. Splunk admins hate the resulting larger application bundles, but it is what it is.

Adding a Cred Script:

This is just a quick script that assumes it is in a folder and the splunklib folder is one level up, which is why the sys.path.append is what it is in this example. This is handy if you are a business with a central password control system. You could use this as a template on how to reach into Splunk to keep the credentials Splunk needs in sync with the centrally managed credential.
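A hedged sketch of such a script. The connection details, realm, and account values are illustrative placeholders; `storage_passwords.create`/`delete` are the Splunk Python SDK calls, and replacing an existing credential requires deleting it first:

```python
#!/usr/bin/env python
# Hypothetical credential sync script. Layout assumption: this file sits in a
# folder with the splunklib folder one level up, hence the sys.path.append.
import os
import sys

def sync_credential(service, realm, username, password):
    """Create (or replace) the realm/username credential via storage/passwords."""
    for cred in service.storage_passwords:
        if cred.realm == realm and cred.username == username:
            # No in-place update in the SDK: delete the old entry first.
            service.storage_passwords.delete(username, realm)
            break
    return service.storage_passwords.create(password, username, realm)

def main():
    sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), ".."))
    import splunklib.client as client
    # Connection details are illustrative; point them at your own instance.
    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme", app="search")
    sync_credential(service, "adafruitio", "gstarcher", "s3cret")

if __name__ == "__main__" and "--run" in sys.argv:
    main()  # only fires with an explicit --run flag
```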

Modular Alert: Manual Style

The trick is always how you get the session_key to work with. Traditional modular alerts send their information into the executed script via stdin. So here we grab stdin, parse it as JSON, and pull off our session_key. Using that, we can make a simple connection back to Splunk with the session_key and fetch the realm/username that are assumed to be set up in the modular alert configuration, which is also sent in that payload of information.
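A sketch of that flow. The payload keys `session_key` and `configuration` are what Splunk sends a custom alert action on stdin; the `realm`/`username` parameter names are assumed to be defined in your alert action's configuration:

```python
import json
import sys

def parse_session_key(payload_text):
    """Pull session_key plus the realm/username the alert was configured
    with out of the JSON payload Splunk writes to stdin."""
    payload = json.loads(payload_text)
    config = payload.get("configuration", {})
    return payload["session_key"], config.get("realm"), config.get("username")

def get_credential(session_key, realm, username):
    import splunklib.client as client  # copied into the app's bin folder
    service = client.connect(token=session_key)
    for cred in service.storage_passwords:
        if cred.realm == realm and cred.username == username:
            return cred.clear_password
    return None

# In the alert action's entry point:
# session_key, realm, username = parse_session_key(sys.stdin.read())
# password = get_credential(session_key, realm, username)
```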

Add-on Builder: Alert Action: Fetch realm other than app

Again it comes down to how you obtain the session_key of the user that fires the knowledge object. The Add-on Builder has this great helper object, and session_key is just a method hanging off it. We do not even have to grab stdin and parse it.
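A sketch of the pattern, assuming the Add-on Builder's generated `helper` object with its `session_key` and `get_param` accessors (exact names may vary by Add-on Builder version, so check the code it generates for you); `pick_credential` is an illustrative helper:

```python
def pick_credential(storage_passwords, realm, username):
    """Match the one stored credential we were configured to use."""
    for cred in storage_passwords:
        if cred.realm == realm and cred.username == username:
            return cred.clear_password
    return None

def fetch_cred(helper):
    import splunklib.client as client       # bundled in the app's bin folder
    # helper.session_key saves us the stdin parsing the manual style needs.
    service = client.connect(token=helper.session_key)
    return pick_credential(service.storage_passwords,
                           helper.get_param("realm"),
                           helper.get_param("username"))
```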

Add-on builder: Alert Action: App as realm

Just call their existing method; you only specify the username because the realm is hardcoded to the app name.
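Something like the following, hedged heavily: `get_user_credential` is the Add-on Builder helper method for setup-page credentials as of recent versions, and its exact name and return shape should be verified against the code the builder generates for you:

```python
def get_password(helper, username):
    # The Add-on Builder stores setup-page credentials with the app name as
    # the realm, so only the username is needed here.
    account = helper.get_user_credential(username)
    # Assumed to return a dict-like record holding username and password.
    return account["password"] if account else None
```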

Custom Search Command: Old InterSplunk Style

In an old style custom search command the easiest pattern is to leverage the Intersplunk library to grab the passed “settings”, which include the sessionKey field. After we have that, we are back to our normal Splunk SDK client pattern. You can see we are just returning all credentials. You could use arguments on your custom search command to pass in the desired realm and username and borrow the credential matching for/if pattern from the modular alert above. This assumes you have put the splunklib folder from the Splunk Python SDK in the bin folder of the app where your command exists. Also, you must set passauth = true in the commands.conf stanza where you define your search command.
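A sketch of the old style command. `splunk.Intersplunk` ships with Splunk itself; the result-row shape is illustrative:

```python
def credential_rows(storage_passwords):
    """Turn stored credentials into result rows (all of them here; a real
    command would filter by realm/username arguments)."""
    return [{"realm": c.realm, "username": c.username} for c in storage_passwords]

def run():
    import splunk.Intersplunk as isp    # ships with Splunk
    import splunklib.client as client   # copied into the app's bin folder
    # settings carries sessionKey when passauth = true is set in commands.conf
    results, dummyresults, settings = isp.getOrganizedResults()
    session_key = settings.get("sessionKey")
    service = client.connect(token=session_key)
    isp.outputResults(credential_rows(service.storage_passwords))
```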

Custom Search Command: New v2 Chunked Protocol Style

The new v2 chunked style of search command gives us an already authenticated session connection via the self object. Here we don’t even need to find and handle the session_key; we just call the self.service.storage_passwords method to get all the credentials and leverage our usual SDK pattern to get the credential we want. The below pattern does not show it, but you could pass realm and username in via arguments on your custom search command. You could then use the credential matching for/if pattern from the modular alert example up above to grab just the desired credential.
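A v2-style sketch using the `splunklib.searchcommands` library. The command class name is hypothetical, and the import is guarded only so the sketch loads outside a Splunk app; in the real app, splunklib lives in the bin folder:

```python
import sys

# Guarded import so this sketch can load outside Splunk for illustration.
try:
    from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration
    HAVE_SPLUNKLIB = True
except ImportError:
    HAVE_SPLUNKLIB = False

def rows_for(storage_passwords, realm=None, username=None):
    """One result row per stored credential, optionally narrowed by realm/username."""
    for cred in storage_passwords:
        if realm and cred.realm != realm:
            continue
        if username and cred.username != username:
            continue
        yield {"realm": cred.realm, "username": cred.username}

if HAVE_SPLUNKLIB:
    @Configuration()
    class ListStoredCredsCommand(GeneratingCommand):
        def generate(self):
            # self.service is already authenticated in a v2 command.
            return rows_for(self.service.storage_passwords)

    # In the real command file, end with:
    # dispatch(ListStoredCredsCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```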

Modular Input: Manual Style

I honestly recommend using the Add-on Builder these days. But if you want to use credentials with a manually built input, Splunk has documentation here: http://dev.splunk.com/view/SP-CAAAE9B#creds . Keep in mind you have to set up which user to send a session_key for by specifying the name in passAuth in the inputs.conf definition.
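For a modular input the session key arrives inside the XML configuration stream Splunk writes to the script's stdin, so the retrieval step is a small parse; a sketch (the sample XML below is fabricated for illustration):

```python
import sys
import xml.dom.minidom

def session_key_from_config(xml_text):
    """Extract <session_key> from the XML Splunk feeds a modular input on stdin."""
    doc = xml.dom.minidom.parseString(xml_text)
    nodes = doc.getElementsByTagName("session_key")
    if nodes and nodes[0].firstChild:
        return nodes[0].firstChild.data
    return None

# In the input's entry point:
# session_key = session_key_from_config(sys.stdin.read())
# then connect back with splunklib as in the earlier examples
```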

Modular Input: Add-on Builder

This works the same as our alert actions because of the helper object and the wrapping the Add-on Builder does for us. See the other Add-on Builder examples above. It is much easier to use and could be made to use the GUI and named user creds.

Mac Forensics – Automator Love – Make a Dictionary

I really really love Automator on the Mac. It just makes it so easy to set up scripts you can run again later. More importantly, it lets you write a script solution that is point and click for someone else when they need help.

I had an email from a Detective who does forensics work on child exploitation cases. He wanted a simple way to build a dictionary from a selection of folders and files. He wanted to use that dictionary with my crowbar tools to go after a FileVault from a Mac.

Here is what I did.


Hard Crashing my Macbook Pro

I recently moved over completely to a MacBook Pro at work. I had a Windows XP desktop with dual monitor support and two external drives hooked up via FireWire. On top of that I use PGP and had full disk encrypted both my external drives.

Shortly after completely shifting over to my MacBook Pro I found it hard crashing. I mean the hard crash that says on the laptop screen that you have to use the power button to reboot and recover. It took some basic troubleshooting, but here is what I found. I am running OS X Leopard with VMware Fusion, with Windows XP and PGP installed inside of it. I had to change the connection of the external drives from FireWire to USB, because VMware cannot pass through FireWire devices to the XP VM. It has to be USB. I plug in the drives while XP has focus and I get the normal prompt for the drive passphrase. I enter it and everything mounts up fine. It is not until after a good 5 minutes or more, with no specific timing, that the crash occurs. Every time. I rebooted, let the drives connect but hit cancel so they never mounted using PGP, and left the MacBook running while I went to lunch. Magic, no crashes occurred. Lastly, I went to decrypt the drives and found that PGP on the Mac side can mount the drives but says it cannot decrypt them because they were encrypted using PGP for Windows. So I had to hook them back to my old desktop and decrypt them there. Fortunately I had saved uninstalling PGP from the desktop as my last step and had not done it yet.

I have to make some decisions about the type of data on the external drives, maybe just encrypting some of it as a PGP disk file instead of full disk encryption. Mixing PGP FDE inside VMware is definitely a quick way to crash your Mac repeatedly. I had even posted this on Twitter and got a response back from VMware. They agree it’s an issue, something about hardware, drivers, etc. Of course, no solution. Likely that is something for PGP to work out.

Passwords: Writing them down

I noticed over on Andy the ITGuy’s blog a post about writing down passwords. I agree completely that passwords should be recorded in a work environment. They are the property of the company as much as any piece of hardware or software. How you write them down and handle them is very important, though.

Here is what we do. We keep all our sensitive passwords in an Excel spreadsheet in our IT area on our file server. The folder is locked down tightly with group permissions to just the IT group. Next, we turned on file and object auditing on the passwords subfolder in that area. Toss in Snare for Windows, which sends all the object audit events to my Kiwi Syslog box. The file is encrypted via PGP, using the keys of the local IT staff plus the key of a backup person in our corporate office. Finally, Kiwi Syslog sends its events to a MySQL database so I can run reports whenever I wish. This way I can tell exactly who goes into the folder and decrypts the file at any time. The staff just delete the file once done looking up the password they need.

You cannot just rely on domain permission lockdown alone. What happens if someone gets elevated privileges without authorization? That is why we use PGP. Only people whose keys were used can get into the file, should they even reach it.

Another advantage of using Excel: if you rotate passwords you just make a new tab, copy the current tab into it, and name the tabs appropriately. Over time you will have an entire history of all your previous passwords. This is important in larger environments where you may not have changed passwords on all equipment like you thought. You can look up older passwords to try without locking yourself out just because no one is around who remembers passwords from months or years ago.

Lastly, print a copy. Whenever we change any passwords we print a new copy of the entire Excel workbook. Proper headers and footers are set so we can tell which pages hold older passwords. Next we seal that in an envelope, signing and dating across the seal. Finally we drop it in a fire resistant safe.

Between these methods you have easy access to password lists, a secured electronic copy that gets backed up with all other server-based data, and lastly a hard copy in case the backups or server are unavailable.

All this can still work in a smaller environment. It is just that the backup key used to encrypt the file is more likely to belong to a company officer than to a second IT person.