Splunk Getting Extreme Part Nine

I ran a series on Splunk Enterprise’s Extreme Search over a year ago. It is time to add a discussion about alfacut. I had not realized the exact behavior of alfacut at the time of that blog series. 

Alfacut:

What is alfacut in XS? Extreme Search generates a score called WhereCIX, ranging from 0.0 to 1.0, that measures how compatible the data you are evaluating is with the Extreme Search statement you made in the xswhere command. By default, the xswhere command uses an alfacut (the minimum WhereCIX score required to pass) of 0.2.

Here is where I rant. You CAN put alfacut>=0.7 in your xswhere command. This is like thinking you are cutting with shears when you are actually cutting with safety scissors. It does not work the way you would expect when you use compound xswhere statements (AND, OR).

If you specify the alfacut in the xswhere command, it applies the limit to BOTH sides of the compound individually. It does NOT apply to the combined score, which is what you would expect as a user looking at the output of the xswhere command. The WhereCIX displayed in that output is the compatibility score of the entire compound statement. If we want to filter on the combined score, we simply have to know this is how things work and add our filter in the next step of the SPL pipeline.
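The pattern, then, is to leave the alfacut alone inside xswhere and filter the combined score yourself. A minimal sketch, using made-up context names context_a and context_b in a container called my_container purely for illustration:

| xswhere value from context_a in my_container is extreme and value from context_b in my_container is anomalous
| where WhereCIX>=0.7

The where on the second line is evaluated against the WhereCIX of the whole compound statement, which is the behavior we actually want.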

XSWhere Compound Statements:

How you define your interesting question for ES Notables matters. Let’s take Excessive Failures.

The default Excessive Failures is not even Extreme Search based:
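It is just a plain correlation search over the Authentication data model with a static threshold. Roughly something like this (a sketch from memory, not the exact shipped search; your ES version's macros and field names may differ):

| tstats summariesonly=true count from datamodel=Authentication.Authentication where Authentication.action="failure" by Authentication.app, Authentication.src
| `drop_dm_object_name("Authentication")`
| where count>=6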

This says we consider interesting a failure count greater than or equal to 6 by src and app.

That is going to be VERY noisy, with little context from existing behavior. A static limit also gives bad actors a limbo bar to glide under.

What we really want to ask is: “Give me extreme, non-normal failure counts by source and app.” That would squash down common repetitive abuse.

We need a compound statement against anomalous and magnitude contexts.

The SA-AccessProtection app has an existing XS Context Generator named: Access – Authentication Failures By Source – Context Gen

It looks like this:
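Reconstructed roughly here, since the shape matters more than the screenshot (check the actual saved search in SA-AccessProtection for the exact SPL):

| tstats `summariesonly` count as failure from datamodel=Authentication.Authentication where Authentication.action="failure" by _time span=1h, Authentication.src
| stats median(failure) as median, min(failure) as min, count as count
| eval max = median * 2
| xsUpdateDDContext app=SA-AccessProtection name=failures_by_src_count_1h container=authentication scope=app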

The issue is that measuring failures by src alone is misleading: what if the failure patterns of different apps, such as ssh and mywebapp, differ but come from the same source? You don't want one app's pattern hiding the other's.

Magnitude Context Gen:
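A sketch of what that context gen could look like, splitting by both src and app. The context name failures_by_src_app_count_1h, the hourly span, and the multi-field class are my own choices here; confirm how your XS version handles a multi-field class before relying on it:

| tstats summariesonly=true count as failure from datamodel=Authentication.Authentication where Authentication.action="failure" by _time span=1h, Authentication.src, Authentication.app
| `drop_dm_object_name("Authentication")`
| stats median(failure) as median, min(failure) as min, count as count by src, app
| eval max = median * 2
| xsCreateDDContext app=SA-AccessProtection name=failures_by_src_app_count_1h container=authentication scope=app class="src,app"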

That gives us the stats for a source and the specific app it is abusing, such as sshd. I used xsCreateDDContext instead of xsUpdateDDContext because the first time we run it we need to create a new context file rather than update an existing one.

We need more than this, though. Maybe there is a misbehaving script, and we do not want ES Notable events for simply extreme failure counts, only for non-normal extreme failure counts.

To use the anomalous XS commands you need the extra XS visualization app installed from Splunkbase: https://splunkbase.splunk.com/app/2855/

Anomalous Context Gen:
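The anomalous context gen follows the same pattern, but builds a context whose single concept is anomalous rather than the minimal-through-extreme magnitude terms. A sketch only: the exact type value and the statistics fields xsCreateDDContext expects for an anomalous context depend on your XS/XSV version, so verify them against that app's documentation:

| tstats summariesonly=true count as failure from datamodel=Authentication.Authentication where Authentication.action="failure" by _time span=1h, Authentication.src, Authentication.app
| `drop_dm_object_name("Authentication")`
| stats count as count, avg(failure) as average, stdev(failure) as stdev by src, app
| xsCreateDDContext app=SA-AccessProtection name=failures_by_src_app_anomalous_1h container=authentication scope=app class="src,app" type=anomalous terms=anomalous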

Combining Contexts:

Our NEW xswhere:
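With the two context names used in my sketches above, it would read something like this (the by clause mirrors the src,app class; adjust both to whatever you actually named):

| xswhere failure from failures_by_src_app_count_1h in authentication by "src,app" is extreme and failure from failures_by_src_app_anomalous_1h in authentication by "src,app" is anomalous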

Notice we did not specify the alfacut in the xswhere command. Instead, we use a following where statement:

| where WhereCIX>=.9

That applies the limit to the compatibility score of the entire compound statement, and the result will be much less noisy. If you have a compound statement and a WhereCIX of 0.5, you typically have a one-sided match, meaning the event was perfectly compatible with one side of your statement but not at all with the other. This is common when you combine anomalous and magnitude contexts like this: yes, the source was super noisy, but that was totally normal for it, so it was NOT above normal even though it WAS extreme.

One possible new Excessive Failed Logins search could look something like this:
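Pulling the pieces together, again with my hypothetical context names and a WhereCIX cut you would tune for your environment:

| tstats summariesonly=true count as failure from datamodel=Authentication.Authentication where Authentication.action="failure" by Authentication.src, Authentication.app
| `drop_dm_object_name("Authentication")`
| xswhere failure from failures_by_src_app_count_1h in authentication by "src,app" is extreme and failure from failures_by_src_app_anomalous_1h in authentication by "src,app" is anomalous
| where WhereCIX>=0.9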

 
