Liar

Note: Updated 1/27/2014 to include definitions and improved nomenclature. Thanks to Tom Cross (@_decius_) for the guidance.

Over the last couple of years I've done close to 50 different pilot and production deployments of StealthWatch throughout the Central United States for Fortune 1000 companies. I have yet to run through the process without finding security breaches. Some of the breaches are routine; others are scary and sophisticated. In many of the breaches I meet operators and managers who don't understand why the attack was missed by IDS, SIEM, and malware sandboxing. I'd like to walk through some detection concepts and the breaches each tends to catch, to clarify the role of each in a healthy security infrastructure. (Note: It drives me insane when I hear vendors talking about their “APT” detection; see: Why I Don't Say APT for an explanation.)

Definitions & Simplified Matrix

In looking at how security events can be caught, there are two major components. The first is the item being inspected: the focus. A detection mechanism can examine the object entering the subject/victim (file, packet, etc.) or the behavior that the subject displays after compromise. This makes the two types of foci: object or behavior. The second component is how the focus is examined: the method. The focus can be checked against known bad things (blacklist) or against anything that is not known good (whitelist.)

This matrix provides four buckets:

Focus / Method | Blacklist (Known Bad) | Whitelist (Known Good)
Object         | Signature Detection   | Whitelisting
Behavior       | Behavioral Detection  | Anomaly Detection

Signature Detection: Inspection of an object (file, packet, etc.) for patterns known to be bad.
Behavioral Detection: Inspection of the behavior of a victim for known bad behaviors.
Whitelisting: A method used in signature detection to reduce false positives and increase speed. Not a detection method by itself (as far as I know.)
Anomaly Detection: Inspection of subject behavior that deviates from established good behaviors. The creation/maintenance of the behavior whitelist is based on validated baselines.

Boolean vs. Algorithmic

When using blacklist-based detection schemes (signature & behavioral), the processing will almost always be boolean. This is because we are aware of the bad behavior and can draw conclusions about the motivation. If an IPS matches a pattern for a SQL injection, we can alarm on an attempted breach. We can add additional IF...THEN statements to increase accuracy: IF [SQL Injection] triggered AND [target is vulnerable] THEN Block ELSE Log only. The power of modern IPS and SIEM is that they can execute business logic based on scenarios that have been well documented. This allows the output to be a red or green light (or black and white.)

Whitelist-based technologies have unique challenges. First, it is known that things will change; there will be degrees of variance. This has led anomaly engines to rely on index calculations. Where blacklist processors do boolean processing against events, whitelist technologies do algorithmic processing against their subjects (hosts, users, etc.) When a subject departs from the whitelist, the engine assigns points for the departure. The farther the departure, the more points are added to the subject's score. Where boolean processing outputs black and white (high and low/good and bad), algorithmic processing puts out a series of scores ranging from 0 to infinity. It is easiest to think of algorithmic processing as a "demerit system" that adds many shades of gray to boolean's black & white.
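
To make the contrast concrete, here is a minimal sketch in Python. The event fields, metrics, and thresholds are my own illustrative assumptions, not any vendor's logic: the first function is a boolean blacklist rule, the second an algorithmic demerit score.

```python
def boolean_ips_rule(event):
    """Blacklist logic: known-bad pattern plus known context yields a hard verdict."""
    if event["signature"] == "sql_injection":
        if event["target_vulnerable"]:
            return "BLOCK"       # attempted breach against a vulnerable host
        return "LOG_ONLY"        # attack seen, target not vulnerable
    return "PASS"

def anomaly_score(observed, baseline):
    """Whitelist logic: accumulate demerit points for departures from the baseline."""
    score = 0.0
    for metric, value in observed.items():
        expected = baseline.get(metric, 0)
        if expected and value > expected:
            # the farther the departure, the more points assigned
            score += (value - expected) / expected * 10
    return score                 # ranges from 0 toward infinity: shades of gray

print(boolean_ips_rule({"signature": "sql_injection", "target_vulnerable": True}))
print(anomaly_score({"bytes_out": 5_000_000}, {"bytes_out": 250_000}))
```

The boolean rule can only answer block/log/pass, while the score keeps growing with the size of the deviation, which is exactly the extra nuance the whitelist side needs.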

The Stupid and Lazy: Routine Threats

Signature-based detection, either on the wire (IDS) or on disk (AV), catches old exploits that have been documented and publicly disclosed. If a web server gets hit by a SQL injection exploit that has been documented for six weeks, the operator and the attacker are stupid, lazy, or both. That type of attack will generate a log entry that points back to the attacker, implicating him as well as the incompetence of the operator/administrator (and his management.) These should never happen. This is a 2005 problem and is not an advanced threat.

Also falling into this category are users clicking links to give away their passwords or execute a piece of code. These are routine threats. Signature-based IDS, managed antivirus, and content filtering are all needed as part of a basic security architecture. These types of threats can only be successful in organizations that are oblivious to the reality of cyber threats.

All of these detection methods focus on object-centric detection. When the subject (victim) of an attack is fed the object (payload, file, etc.), the object is inspected for malicious object markers (signatures.) If it matches, the object is destroyed. Signature-based/object-centric detection can ONLY catch these routine threats, regardless of what the vendor or analyst tries to tell you.
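
As a rough illustration of object-centric inspection, here is a tiny sketch; the byte patterns are placeholders I invented, not real AV or IDS signatures.

```python
# Object-centric (signature) detection: the object is inspected for
# known-bad markers before it ever reaches the subject.
KNOWN_BAD_PATTERNS = [
    b"' OR 1=1 --",           # classic SQL injection fragment (illustrative)
    b"\x4d\x5a\x90\x00bad",   # pretend malware byte sequence (illustrative)
]

def inspect_object(payload: bytes) -> bool:
    """Return True if the object matches a blacklist marker and should be dropped."""
    return any(pattern in payload for pattern in KNOWN_BAD_PATTERNS)

if inspect_object(b"GET /login?user=' OR 1=1 --"):
    print("Object destroyed: matched known-bad signature")
```

If the pattern isn't in the list, the object sails through, which is the whole limitation of this bucket.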

The Innovative: Emerging Threat

A new exploit has been crafted and can evade detection. There is no threat marker for detecting it, so object-centric inspection will not catch it. One approach to catching this "zero day" exploit is malware sandboxing. The object is injected into a virtual machine and executed. The sandbox monitors the subject's (victim's) behavior, looking for malicious subject markers (communicating with a C&C, scanning the network, etc.) If a behavioral marker is detected, an object marker (signature) can be created and fed back into the mechanisms listed in the routine threat bucket.
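
A simplified sketch of that feedback loop might look like the following; the behavior names and the hash-based "signature" are illustrative assumptions on my part, not how any particular sandbox product works.

```python
import hashlib

# Known-bad subject behaviors the sandbox watches for (illustrative names).
KNOWN_BAD_BEHAVIORS = {"c2_beacon", "network_scan", "registry_persistence"}

def detonate(sample, observed_behaviors):
    """If the detonated sample shows a known-bad behavior, emit an object marker."""
    if observed_behaviors & KNOWN_BAD_BEHAVIORS:
        # turn the behavioral finding into an object signature to feed back
        # into the routine-threat mechanisms (AV/IDS blacklists)
        return hashlib.sha256(sample).hexdigest()
    return None

print(detonate(b"unknown.exe bytes", {"c2_beacon", "file_write"}))
```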

Another approach is SIEM. The idea behind SIEM is to aggregate security events from across the network and look for markers in behavior that can show a breach. These checks focus on subject behavioral markers. They can look for things like multiple log-in failures across different servers to show a possible breach. These are good checks, but as with every subject behavioral tool, they require knowledge of what a breach will look like. They do not generally require knowing what the exploits look like (though they can use those.) Both approaches check for known bad behaviors.
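
A toy version of that kind of correlation rule, with invented event fields and thresholds, could look like this:

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)   # illustrative correlation window
DISTINCT_SERVERS = 3            # illustrative threshold

def correlate_failed_logins(events):
    """events: dicts with 'user', 'server', 'time', 'outcome'. Alert when one
    account fails on several different servers inside the window."""
    by_user = defaultdict(list)
    alerts = []
    for e in sorted(events, key=lambda e: e["time"]):
        if e["outcome"] != "failure":
            continue
        by_user[e["user"]].append(e)
        recent = [x for x in by_user[e["user"]] if e["time"] - x["time"] <= WINDOW]
        if len({x["server"] for x in recent}) >= DISTINCT_SERVERS:
            alerts.append((e["user"], e["time"]))
    return alerts

events = [
    {"user": "svc_admin", "server": s, "time": datetime(2014, 1, 27, 9, 0, i), "outcome": "failure"}
    for i, s in enumerate(["db01", "web02", "file03"])
]
print(correlate_failed_logins(events))
```

Notice the rule still encodes a known-bad behavior (credential guessing); it just doesn't care which exploit delivered it.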

The Elite: Sophisticated, Targeted Threat

It may surprise some of my readers to know that most of the sophisticated attacks that occur don't need to use an exploit or malware at all. The easiest piece of the network to exploit is its users. System administrators of Fortune 500 companies tend to give out too much information and access when seduced by a pretty blonde. Many underpaid analysts find $10,000 to be a fair price for calling in sick on a particular day. Politically motivated employees may be willing to drop some sensitive data onto a thumb drive for the sake of the “movement.”

For criminals or state-sponsored attackers, the entire portfolio of crime and espionage is available. In these cases there is NO EXPLOIT to detect, which means even malware sandboxing is useless.

The only effective school of detection for this type of attack, which includes manipulation of layer 8 (humans), is network-based anomaly detection (NBAD.) NBAD builds tight profiles around what normal looks like in a network. Anomaly detection doesn't require knowing bad; it requires knowing good/normal. When patterns shift, good anomaly detection will alert on it.
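
To illustrate the idea (not any NBAD product's actual math), here is a minimal baseline-and-deviation sketch; the hosts, traffic numbers, and three-sigma threshold are all assumptions for the example.

```python
import statistics

def build_baseline(history):
    """history: {host: [daily_bytes_out, ...]} -> {host: (mean, stdev)}"""
    return {h: (statistics.mean(v), statistics.pstdev(v)) for h, v in history.items()}

def check_anomaly(host, observed_bytes, baseline, sigmas=3):
    """Alarm only on departure from the host's own established normal."""
    mean, stdev = baseline[host]
    if stdev and abs(observed_bytes - mean) > sigmas * stdev:
        # note: the alarm says only "unusual", not which attack this is
        return f"Anomaly: {host} moved {observed_bytes} bytes (baseline ~{int(mean)})"
    return None

history = {"hr-file-server": [200_000, 220_000, 180_000, 210_000, 190_000]}
baseline = build_baseline(history)
print(check_anomaly("hr-file-server", 9_000_000, baseline))
```

The point of the sketch is the shape of the logic: there is no list of bad things anywhere, only a learned definition of normal and a measure of how far the subject has drifted from it.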

Anomaly detection is not subject behavioral monitoring. Behavioral monitoring is important, but it leverages knowledge of known bad behavior (scanning, communicating with C&C, etc.) Anomaly alarms say: "New/uncommon traffic pattern or behavior detected." If the anomaly engine can tell you what it is (SSH brute force, SQL injection, TCP scan, etc.), it is not an anomaly alarm; it is a behavioral marker/signature. You do need behavioral detection, but never let a vendor tell you that it is the same as anomaly detection. Behavioral detection will catch a lot of things, but the hardest layer of detection to bypass is sophisticated anomaly detection, and it is critical in catching the sophisticated, targeted threats that will land your organization in the papers. Another challenge with NBAD is that you need a smarter staff to use it to catch the smart bad guys (see: Time to Hire a Security Team.)

Wrap Up

Detection Method                                                 | Known Threats | Zero Day/Emerging | Targeted/Sophisticated
Signature Detection (AV/IDS/Content Filter)                      | Yes           | No                | No
Behavioral Detection (SIEM, Adv Malware, HIPS, Network Behavior) | Yes           | Yes               | Possible
Anomaly Detection (NBAD, UBAD)                                   | Yes           | Yes               | Yes

The table above focuses on detection, not blocking. Since sandboxing, AV, IPS, and filtering have the ability to stop an exploit before it kicks off, they are critical components of protecting an infrastructure. NBAD and the newer user-based anomaly detection (UBAD) monitor behavior after the initial compromise occurs. NBAD/UBAD is the best at surveillance but cannot (generally) prevent the breach. Anomaly detection is critical in catching advanced attacks and validating that the enforcement mechanisms are working (see: When Enforcement Doesn't...)

It's also important to note that security products may incorporate different degrees of each of the three detection methods. There are real challenges in each school of detection (false positives, management, circumvention, etc.)

All networks need object-centric signature detection (AV, IDS, Content Filtering, etc.) as well as sandboxing and/or SIEM to detect emerging threats. If you actually want a chance to catch sophisticated attacks, you need to hire a smart team that will know what to do with anomaly detection. Ask specific questions like:

  • How will this tool assist in catching a disgruntled or politically motivated employee who downloads our data and leaves with it? (a la Snowden and Manning)
  • How will this tool assist in detecting breaches where credentials have been disclosed or stolen?
  • How will this tool assist in catching zero day exploits against my assets?

If there isn’t a credible answer for these, they are jerking you around. It’s not an “APT tool.”