Holy Grail

"I'm hoping to find a solution set that can provide a single pane of glass to InfoSec operations" - every Chief Information Security Officer

Having a single solution that can answer every information security question has long been the Holy Grail for CISOs and other security managers. In Malory's Le Morte d'Arthur everyone (including Arthur and Lancelot) gave up on it long before it was found by Galahad. Indiana Jones found it a couple thousand years after everyone else gave up. In our digital world, the opposite seems to be true. There is no scripture, fable or legend that leads us to believe that a Holy Aggregator exists, but we think that if we wish hard enough it will appear.

I recently met with an agency that had purchased five different SIEMs over the last 18 months looking for the Holy Aggregator. You know what they discovered? They all fell way short of delivering on their hopes of a single pane. In this article, I'd like to discuss why we so desperately seek this single pane of glass, what the current barriers are to making it appear, and what to do in the meantime.

Why we crave the Holy Aggregator

The reasons we so badly want a single pane are pretty simple to understand. All of the reasons revolve around a central thesis: "simplified, centralized solutions [should/could] deliver better results at a lower cost." The reasoning behind this includes the following assertions:

  • Fewer tools reduce maintenance (operating expenses)
  • Fewer tools simplify training of personnel
  • Fewer tools reduce the requisite head count
  • A tighter focus on a smaller tool set improves operator efficiency in those tools
  • Fewer vendor relationships (these are exhausting)
  • Centralized storage and processing of data makes more data points available to a master detection engine, resulting in better detection and reporting

Describing the Utopia

The ship computers in Roddenberry's Star Trek franchise are great examples of what CISO dreams are made of. Every piece of ship data is available. Every discovered piece of external knowledge in "Star Fleet" is at the disposal of the computer. The computer has the ability to perform intuitive calculations (even the most complex). Finally, the computer is able to intuit the intention of simple queries from operators. If an intruder is detected by the ship's computer, it can quickly filter all false positive conditions, calculate the correct mitigation and notify the bridge crew. The captain can just say, "Computer, report!" and only the relevant details are given: "Klingon attack crew attempted to enter via hangar bay 2. I have isolated that area with force fields in accordance with defensive directive omega 6."

Why Utopia doesn't exist on Stardate 67041.6

All kidding aside, the Star Fleet example does give us the qualities a single pane of glass would need. I'd like to examine where current technology falls short on each of those qualities in the search for the Holy Aggregator.

Internal Data Centralization

Bringing all InfoSec data into a central store is getting a lot of focus these days. Solutions like Splunk, ArcSight and QRadar and other Security Information and Event Management (SIEM) platforms have been trying to do this for the last 10-15 years. The first challenge of getting data into these "haystacks" is that the diverse data sources have to be "connected." Formats like syslog have helped by providing a transmission medium for sending log events to the aggregators. There has been a lot of good work over the last decade in getting various outputs into data stores.
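As a minimal sketch of what that connection looks like in practice, here is how a host might forward an event to a central collector over syslog using Python's standard library. The collector address is a placeholder, and a real deployment would use whatever forwarder your SIEM ships with.

```python
import logging
import logging.handlers

# Hypothetical collector address; substitute your aggregator's syslog listener.
COLLECTOR = ("siem.example.com", 514)

# SysLogHandler speaks the syslog wire format over UDP, the common
# "transmission medium" most aggregators accept out of the box.
handler = logging.handlers.SysLogHandler(
    address=COLLECTOR,
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
)

logger = logging.getLogger("edge-firewall")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# The event arrives at the collector as a flat line of text; whether the
# aggregator can do anything intelligent with that text is the subject of
# the next section.
logger.info("DENY tcp src=203.0.113.7:51515 dst=10.0.0.5:22")
```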

Semantic Context

SIEMs and other data aggregators would do well to learn the lessons of the Semantic Web. The goal of the Semantic Web is to create standard vocabularies for various pieces of web content so they can be understood. A blog (like this one) should be marked as a data type of "blog" with attributes like "author" (yours truly in this case), "publication date," "summary," etc. The reason semantic classification is so important is that it provides a processing engine with the information it needs to calculate against the data and determine its relevance in different contexts.
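To carry that idea over to InfoSec data, here is a hypothetical contrast between a raw log line and the same event carried with a semantic vocabulary. The field names are purely illustrative, not from any published standard.

```python
# The same event, twice.

# 1) As an aggregator typically receives it: a flat string with no declared meaning.
raw_event = "2013-07-29T14:02:11Z sshd[4211]: Failed password for admin from 203.0.113.7"

# 2) Carried with a (hypothetical) semantic vocabulary: every attribute has a
#    declared meaning, so a processing engine can reason about it without guessing.
semantic_event = {
    "@type": "AuthenticationFailure",    # what kind of thing this is
    "observed": "2013-07-29T14:02:11Z",  # when it happened
    "service": "sshd",                   # which service reported it
    "account": "admin",                  # the account being targeted
    "source_ip": "203.0.113.7",          # where the attempt came from
}
```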

Many aggregators are not "understanding" the data as it is processed. Traditional relational databases are used to store the data in fixed fields. Aggregators have three levels of data storage and processing: 1) limit input to understood fields, keeping everything in context (fixed semantic processing); 2) accept all input without context or understanding (flexible object storage); 3) process all input and discover context from external vocabularies (flexible semantic processing). The starship computer is able to perform flexible semantic processing, but nothing on stardate 67041 (July 29, 2013) is able to do that. That leaves us with tools that can perform fixed semantic processing or flexible object storage.

Semantically aware solutions understand that different data attributes have different implications. Currently, these are the mile-deep/foot-wide solutions that are purpose built for processing specific (usually singular) data sets like AAA, NetFlow, file access, firewall logs, etc. These solutions tend to very efficiently store and process the data they are semantically aware of. Nearly every detection engine in InfoSec tends to be semantic in nature. Semantic engines have the capability of processing and storing data much more efficiently than their flexible object equivalents. Check out my recent blog entry comparing how StealthWatch (NetFlow semantic) and Splunk (flexible object) handle NetFlow. The only downside to the current state of industry evolution is that the data points for these products are much more limited than those of their flexible object storage counterparts.

Flexible object storage engines (Splunk is a great example) will accept any input and store the entry unaltered. They have no awareness of what they are storing and as a result cannot perform anything beyond custom user queries. These solutions tend to be a mile wide/foot deep. They tend to make good haystacks if you know what you are looking for, but the lack of context awareness makes them poor solutions for detection. They can store data from diverse sources but cannot understand what is being stored.
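A toy illustration of the trade-off, using made-up NetFlow-style records: a semantically aware engine parses flows into typed fields it can compute over, while a flexible object store keeps the raw lines and can only search them. The record format and field names here are invented for the example.

```python
from collections import Counter

# Made-up flow records for illustration only.
raw_lines = [
    "src=10.0.0.5 dst=198.51.100.9 bytes=1400",
    "src=10.0.0.5 dst=198.51.100.9 bytes=920000",
    "src=10.0.0.8 dst=198.51.100.9 bytes=300",
]

# Flexible object storage: keep the lines untouched. Searching is easy,
# but the engine has no idea what "bytes" means.
object_store = list(raw_lines)
matches = [line for line in object_store if "10.0.0.5" in line]

# Fixed semantic processing: parse into fields the engine understands,
# which makes questions like "who is moving the most data?" trivial.
def parse(line):
    return dict(pair.split("=") for pair in line.split())

flows = [parse(line) for line in raw_lines]
bytes_by_source = Counter()
for flow in flows:
    bytes_by_source[flow["src"]] += int(flow["bytes"])

print(bytes_by_source.most_common(1))  # [('10.0.0.5', 921400)]
```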

The obvious solution here is an engine that, as new data types are fed into it, analyzes the data, determines what it is, and works out how it fits with the other data points it is receiving. This can happen in the Star Trek universe easily enough, but for us less evolved humans such development is still years away. To allow for pervasive semantic vocabularies, either standards have to be adopted or proprietary vocabularies have to be made available as open source. Our industry has been very slow in this for largely capitalist reasons (money drives development, and cross-vendor collaboration does not generate near-term revenue).

Boolean is to Algorithmic as Yahoo! is to ...

Google is synonymous with web search not because it invented it but because of PageRank. Larry Page created an algorithm that calculated the relevance of a web page using a number of factors. The resulting PageRank determined the order in which results were delivered to searchers. This was fundamentally different from Yahoo!'s approach of providing a hierarchical directory of listings that a searcher would dig deeper and deeper into to find the correct site (Country -> State -> City -> Service type -> specialty, etc.).

Computers love boolean statements. Computers love logic gates. Human beings and the lives they live operate through algorithms. If you have read my blog for any period of time, you will know that I love drawing from the established wisdom of martial/physical security in discussing network/information security (see: School of Ft. Knox).

When processing threat data (in the physical or cyber world), two schools of thought can be used. The first is boolean processing. Boolean processing is the basis of both signature solutions and SIEM. Essentially, they take data points and build conditional statements that, if they hold true, trigger a notification or alarm. In the physical world: "If you see a person carrying a knife at the airport, detain him." SIEM processing may have conditions like "If an IDS event is triggered from a source AND the target is marked as vulnerable by the VA scanner OR the source is from an unfriendly nation" with an action like "block the communication and notify incident response." There are obvious problems with this approach. First, you have to know the threat criteria before the attack occurs. If the exploit has been modified to evade an IDS signature (this happens a lot), the SIEM misses detection (because the IDS did). If the vulnerability is not yet known or the VA scanner hasn't yet scanned it, the SIEM fails again. If the attacking host spoofs its source origin (through Tor, for example), it fails again. The problem with SIEM detection (and signature-based detection in general) is that it only catches the attacks that are very simple to detect.
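As a rough sketch (in no particular vendor's rule language), a boolean correlation rule like the one above might look like this. The event field names and the lists of vulnerable hosts and unfriendly sources are hypothetical inputs.

```python
def boolean_rule(event, vulnerable_hosts, unfriendly_sources):
    """Fires only when the pre-defined conditions line up exactly.

    If the IDS missed the modified exploit, the VA scanner hasn't scanned the
    target, or the attacker hid the true source, none of these conditions hold
    and the rule stays silent.
    """
    ids_hit = event.get("ids_signature") is not None
    target_vulnerable = event.get("dst") in vulnerable_hosts
    unfriendly_source = event.get("src") in unfriendly_sources

    if ids_hit and (target_vulnerable or unfriendly_source):
        return "block the communication and notify incident response"
    return None
```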

In the case of Richard Reid (the "Shoe Bomber"), he was questioned by Parisian authorities because he 1) had a British passport with no stamps in it (he told the embassy he washed his old one), 2) paid for his ticket in cash (most France-to-US travelers use a credit card), and 3) had no luggage for a two-week trip to Florida. None of these things alone is enough to pull a person off a plane, but calculated together they produce a threat indicator high enough to justify questioning him.

In network security, similar point systems can be taken from airport security and search engine rankings. An intelligent set of algorithms evaluating behavior (and assigning threat scores to a host) can catch not only the events that boolean rules are capable of catching but also evolving, targeted threats. (Be sure to note my blog from last year on detection methodologies.) An (oversimplified) algorithm to replace the previous boolean SIEM signature might be Threat Index (TI) = TI + [IDS Event Points] + [Target Vulnerable Points] + [Dangerous Location Points]. Then you alarm or take action when the Threat Index exceeds acceptable levels. The three metrics that can rate an aggregator are the mean time to know (MTTK) that an event has happened, the percentage of false positives (crying wolf), and the percentage of false negatives (missed breaches). These are easy to gauge in a red cell environment.
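Here is the same oversimplified Threat Index expressed as code, using the same hypothetical inputs as the boolean rule above but scoring them instead of gating on them. The point values and threshold are invented for illustration; a real system would tune them and fold in many more behavioral signals.

```python
from collections import defaultdict

# Illustrative point values only; a real deployment would tune these.
IDS_EVENT_POINTS = 40
TARGET_VULNERABLE_POINTS = 30
DANGEROUS_LOCATION_POINTS = 20
ALARM_THRESHOLD = 75

threat_index = defaultdict(int)  # accumulated score per source host

def score_event(event, vulnerable_hosts, unfriendly_sources):
    """TI = TI + [IDS Event Points] + [Target Vulnerable Points] + [Dangerous Location Points]."""
    host = event["src"]
    if event.get("ids_signature") is not None:
        threat_index[host] += IDS_EVENT_POINTS
    if event.get("dst") in vulnerable_hosts:
        threat_index[host] += TARGET_VULNERABLE_POINTS
    if host in unfriendly_sources:
        threat_index[host] += DANGEROUS_LOCATION_POINTS

    # Alarm on the accumulated host score, not on any single event.
    if threat_index[host] >= ALARM_THRESHOLD:
        return "alarm: host %s exceeded the acceptable threat index (%d)" % (host, threat_index[host])
    return None
```

Because the score accumulates per host rather than per event, a partial match (say, an IDS hit against a target with no known vulnerability) still raises that host's index, and repeated low-grade behavior can eventually cross the threshold.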

Current aggregator technologies use rudimentary boolean/signature detection that is ineffective. This approach produces both high false positive and high false negative rates because of the limited scope of threat detection (event-centric instead of host-centric). A single pane will never exist until boolean evolves to algorithmic, and that evolution can't occur without context/semantic awareness.

Star Trek Invented the iPad

Apple and Steve Jobs may get credit for putting iPads into the hands of global users, but Star Fleet officers had been demonstrating the power of an easy user experience (UX) on NBC since the sixties. Being able to intelligently process and store data (flexible semantic) is step one. Being able to intelligently analyze the data (algorithmically) is step two. The last component of a Holy Aggregator is an easy user experience. UX in an InfoSec conversation means providing incident responders with the data they need, when they need it, in the format they need, following their response processes. Flexible object solutions stink at this because they don't understand the data they are storing. There are very few developed workflows in them. Custom queries have to be created. Data should map to processes. One problem we have in InfoSec is that we don't document processes, which makes mapping user interfaces to those not-yet-existent processes impossible. The second problem is that vendors develop based on feature requirements (process syslog and NetFlow, output to a ticketing system) instead of those incident response processes. One of the best tests that can be run against an aggregator is the mean time to resolution (MTTR), in minutes, that it takes a power user to accomplish an investigative task (e.g., determine what data was lost in the recent breach).

What to do today?

If I didn't make it clear enough along the way, let me summarize: WE ARE VERY FAR AWAY FROM HAVING A HOLY AGGREGATOR!! So what to do now? The first step is accepting that what you need/want does not exist as one line item. Accepting reality makes it easier to find a solution. The next step is to stop letting analysts and vendors drive your solution development. I wrote an article earlier in the year describing the broken conversations between organizations, vendors and analysts. Do not go shopping for SIEM, AV, IPS and other things you're told you have to have. Instead, focus on your objectives. The SANS Critical Controls are a great resource for inventorying how you are addressing security issues. The ten domains of the (ISC)² CBK are another good checklist. Focus on the operational areas you need to build, not products that need to be purchased.

Focusing on these areas will prompt you to build processes for how to handle specific threat scenarios (armed man walks into the data center, spear phishing of the CEO, botnet infection from sales staff, etc.). Start buying tools as a result of what your processes need. Questions like "how can I reduce the time staff spends on memory forensics to determine the impact of an endpoint compromise?" may lead you to an expensive forensics solution that pays for itself in six months through recovered man-hours while also reducing risk. Buying an expensive forensics tool because a vendor or analyst said to is stupid and wasteful.

Building these processes is a huge pain in the ass. I know that. I suspect some managers are daydreaming of a Holy Aggregator because they don't want to deal with it. Others don't want to hire the head count (see: Time to Hire a Security Team). The reality of the current day is that a single pane of glass will only lull an organization into a false sense of security. Take the time to build processes based on well-constructed controls. Buy tools that enable those processes. If you wait for the Holy Aggregator to show up, you will be looking for a new line of work when your data is found on Pastebin.

Wrap Up

I know a single pane of glass for InfoSec operations is appealing. In the current state of technology, it is best to use context/semantic-aware data processors that can algorithmically look for threats. The interfaces need to support security operations and incident response workflows (make sure you have these documented before spending too much money). If you select tools based on workflows instead of comparative matrices, you can build an infrastructure that can respond to modern advanced threats.