In the first part of this series I mentioned that the Parisian airport security personnel had an opportunity to cut the attack attempt off at the knees but were prevented from doing so because they lacked some critical information. They were very suspicious of "The Shoe Bomber" because he had purchased his ticket with cash and was embarking on a multi-week trip across the Atlantic without luggage. As they began to question him, they discovered that his passport had no entry stamps on it. This was cause for greater concern because he was departing France on a blank British passport. The passport was blank because Richard Reid had requested a new one from the British Embassy, reporting that his previous passport had been badly damaged in the wash. Had he presented his original passport (this was at least his third), the security personnel would have been alarmed by previous travel to Pakistan (for terror training across the border in Afghanistan). Additionally, had the investigating personnel had access to British police or court records, they would have noted several periods of incarceration in British prisons. Had they been able to see British anti-terrorism intelligence, they would have known that he was a devoted member of the Finsbury Park Mosque in London, where the radical imam Abu Hamza al-Masri regularly (and publicly) advocated jihadist action against the West. In 2001, they didn't have access to any of this information, so the poor French investigators, unable to get a confession of conspiracy out of Reid, sent him on his merry way to continue his holy war.

From Logs to Flows

The problem the French airport officials faced is much the same as the one we face in modern network security. The problem is not inadequate information; the problem is processing and sharing a vast amount of data.

Data Overload

Network intelligence can be collected from a variety of data sources, ranging from the log files on thousands of networked computers to the raw data traversing the wires. In many networks the raw data of the communications occurring on the network can exceed several terabytes (10^12 bytes) each day. When faced with the sobering risks we outlined in earlier discussions and pushed up against the impossible costs of storing such data, it is easy to understand the circumstances that led the Parisians to reluctantly throw their hands up and release Reid.
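
To put such figures in perspective, here is a rough back-of-the-envelope sketch of how quickly raw traffic piles up on even a single monitored link (the 1 Gbps link speed and 30% average utilization are illustrative assumptions, not measurements from any particular network):

    # Rough estimate of raw capture volume on one monitored link.
    # The link speed and utilization below are illustrative assumptions.
    link_speed_bps = 1_000_000_000   # 1 Gbps link
    avg_utilization = 0.30           # assume 30% average utilization
    seconds_per_day = 86_400

    bytes_per_day = link_speed_bps * avg_utilization / 8 * seconds_per_day
    print(f"~{bytes_per_day / 1e12:.1f} TB of raw traffic per day")  # ~3.2 TB

Multiply that by every link worth watching and the storage bill becomes painful in a hurry.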

Clean (and process) as you go

The conventional approach to network monitoring is "store the data as long as fiscally possible so that it is available for analysis when needed." This is the type of model that was in place in Paris when Richard Reid attempted to board AA 63. Innovative researchers in counter-terrorism, as well as in network security, realized that dealing with large datasets required taking a page out of the McDonald's training manual. I have had the pleasure of working on projects with several friends who, at some point in our tasks, chimed in with "we should clean as we go." After hearing that several times, I made some inquiries and discovered that each of these friends had a common past employer: McDonald's. At any busy McDonald's franchise it is easy to notice that a backlog is the most feared threat. As orders come in, they need to be completely processed to make room for the next one. Team members need to process the orders and "clean as they go."

Following the 9/11 attacks (and continuing past the shoe bombing), deliberate efforts were made to allow all relevant threat information to be analyzed in near-real-time. Just like operations at my local McDonald's, the intel/orders came in, were properly processed, and appropriate actions were taken. This approach allows physical security experts to make more informed decisions because they are processing more data. As we saw with Richard Reid, intelligence is not rated by how much data is stored; value is established by the volume of data that is intelligently processed.
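
In software terms, "clean as you go" is streaming analysis: evaluate each piece of intelligence as it arrives and keep only the verdict, rather than warehousing everything for a later batch run. Here is a minimal sketch of that idea in Python (the record fields and the byte threshold are hypothetical, chosen only to show the pattern):

    BYTES_THRESHOLD = 50_000_000  # hypothetical per-flow byte ceiling

    def process_stream(flow_records):
        """Evaluate each record as it arrives ("clean as you go")
        rather than storing raw data for later batch analysis."""
        for record in flow_records:
            if record["bytes"] > BYTES_THRESHOLD:   # hypothetical check
                print(f"alert: unusually large flow from {record['src']}")
            # the record itself is not kept; only the verdict survives

    # Example: records arriving one at a time from a collector
    process_stream([
        {"src": "10.0.0.5", "bytes": 12_000},
        {"src": "10.0.0.9", "bytes": 75_000_000},
    ])

The point is that nothing backs up: each order is handled as it comes in.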

Let’s get raw

In network security we are left with a simple hypothesis for success: the quality of actionable network intelligence depends on the breadth and depth of the processed data and on the ability of the processor to accurately classify that data. The best kind of data to feed into the processor is data in its least processed (most raw) form. In terms of breadth, complete visibility would require every communication going into, out of, or around the network to be fed into the processor. In terms of depth, it would require every bit of those communications to be fed into the processor.

Probing for Data

Traditional network monitoring solutions require network data to flow through them (inline) or for a copy of the traffic to be fed into them (spanned). This requires many probing devices (sometimes hundreds) to be distributed across the network. Signature-based processors require this type of communication visibility so that patterns can be matched in the data stream. This data collection method provides the highest level of data depth, but achieving complete coverage (breadth) may be technically impossible, if not fiscally implausible. In terrorism-intelligence terms, it would be akin to analyzing every word spoken in every conversation worldwide.

Enter NetFlow

If you are a regular follower of this blog, you have probably noticed that I made it through my first two and a half entries without mentioning our beloved and revered technology. Like all good introductions, this one required proper context. Without further ado, I would like to introduce NetFlow. NetFlow is a technology created by Cisco that allows a network device to export a record of every communication passing through it to a processor or logging device. Since all network traffic already passes through network devices, NetFlow provides a means to the widest breadth of coverage. If we were to draw a comparison to physical security, it would be a technology that logs the participants of every conversation, how long they spoke, the purpose of the conversation (and more), without logging the actual words (the data) themselves.
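
As a concrete illustration, a single flow record carries metadata along these lines (a simplified sketch loosely modeled on the classic NetFlow v5 fields; the field names and example values are illustrative, not an exact export format):

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        """Simplified sketch of the metadata a flow record carries.
        Note what is absent: the packet payloads themselves."""
        src_addr: str    # who started the conversation
        dst_addr: str    # who they talked to
        src_port: int
        dst_port: int    # hints at the purpose (e.g., 443 = HTTPS)
        protocol: int    # e.g., 6 = TCP, 17 = UDP
        packets: int     # how much was said
        bytes: int
        start_ms: int    # when the conversation began...
        end_ms: int      # ...and when it ended

    record = FlowRecord("192.0.2.10", "198.51.100.7", 49152, 443, 6,
                        42, 61_440, 1_007_000_000, 1_007_004_500)

Everything needed to characterize the conversation is there; the words themselves are not.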

Detection Methods Recap

As we discussed in the first installment in this series, "Detection Methods," behavioral detection does not require the full data streams that are necessary for signature detection. As we discussed earlier, while signature-based detection has value, it has limitations in detecting emerging and advanced threats. Signature-based detection mechanisms have a place as an ancillary security source, but to handle modern threats, behavioral detection is critical to network protection.
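
To make the distinction concrete, here is a minimal sketch of the behavioral idea: rather than matching payload bytes against known signatures, compare a host's current activity against its own historical baseline (the baseline values and the three-standard-deviation threshold are illustrative choices, not a description of how StealthWatch actually works):

    import statistics

    def is_anomalous(history, current, sigma=3.0):
        """Flag a host whose current byte count deviates sharply
        from its own historical baseline."""
        if len(history) < 2:
            return False                    # not enough data to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        return stdev > 0 and abs(current - mean) > sigma * stdev

    # Example: a host that normally moves ~2 MB per hour suddenly moves 500 MB
    baseline = [2_100_000, 1_900_000, 2_050_000, 2_200_000, 1_950_000]
    print(is_anomalous(baseline, 500_000_000))  # True: worth a closer look

No signature of any specific exploit is required; the deviation from normal behavior is the tell.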

StealthWatch and NetFlow

When NetFlow data (providing complete breadth of network visibility) is processed by a behavioral intelligence engine like StealthWatch (if there were anything else "like StealthWatch"), the result is reliable, actionable network intelligence without the expense of storing and duplicating full data streams. It provides the type of comprehensive visibility that the Parisian authorities needed in the physical realm. They didn't need to know what Reid learned or said at the terrorist training camps, the militant mosque, or in his British cell; they just needed to know he had been there.

Wrap Up

Just like the airport investigators of December 2001, we can only thwart our attackers if we are able to intelligently process all relevant data. The data on Reid had been collected, but it hadn't been properly processed or subsequently presented to the investigators who needed it. Secure networks are built to process all relevant network data as it becomes available so that actionable intelligence is produced. In the next installment we will examine the different types of criminals hoping to become better acquainted with your network.