Part Three: Attack Payloads
In Part 2 of this series, we evaluated the different detection methods that can be used to foil attacks. In this installment, I’d like to turn our focus to the payloads that can be delivered once an exploit succeeds.
Differentiating Exploit and Payload
To start off, we need to understand the difference between an exploit and a payload. The exploit is the method of circumventing security; the payload is what the attacker does with the access gained.
Previously we learned that attacks can be detected by either examining the agent’s composition (signature-based detection) or its activity (behavior-based detection). Security signatures can be written to detect either an exploit or a payload.
There are a variety of payloads that can be delivered on an exploited network resource.
Perhaps the most widely reported network attack payloads are those that deal with service disruption. A denial-of-service (DoS) attack is a payload designed to prevent a server from performing its function. In 1996, the infamous denial-of-service attack aptly called “The Ping of Death” was released onto the budding Internet. By sending a ping packet far larger than the legal maximum (more than 1,000 times the size of a normal ping payload), an attacker could leave a vulnerable system completely unresponsive until it was rebooted. A more common type of DoS attack involves sending more service requests than a server can handle. When geographically dispersed computers coordinate a denial-of-service attack on a host, it is called a distributed denial-of-service (DDoS) attack.
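To make the malformed-packet idea concrete, here is a minimal sketch of a signature-style check, assuming the classic rule that an IP datagram (including reassembled fragments) may not legally exceed 65,535 bytes. The function name and parameters are illustrative, not taken from any real IDS.

```python
# Hypothetical "Ping of Death" signature: an IP datagram, even after
# fragment reassembly, may not legally exceed 65,535 bytes, so an ICMP
# echo request larger than that is malformed by definition.
MAX_IP_DATAGRAM = 65_535  # maximum legal IP datagram size in bytes

def is_ping_of_death(reassembled_size: int, is_icmp_echo: bool) -> bool:
    """Flag an ICMP echo request whose reassembled size exceeds the IP limit."""
    return is_icmp_echo and reassembled_size > MAX_IP_DATAGRAM

print(is_ping_of_death(65_536, True))  # oversized ping -> True
print(is_ping_of_death(84, True))      # normal-sized ping -> False
```

Because the malformation is in the packet’s composition, a fixed rule like this is enough; no baseline of normal behavior is needed.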
It is possible to detect “Ping of Death”-type DoS attacks through the use of signatures, but behavioral/anomaly detection is necessary to detect the “over-request” type of DoS. An example would be thousands of hacked computers (known as a botnet or “zombie network”) being directed by their hacker to request the homepage of their target as fast as they can. Signature and heuristic detection can’t work here because the composition of the traffic is valid (requesting the home page); it’s the deviation from the normal frequency of requests that reveals the malevolence of the communications.
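The frequency-deviation idea can be sketched as a toy rate monitor. This is an illustrative model, not a production detector: the window size, baseline rate, and deviation multiplier are all assumed values that a real system would learn from observed traffic.

```python
from collections import deque
import time

class RequestRateMonitor:
    """Toy behavioral detector: flag a client whose request rate deviates
    far from an assumed baseline. All thresholds here are illustrative."""

    def __init__(self, window_seconds=10.0, baseline_rps=2.0, multiplier=50):
        self.window = window_seconds
        # Requests/second considered abnormal for this client.
        self.threshold = baseline_rps * multiplier
        self.timestamps = deque()

    def record(self, now=None) -> bool:
        """Record one request; return True if the client now looks abnormal."""
        now = time.monotonic() if now is None else now
        self.timestamps.append(now)
        # Drop requests that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        rate = len(self.timestamps) / self.window
        return rate > self.threshold
```

Note that the monitor never inspects request contents; each request is perfectly valid on its own, and only the volume over time triggers the alarm.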
A successful network attack may be nothing more than rolling a new recruit into the fold. Sometimes sophisticated attacks are used to gain control over a high-value server. When hackers gain complete control over a computer, they call it “owning” the machine. More often than “owning” a critical server, attackers try to recruit pawns. Across the globe, hackers have installed malicious software (malware) onto thousands of computers so that they have an army of machines standing by to receive commands. A single computer infected with this type of payload is called a “bot.” Collectively, all computers under the control of their attacker are called a “botnet” or sometimes a “zombie network.” Botnets can be used for a variety of purposes, ranging from sending spam and scamming online advertisers (click fraud) to DDoS attacks and covert network infiltration.
As enterprises have begun to allow employees to bring their own laptops, tablets and smartphones onto the network (a policy known as “bring-your-own-device” or BYOD), attackers have gained a way to place bots within otherwise inaccessible networks. Much like the challenge of determining when a previously reliable soldier has become a traitor, detecting bots on the network is difficult while they are inactive. Signature-based detection has to rely on a list of known “bad” Internet addresses to detect an internal host beaconing to its nefarious controller (known as a command-and-control server). Since criminals in cyberspace can relocate much more easily than their physical-world counterparts, these lists (and the signatures built on them) quickly become ineffective. Behavioral-based detection, by contrast, can spot new types of communications coming out of newly infected hosts, providing an opportunity for incident response.
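The blocklist approach, and its weakness, can be shown in a few lines. This is a minimal sketch: the addresses below come from the reserved documentation ranges (TEST-NET), not from any real threat feed, and a real C2 server that relocates to a new address would sail straight past this check.

```python
# Signature-style command-and-control detection: compare each outbound
# destination against a blocklist of known-bad addresses. The entries
# here are reserved documentation addresses, used purely as stand-ins.
import ipaddress

KNOWN_BAD = {
    ipaddress.ip_address("203.0.113.7"),    # TEST-NET-3 example address
    ipaddress.ip_address("198.51.100.23"),  # TEST-NET-2 example address
}

def is_known_c2(destination: str) -> bool:
    """Return True if an outbound destination matches the blocklist."""
    return ipaddress.ip_address(destination) in KNOWN_BAD

print(is_known_c2("203.0.113.7"))  # True
print(is_known_c2("192.0.2.1"))    # False -- a relocated C2 slips through
```

The second lookup illustrates the article’s point: the moment the controller moves to an address not yet on the list, the signature goes silent, while a behavioral system watching for new outbound beaconing patterns still has a chance.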
Loose Lips . . .
If having computers on your network under the control of an unknown hacker scares you, the prospect of your proprietary information being uploaded to a server belonging to a criminal (or competitor) might make your hair fall out. When a computer on your network sends information it shouldn’t out of the network, it is called “data loss.” With today’s criminal penalties for disclosure of personal, credit card or medical information, any unauthorized disclosure of data can permanently hamstring an organization.
Signature-based detection of data loss can look for SSN and credit card patterns in outbound communication. It can also look for the names of files being transferred out of the network. Hackers soon realized that the same encryption technology created to protect people from their attacks could be used to foil signature-based data loss protection. By using encrypted communication channels to send protected information out of the network, attackers render signature-based analysis tools blind. Since behavioral detection doesn’t need to look inside the transmitted data, it can flag computers holding protected information that transfer data abnormally. Instead of throwing a signature-based alarm (e.g., “Suspected SSN Sent”), a behavioral detection system would report an “Abnormal Data Disclosure” or “Possible Data Loss” alarm by observing the deviation from normal traffic usage.
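A pattern-matching scan of the kind described above can be sketched with two regular expressions. This is a deliberately simplified illustration: real data-loss-prevention engines add validation (such as Luhn checks on card numbers) and many more patterns, and the alarm strings here are invented for the example.

```python
# Hedged sketch of signature-based data-loss detection: scan outbound
# text for a U.S. SSN pattern (3-2-4 digits) and a 16-digit card-number
# pattern. Purely illustrative; real DLP engines do far more validation.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def scan_outbound(text: str) -> list:
    """Return the signature alarms raised for one outbound message."""
    alarms = []
    if SSN_PATTERN.search(text):
        alarms.append("Suspected SSN Sent")
    if CARD_PATTERN.search(text):
        alarms.append("Suspected Card Number Sent")
    return alarms

print(scan_outbound("employee SSN 123-45-6789"))  # ['Suspected SSN Sent']
```

Feed this function an encrypted blob instead of plaintext and it returns nothing at all, which is exactly how encryption blinds signature-based tools while a behavioral system can still notice the abnormal volume of the transfer itself.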
We’ve taken a quick look at what cybercriminals can do once they bypass security mechanisms. In the next installment of this series, we will examine how having better intelligence than the attackers can keep networks secure.