Building a DevSpecOps Team

As I have had the opportunity to demonstrate our product to cybersecurity veterans, I am often asked, “How did your very small team do this when larger, well-funded teams cannot?” It is true: the WitFoo development team has never been larger than 5 active members at any time, and we have had only 10 contributors to the code-base. We don’t Frankenstein together open source code; we custom build it all. All told, our product consists of more than 4 million lines of proprietary code written by a handful of hard-hitting warrior developers. As we wrap our newest and grandest release, I’d like to share some insight into how we pulled it off.

Hypnosis of your Tech

We started WitFoo because we were moved by the pain we were seeing on the faces of our customers in previous endeavors. We knew that there had to be fundamental changes to how security software supported the craft. We decided we would study, listen and follow the needs of our front line investigators. We would build what they need to win against adversaries and to communicate with their broader business.

Lessons in InfoSec Graph Theory

One of the areas we research heavily at WitFoo is how to reduce the number of investigations our customers have to perform each day. Internally, we call this the “n” problem. Another area of focus is how to reduce the amount of time our customers spend on each investigation. We refer to this as the “t” problem. The lower we drive n and t, the more work our customers can accomplish each day.
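The relationship between the two problems can be sketched in a few lines: the total analyst time consumed per day is roughly n × t, so capacity improves by driving either number down. The figures below are illustrative assumptions, not WitFoo data.

```python
def daily_analyst_hours(n_investigations: float, t_hours_each: float) -> float:
    """Total analyst time consumed per day: the 'n' problem times the 't' problem."""
    return n_investigations * t_hours_each

# Hypothetical baseline: 100 investigations/day at 30 minutes each
baseline = daily_analyst_hours(n_investigations=100, t_hours_each=0.5)

# After driving n and t down: 40 investigations/day at 15 minutes each
improved = daily_analyst_hours(n_investigations=40, t_hours_each=0.25)

print(f"baseline: {baseline} analyst-hours/day")   # 50.0
print(f"improved: {improved} analyst-hours/day")   # 10.0
```

Because the two factors multiply, halving both n and t cuts the daily workload to a quarter, which is why both problems are worth attacking at once.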

People > Machines (Part Three)

Computer scientists love the idea of artificial intelligence (AI). It is the centerpiece of many mainstream science fiction works. It’s also a preferred buzzword of lazy vendors and marketers. Until computers can convince (trick) a reasonable human being that they are living beings (the Turing test), all claims of AI are misleading at best. In this installment, I won’t debunk specific claims of AI. Instead, we will examine the difference between how computers and humans think and the implications of those differences.

People > Machines (Part Two)

When I was learning how to troubleshoot and repair electronics in the Navy, I would sometimes challenge one of the instructors on how something worked. If I delved into a complicated subject, I was often told it worked on “FM,” which meant f***ing magic. That rarely stopped me, however, and I often found the concepts were not overly complicated, just not directly relevant to my training.

There is some FM in information security that I’d like to demystify as we examine how tools can enable and not hinder the craft. We’ll examine algorithms and machine learning in this installment.

People > Machines (Part one)

Cybersecurity Incident Response has only been a part of human history for a couple of decades. Over the short course of time, industry leaders, analysts and vendors have put a heavy focus on the importance of technology solving problems within the craft. In this series, we will examine the preeminent importance of the craftsman over his tools and the role tools should play in making the world safer.

Learning Foo

Learning Wit

Fail fast. It’s one of the Agile buzz phrases that gets thrown around a lot in software product organizations these days, particularly organizations trying to embrace the Lean/Agile approach to production. The term “fail fast” is grounded in the Lean concept of continuous learning. Lean theory contends that learning is not a singular event, but rather a continuous process of trial and error. The Lean approach advocates that the smaller the “set” of learning and the faster it takes place, the better. Thus, fail fast should really be “learn fast” or “learn something small fast,” but that’s not nearly as catchy. This is all grounded in the heavily researched area of human learning. Humans learn by trial and error. Lean simply says organizations should, too.

Evolution of Data

First, the nature of evolution discards noise. Much like the concept in biology, only fit, useful facts survive the evolution process. When exposed to more complex systems, noise goes the way of the dodo bird. A “possible SQL injection attack on MySQL” event becomes irrelevant when vulnerability reports show the targeted server isn’t running MySQL. As data becomes a more mature, evolved object the irrelevant events fall away.
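The MySQL example above can be sketched as a simple survival test: an alert survives only if the asset it targets actually runs the software it claims to exploit. The event shape and asset inventory below are hypothetical illustrations, not WitFoo’s actual schema.

```python
# Hypothetical asset inventory built from vulnerability/service reports
assets = {
    "10.0.0.5": {"services": {"nginx", "postgresql"}},  # no MySQL here
    "10.0.0.9": {"services": {"mysql"}},
}

# Hypothetical raw events, each naming the software it would require to matter
events = [
    {"target": "10.0.0.5", "signature": "possible SQL injection on MySQL", "requires": "mysql"},
    {"target": "10.0.0.9", "signature": "possible SQL injection on MySQL", "requires": "mysql"},
]

def survives(event: dict) -> bool:
    """An event is 'fit' only if the targeted host runs the required software."""
    host = assets.get(event["target"], {})
    return event["requires"] in host.get("services", set())

evolved = [e for e in events if survives(e)]
print(evolved)  # only the 10.0.0.9 event remains; the other goes the way of the dodo
```

The point of the sketch is the direction of travel: as events are exposed to more context (asset data here), the irrelevant ones are discarded rather than carried forward into investigations.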

Failure Reports

When I was leading the Network Security Group at the US Naval Postgraduate School, I was overwhelmed with the degree of failure we experienced. The amount of events, complexity of investigations and immature security infrastructure created an environment of perpetual failure. After gathering the basic business metrics I discussed in Metering Incident Response 101 I decided it was time to push the problem up the chain of command.

Metering Incident Response 101

A core tenet of success in any endeavor is defining, collecting and analyzing core metrics. Incident Response teams can only develop plans that lead to success when success can be defined and metered. Understanding and collecting two key metrics can aid in defining, metering and reporting on success.
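As a rough sketch, assuming the two key metrics are the ones this series calls “n” (investigations per day) and “t” (hours per investigation), they can be metered from a simple log of closed investigations. The field names and sample data below are illustrative only.

```python
from statistics import mean

# Hypothetical log of closed investigations: (date, hours_spent)
closed = [
    ("2023-05-01", 0.75), ("2023-05-01", 1.5), ("2023-05-01", 0.25),
    ("2023-05-02", 2.0), ("2023-05-02", 0.5),
]

# Group hours by day so we can average the daily investigation count
by_day: dict[str, list[float]] = {}
for day, hours in closed:
    by_day.setdefault(day, []).append(hours)

n = mean(len(v) for v in by_day.values())          # average investigations per day
t = mean(h for v in by_day.values() for h in v)    # average hours per investigation

print(f"n = {n:.1f} investigations/day, t = {t:.1f} hours each")
```

Even this crude pair of numbers, tracked over time, is enough to tell whether a response plan is actually moving the needle.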

Origin of WitFoo

In 1995, I started my Navy training as an Aviation Electronics Technician. I spent more than a year learning electrical theory, how to use sophisticated tools (like time-domain reflectometers) and the logic associated with troubleshooting avionics. I was ready to go mano a mano against any aircraft that was daft enough to challenge my acumen.

Sony Breach Thoughts

While I don't like "vendor dogpiles" every time there is a mainstream cybersecurity breach, they do serve as good opportunities to change InfoSec paradigms. This article discusses the value of monitoring with enforcement, as well as repairing or creating response processes.