
June 22, 2010

Intrusion detection: doing it wrong

Quite a few thick volumes have been written on the topic of securing corporate environments - but most of them boil down to the following advice:
  1. Reduce your attack surface by eliminating non-essential services and sensibly restricting access to data,
  2. Compartmentalize important services to lower the impact of a compromise,
  3. Keep track of all assets and remediate known vulnerabilities in a timely manner,
  4. Teach people to write secure code and behave responsibly,
  5. Audit these processes regularly to make sure they actually work.
We have an array of practical methodologies and robust tools to achieve these goals - but we also have a pretty good understanding of where this model falls apart. As epitomized by Charlie Miller's goofy catchphrase, "I was not in your threat model", the reason for this is two-fold:
  • You will likely get owned, by kids: reasonably clued people with some time on their hands are (and for the foreseeable future will be) able to put together a fuzzer and find horrible security flaws in most of the common server or desktop software in a matter of days. Modern, large-scale enterprises with vast IT infrastructure, complex usability needs, and a diverse internal user base, are always extremely vulnerable to this class of attackers.

    As a feel-good measure, this discussion is often framed in terms of high-profile vulnerability trade, international crime syndicates, or government-sponsored cyberwarfare - but chances are, the harbinger of doom will be a bored teenager, or a geek with an outlandish agenda; they are less predictable than foreign governments, too - so in some ways, we should be fearing them more.

  • Compartmentalization will not save you: determined attackers will take their time, and will get creative if needs be. Compartmentalization may buy a couple of days, but simply can't be designed to keep them away forever, yet keep the business thriving: as witnessed by a number of well-publicized security incidents, design compromises and poor user judgment inevitably create escalation paths.
Past a certain point, proactive measures begin to offer diminishing returns: throwing money at the problem will probably never get you to a point where a compromise is unlikely, and the business can go on. This is not a cheering prospect - but something we have to live with.

The key to surviving a compromise may lie in the capability to detect a successful attack very early on. The attackers you should be fearing the most are just humans, and have to learn about the intricacies of your networks, and the value of every asset, as they go. These precious hours may give you the opportunity to recover - right before an incident becomes a disaster.

This brings us to the topic of intrusion detection - a surprisingly hard and hairy challenge in the world of information security. Most of the detection techniques at our disposal today are inherently bypassable; this is particularly true for the bulk of the tricks employed by most of the commercial AV, IDS, IPS, and WAF systems I know of. And that's where the problem lies: because the internals of these tools are essentially public knowledge, off-the-shelf intrusion detection systems often amount to a fairly expensive (and often by itself vulnerable!) tool to deter only the dumbest of attackers. A competent adversary, prepared in advance or simply catching the scent of a specific IDS toolkit, is reasonably likely to work around it without breaking a sweat.

The interesting - and highly contentious - question is what happens when the design of your in-house intrusion detection system becomes a secret. Many of my peers would argue this is actually harmful: in most contexts, security-by-obscurity does nothing to correct the underlying problems, and merely sweeps them under the rug. Yet, I am inclined to argue that in this particular case, it offers a qualitative difference. Here's why:

Let's begin by proposing a single, trivial anomaly detection rule, custom-tailored for our operating environment (and therefore reasonably sensitive and unlikely to generate false positives); for example, it could be a simple daemon that takes notice of execve() calls with stdin pointing directly to a network socket - a common sign of server-targeted shellcode. When the architecture is not shared with common commercial tools, external attackers stand a certain chance of tripping this check, and a certain chance of evading it - but this is governed almost solely by dumb luck, and not by their skill. The odds are not particularly reassuring, but they are a starting point.

(Now, an insider stands a better chance of defeating the mechanism - an unavoidable if less common problem - but a rogue IT employee is an issue that, for all intents and purposes, defies all attempts to solve it with technology alone.)
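The socket-stdin check described above can be sketched in a few lines. This is a minimal illustration, not the implementation the post has in mind: it polls the Linux /proc filesystem rather than hooking execve() directly (a real daemon would more likely use something like auditd), and the helper names are invented for the example.

```python
import os
import re

# A /proc/<pid>/fd/N symlink that points at a socket reads "socket:[<inode>]".
SOCKET_LINK = re.compile(r"^socket:\[\d+\]$")

def fd_is_socket(link_target: str) -> bool:
    """True if a /proc fd symlink target denotes a network socket."""
    return bool(SOCKET_LINK.match(link_target))

def suspicious_pids():
    """Yield PIDs whose stdin (fd 0) points directly at a socket -
    the telltale sign of shellcode spawned inside a network server."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            target = os.readlink(f"/proc/{pid}/fd/0")
        except OSError:
            continue  # process already exited, or we lack permission
        if fd_is_socket(target):
            yield int(pid)

if __name__ == "__main__" and os.path.isdir("/proc"):
    for pid in suspicious_pids():
        print(f"ALERT: pid {pid} has stdin wired to a socket")
```

In a real environment, a few legitimate services (inetd-style daemons, for instance) would match this pattern and need to be whitelisted - which is exactly the sort of environment-specific tailoring the post is arguing for.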

Let's continue further down this road: perhaps also introduce a simple tool to identify unexpected interactive sessions within encrypted and non-encrypted network traffic; or even a tweaked version of /bin/sh that alerts us to unusual -c or stdin payloads. Building on top of this, we can proceed to business logic: say, checks for unusual patterns in database queries, or for queries coming from workstations belonging to users not usually engaged in customer support. Each of these checks is trivial, and stands only an average chance of detecting a clued attacker. Yet, as the chain of tools grows longer, and the number of variables that need to be guessed perfectly right increases, the likelihood of evading detection - especially early in the process - becomes extremely low. Simplifying a bit, the odds of strolling past ten completely independent, 50% reliable checks are just 1 in 1024; it does not matter whether the attacker is the best hacker in the world or not (unless he is also clairvoyant).
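The 1-in-1024 figure follows directly from multiplying independent probabilities; a tiny script makes the arithmetic explicit (the 50% per-check detection rate is the illustrative figure from the text, not a measured one):

```python
from fractions import Fraction

def evasion_odds(n_checks: int, catch_prob: Fraction) -> Fraction:
    """Probability of slipping past n independent checks, each of which
    catches the attacker with probability catch_prob."""
    return (1 - catch_prob) ** n_checks

# Ten independent, 50%-reliable checks:
print(evasion_odds(10, Fraction(1, 2)))  # 1/1024
```

The independence assumption is doing real work here: checks that share a blind spot (say, all keyed off the same log source) multiply far less favorably, which is another argument for diversity over polish.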

For better or worse, intrusion detection seems to be an essential survival skill - and I think we are all too often doing it wrong. A successful approach depends on the uniqueness and diversity - and not necessarily the complexity - of the tools used; the moment you neatly package them and share the product with the world, your IDS becomes a $250,000 novelty toy.

Sadly, large organizations often lack the expertise, or just the courage, to get creative. There is a stigma of low expectations attached to intrusion detection in general, to security-by-obscurity as a defense strategy, and to maintaining in-house code that can't generate pie charts on a quarterly basis.

But when you are a high-profile target, defending only against the dumb attackers in a world full of brilliant ones - some of them driven by peculiar and unpredictable incentives - strikes me as a poor approach in the long run.


  1. This is great stuff and I totally agree. At OWASP I've been working on this concept for the last couple of years. I encourage you or others interested to take a look at the OWASP AppSensor Project. I'd love to incorporate new ideas.

    We are adding these "traps" (aka detection points) within the web application (e.g in the code, not an external WAF approach) to detect malicious behavior and then take action against the attacker (log out, lock account, etc).
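A detection point of the kind the commenter describes might look like the following sketch - the threshold, event names, and lockout response are illustrative assumptions for this example, not part of the actual AppSensor codebase:

```python
from collections import defaultdict

trip_counts = defaultdict(int)  # per-user count of tripped detection points
LOCKOUT_THRESHOLD = 3           # hypothetical policy

def detection_point(user: str, event: str) -> str:
    """Record a suspicious in-application event and pick a response."""
    trip_counts[user] += 1
    if trip_counts[user] >= LOCKOUT_THRESHOLD:
        return f"lock account {user}"
    return f"log event '{event}' for {user}"

# e.g. called from a request handler when input contains SQL meta-characters
print(detection_point("alice", "SQL meta-characters in form field"))
```

The point of living inside the application, rather than in an external WAF, is that the check can use context a proxy never sees - who the user is, what they are allowed to do, and what "normal" looks like for them.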


  2. This is great material and a fantastic point of view.

    Our HoneyPoint Security Server and HoneyPage work is exactly along this line and integrates well into the OWASP work above. - @lbhuston

    Great blog post!

  3. If this was ten years ago, I would echo a hearty amen. A few things seem to have changed. Yes, it is a teenager with time on his hands, but all too often today he is in the employ, directly or indirectly, of organized crime, or driven by political motivation (China/Russia).

    Dr. Eric Cole says protection is ideal, but detection is a must. If you use the six-step incident handling model - preparation, detection, containment, eradication, restoration, and lessons learned - you reach the interesting observation that without detection there *is* no incident. Ummmm. And early detection is ideal, but data mining is also valuable. Every organization with means and valuable IP should be encouraged to hire a bright, curious person, set up a Niksun or NetWitness or NetIntercept, and look for the lows and slows - the patterns of recon and/or trial and error.

    Some of this stuff should have been automated by now. Stiennon's paper, Intrusion Detection is dead is now 7 years old. Maybe it is time to write SIEM is dead. I think Hightower was on the right track. When I look at their blog, I realize the Prism folks know what to do, they just haven't fully implemented it. I realize that it sounds like I disagree with you when I say this can be coded, but if I can take common cause off the table, that leaves special cause and I will take those odds even in the age of fuzzers.

    A final note, I am not sure about the Advanced in Advanced Persistent Threat, but the Persistent part is well said. It is going to come and come and come every single day. I arranged to have our attack logs bundled up and sent to my email account every day and it primarily serves to remind me of the persistence. Truly, eternal vigilance is the price of liberty. Peace.

  4. The way I like to put it for people who think one is talking about security-through-obscurity when one proposes private detection:

    Burglar alarms can always be bypassed if the intruder knows the details. You don't publish your burglar alarm details.

  5. Intrusion Detection, Intrusion Prevention, and the like are concepts that imply a lot
    more than the tool itself; so even if, say, Snort, Bro, Prelude, Suricata, and others
    have their codebases published, it doesn't mean they are ineffective.

    Public eyes can help fix issues that would otherwise probably still exist.

    We all know that as technology evolves, challenges change,
    but basic principles stay the same.

    Ten years ago, rule-based IDS detection focused mainly on shellcode, aside from
    a few well-known strings like root(0) and phf?Qalias=%0A/bin/cat%20/etc/passwd;

    then came public ways to challenge their behavior:

    And things evolved.

    Nowadays you will hardly find any effective IDS that relies only on rule logic.

    Protocol encapsulation is another problem that is hard to weigh.

    In the last few years, new challenges have emerged as the rise of the client-side
    exploit has become a staggering issue.

    And from the DMZ logic, security infrastructure has had to evolve towards more
    global, unified threat monitoring - which has been, for the last few years,
    and still is, a very current challenge in the field.

    But as bandwidth grows, data flow follows, and like any weapon/defense system,
    it's an arms race.

    I like your ideas of simple tools, but creating "detection" mechanisms is one thing;
    unifying them and scaling them to other environments and setups is in itself a challenge.

    Implementing intrusion detection/prevention in an environment that has not been
    built to handle threats is probably more challenging than most people think, and
    it requires more than security brains to understand security.

    We could transpose this example to the real world by comparing Ramstein Air Base
    and the Baghdad Green Zone.

  6. I agree, and so did Marcus Ranum nearly ten years ago, when he advocated almost exactly the same ideas with similar examples. Instrument your systems based on what you know about them and their use. Generate events from that instrumentation and start figuring out how to drive towards actionable data and/or some automated responses.

  7. I think this is all common sense, or perhaps that's just because I hold the same views :)

    I think it was Schneier who said that the only advantage the defender has is domain knowledge.

    I think I've compared it to the network equivalent of assert()ing impossible conditions in the programming world.

  8. This is what I meant by relying upon "stealth and surprise" in a recent dailydave post. Here are a couple of thoughts on IDS, and on keeping detection methods secret:
    And most importantly,

    Marcus Ranum told me long ago ('97) about how he had once recompiled ls to immediately shut down the system if executed as root, and how he learned to type "echo *" instead.

    The list of "simple IDS tricks" is very, very long. ;-)

  9. Re: "Burglar alarms can always be bypassed if the intruder knows the details. You don't publish your burglar alarm details."

    And if you do get hit, and they remain undetected, your list of suspects is very short; basically anyone who could reasonably have known all the details necessary.

    Kerckhoffs's principle is great when you can accomplish it; when on an even playing field, it is not always an attainable design goal.

    BTW, NIPS systems that simply block connections after a signature match allow people to remotely map out the signature set.