
February 12, 2012

It's this time of the year again

Yeah, welcome to the 2012 edition of the full disclosure debate!

As usual, there are reasonable people who disagree about the merits of non-coordinated disclosure; a more recent trend is to debate the value of developing and publishing exploits, even for already patched bugs. The short-term risks are pretty clear to any sensible person: there is robust data to show that the availability of functioning exploits drives a good chunk of low-tier, large-scale attacks.

The long-term benefits are more speculative. I like to think of it as a necessary evil: non-disclosure does not prevent sophisticated and resourceful attackers from developing their own exploits and going after high-value targets, but it quickly leads to complacency when it comes to fixing the underlying problems and monitoring your infrastructure. We would not have Windows Update, silent autoupdates in Chrome, or Mac OS X ASLR improvements were it not for the constant stream of public exploits and the accompanying attacks.

The cost-benefit calculation here is mostly a matter of personal taste, and we won't be able to settle it any time soon. I'm a bit on the fence, too: I am at best ambivalent about the merits of exploit packs and frameworks such as CORE Impact or Metasploit. I am also deeply uncomfortable with exploit trading, a trend all-too-eagerly embraced and supported by the industry.

But the merits of the debate aside, there is a disturbing propensity for parties who struggled with security response, and have sometimes adopted openly hostile tactics to suppress security research, to be on the front lines of the anti-disclosure movement. This is why I couldn't help but find parallels between Brad Arkin's recent statements, and a position taken ten years ago by Scott Culp. Brad says:

"My goal isn't to find and fix every security bug. I'd like to drive up the cost of writing exploits. But when researchers go public with techniques and tools to defeat mitigations, they lower that cost. [...] Too much attention is being paid these days to responding to vulnerability reports instead of focusing on blocking live exploits."

"[We need to] work closer with the research community to curb the publication of information that can help malicious hackers. [...] Something hard becomes very very easy. These exploits and techniques are copied, adapted and modified very cheaply."

We all agree that bug-free products are not a realistic goal, but reducing the availability of information is probably an ill-advised one, too. If it's still possible to write an exploit, and merely "expensive" to do so - for example, because the knowledge of how to bypass ASLR is not widespread - then indeed, unskilled attackers will be less likely to go after your mom's credit card information; but going after her bank will be fair game.

As for unintended consequences: in this scenario, the bank no longer has to deal with a steady stream of nuisance malware, so they probably care less about patching and monitoring, and the attacker is more likely to succeed.

Sure, one shouldn't be running on a vulnerability response treadmill. We can escape it to some extent simply by making the process more agile and lightweight. It is also very important to reduce the likelihood of malicious exploitation, but we should do so by tweaking factors other than the "cost" of acquiring domain-specific knowledge. We should embrace proactive approaches such as sensible coding practices, developer education, fuzzing, or tools such as ASLR, JIT randomization, and sandboxing - and when something slips through the cracks, we need to be thankful for the data point and simply make our solution more robust. Let's not obsess about what specific flavor of disclosure policy the researchers believe in: they haven't sold their findings to the highest bidder, and that's already pretty good.

"We have patched hundreds of CVEs over the last year. But, very, very few exploits have been written against those vulnerabilities. Over the past 24 months, we’ve seen about two dozen actual exploits."

That's a frighteningly high number of exploits, by the way.

1 comment:

  1. I see Brad Arkin's perspective, but I see more frustration than a genuine appeal in it. Adobe has had a tough time with security lately, but the people and organizations (end users) have had an even tougher time coping with these security bugs in Adobe every other week.

    Why not adopt a bug bounty program and buy some of that intelligence Mr. Arkin?