
July 06, 2010

Hi! I'm a security researcher, and here's your invoice.

It always struck me as a simple deal: there are benefits to openly participating in the security research community - peer recognition and job opportunities. There is also a cost of doing it as a hobby - loss of potential income in other pursuits. After having made a name for themselves, some people decide that the benefits no longer offset the costs - and stop spending their time on non-commercial projects. Easy, right?

Well, this is not what's on the minds of several of my respected peers. Sometime in 2009, Alex Sotirov, Charlie Miller, and Dino Dai Zovi announced that there would be no more free bugs; in Charlie's own words:

"As long as folks continue to give bugs to companies for free, the companies will never appreciate (or reward) the effort. So I encourage you all to stop the insanity and stop giving away your hard work."

The three researchers did not feel adequately compensated for their (unsolicited) research, and opted not to disclose this information to vendors or the public - but continued the work in private, and sometimes boasted about the inherently unverifiable, secret finds.

Is this a good strategy? I think it is important to realize that many vendors, being driven by commercial incentives, spend exactly as much on security engineering as they think is appropriate - and this is influenced chiefly by external factors: PR issues, contractual obligations, regulatory risks. Full disclosure puts many of the poor performers under intense public scrutiny, and may force them to try harder and hire security talent (that's you!).

Precisely because of this unwanted pressure, vendors do not inherently benefit from such unsolicited services, and will probably not work with you to nurture them: if you "threaten" a vendor by promising to essentially stop being a PR problem (unless compensated) - well, don't be surprised if they do not call back with a counter-offer anytime soon.

Having said that, there is an interesting way one could make this work: the "pay us or else..." approach - where the "else" part may be implied to mean:

  • Selling the information to unnamed third parties, to use it as they see fit (with potential consequences to the vendor's customers),

  • Shaming the vendor in public to suggest negligence ("company X obviously values customer safety well below our $10,000 asking price"),

  • Simply telling the world, without giving the vendor a chance to respond, if your demands are not met.

There's only one problem: I think these tricks are extremely sleazy. There are good and rather uncontroversial reasons why disclosing true information about an individual is often legal, but engaging in blackmail never is; the parallels here are really easy to draw.

This is why I am disappointed by the news of VUPEN apparently adopting a similar strategy (full article) - and equally disappointed by how few people called it out:

"French security services provider VUPEN claims to have discovered two critical security vulnerabilities in the recently released Office 2010 – but has passed information on the vulnerabilities and advice on mitigation to its own customers only. For now, the company does not intend to fill Microsoft in on the details, as they consider the quid pro quo – a mention in the credits in the security bulletin – inadequate.

'Why should security services providers give away for free information aimed at making paid-for software more secure?' asked [VUPEN CEO] Bekrar."

Here's the thing: security researchers don't have to give any information away for free; but if you need to resort to arm-twisting tactics to sell a service, you have some serious soul searching to do.

5 comments:

  1. Hey lcamtuf,

    I'm really glad that you are blogging now and have enjoyed your well thought-out opinions. Keep them coming. But I did want to counter one key argument in this post.

    "Full disclosure puts many of the poor performers under intense public scrutiny, and may force them to try harder and hire security talent (that's you!)."

    I'd wager that fully disclosing bugs in a large vendor's products without notifying them first will not inspire them to hire you, but it may inspire them to hire more security researchers that aren't you. It is unfortunately more likely to make them attack you publicly as we have recently seen.

    I agree that public pressure on the vendors is a good way to get them to put forth more effort, but full disclosure isn't the only way to achieve this. I think the ZDI-style (ok, I think eEye did it first) "upcoming advisories" page does a decent job of this, and it'd be great to open up an "advisory countdown" page to everyone so that the press and public can get an idea of just how many reported bugs are in a particular vendor's queue to be fixed. That might scare people away from using a certain vendor's products, but it still depends on a cadre of researchers doing this work in their spare time for free, paid by ZDI/iDefense bounties, or working for one of the companies that employs vulnerability researchers full-time to find and report bugs in commercial software. One could hardly argue that simply keeping tabs on how long it takes vendors to fix reported vulnerabilities places any users at risk; instead, it performs a valuable public service.
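
    For illustration, a minimal sketch of the bookkeeping such a countdown page could do - the vendor names, dates, and record format below are all made up for this example:

        from collections import defaultdict
        from datetime import date

        # Made-up advisory records: (vendor, date reported, date fixed or None).
        reports = [
            ("VendorA", date(2010, 1, 15), None),
            ("VendorA", date(2010, 3, 2), date(2010, 6, 1)),
            ("VendorB", date(2009, 11, 20), None),
        ]

        # Tally each vendor's still-unfixed reports and their age in days.
        open_ages = defaultdict(list)
        for vendor, reported, fixed in reports:
            if fixed is None:
                open_ages[vendor].append((date.today() - reported).days)

        for vendor, ages in sorted(open_ages.items()):
            print("%s: %d unfixed, oldest report %d days old"
                  % (vendor, len(ages), max(ages)))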

    It also gives other researchers an idea of just how many other bugs of a similar type (e.g. local priv escalation, browser-based code exec, etc.) are in the queue. And they may realize that cutting in line in front of all those other vulns is just plain rude unless that particular bug is being exploited in the wild - in which case, immediate full disclosure allows AV/HIPS vendors to produce a signature to stop the attacks way faster than the vendor ever could.

    My theory at this point is that greater Internet security will come through increased situational awareness of actual attacks on the Internet and quickly mobilizing a variety of defenses to counter the actual attacks. Patching individual vulnerabilities in software products that contain so many is such a piecemeal approach that I doubt it will ever succeed alone. What vulnerability disclosures do reveal is which products are built using shoddy software construction materials and practices.

    Cheers,

    -Dino

  2. The risk of professional retribution is real, although let's not overestimate it: despite all the PR buzz, almost every major software vendor employs at least some security people who 0-dayed or criticized them in the past.

    I think the key is simply to stay reasonable and focused; the moment you seem to be fueled by irrational hatred ("$foo is evil and sucks!"), rather than a sensible disagreement over policies, it is a career-limiting move - but for reasons unrelated to how you disclose bugs.

    As for ZDI-style "upcoming vulnerabilities" lists - they are interesting... but let's face it, very few people within the industry know about them and track them; the media and the general public are completely unaware of their existence. When vendor X releases a cumulative security update, the press never digs deep enough to find out how old these vulnerabilities were - and bulk vulnerability purchasers don't make a fuss about it, because it's in their interest to have exclusive access to vulnerabilities for as long as possible.

  3. Thanks for the nice non-flamebait post about this difficult subject.

    Aside from what they should do from some moral perspective, if I were one of these vendors with millions of exploitable customers, I might try to be a little nicer to the security research community.

    For example, there was recently a case involving a researcher who discovered a MS bug, ostensibly through his personal research. He contacted MSRC to negotiate the disclosure of the bug. While we don't know the details, outward appearances are that when agreement wasn't reached with MSRC, they assailed his daytime employer through the industry press. This is hardly an encouragement for anyone considering contacting MSRC with bugs in the future.

    I find it curious that it is unfunded independent security researchers who are expected to operate from these high moral standards, whereas corporations are expected only to minimally comply with the law in the pursuit of their business endeavors. You can bet that none of these vendors would hesitate to make a buck from inside information about their customers or their competition, yet researchers are expected to take a vow of poverty. Let us not forget who profited from the creation and the original sale of these bugs, who turned their overly-trusting consumers into victims, and who inexplicably cannot now find the resources to ship the fixes in any reasonable time frame. For them to now complain about an open disclosure process seems a bit like someone complaining angrily about the tech support they get with the software they didn't pay for.

    Except it's worse than that. Some vendors seem to behave like a family member with an abuse problem. Agreeing to "responsibly" delay disclosure may amount to helping them cover it up, thus enabling and prolonging this wretched condition. While it's perhaps a noble desire to want to minimize the world's exposure to an in-the-wild exploit, the instinct to secrecy may be based on the conceited (and often incorrect) assumption that this researcher is the only one to have discovered the bug. Although there may not be any web page defacements resulting from this vulnerability, the bug could be selectively used for targeted attacks by the kind of bad guys who prefer to keep their exploits secret.

    In any case, those who are vulnerable at least deserve to know about their predicament so that they may deploy whatever countermeasures may be available or at least know what to look out for.

    - Marsh

  4. There are multiple problems at play here.

    There are so many problems with any of the disclosure types -- they are all broken. It is simply too complex.

    There might be an ISO method to report a finding to a vendor, but the ISO standard will not make an ethical decision. It is up to you, the vulnerability finder, to make that decision.

    I propose a new model that I call "Full Responsible Disclosure".

    "Full Responsible Disclosure" happens when an honest researcher finds an honest bug. He/she reports the bug to the vendor and the vendor integrates this person into their on-going efforts to fix classes of bugs like the one found. The discloser is paid an equal amount of the profits to fix that class in the product permanently (i.e. the bug is "stamped out") along with all of the other FTE or consultant team members (i.e. appdev and appsec managers, analysts, developers, testers, etc).

    Basically, if it takes 20k man-hours to stamp out SQLi after JoeBobHacker finds a SQLi in the new Google product, and the average pay is $50/hour with a team of 10 people, then JoeBobHacker walks away with $100k (one equal tenth of the $1M total cost) -- but only after the bug is stamped out. There is no sense paying JoeBobHacker if JoeBobHacker can go find another SQLi in the same product later that same afternoon after he gets paid.
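
    A quick back-of-the-envelope sketch of that payout, using the hypothetical figures above:

        # Hypothetical numbers from the example: 20k man-hours at $50/hour,
        # split equally across a team of 10.
        total_hours = 20000   # man-hours to stamp out the bug class
        hourly_rate = 50      # average pay, dollars per hour
        team_size = 10        # people working on the fix

        total_cost = total_hours * hourly_rate   # $1,000,000
        finder_share = total_cost // team_size   # $100,000 -- one equal share,
                                                 # paid only once the class is
                                                 # stamped out in that product/rev
        print("JoeBobHacker walks away with: $%d" % finder_share)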

    Additionally, if there is credit -- the credit should go to the team that stamped out that class of bugs, not the individual who originally reported a single bug. Note that JoeBobHacker doesn't get any money for finding the original bug, but only for stamping out that class of bugs in that specific Google product/rev.

    A vendor may decide it's too expensive to try to stamp-out a class of bugs. In that case, JoeBobHacker gets nothing because he/she has contributed effectively nothing.

    If JoeBobHacker cannot (for lack of skills) or is unwilling to help Google stamp out that SQLi in Product-XYZ, then he should be able to recommend a third party to fill in his place. For example, JoeBobHacker contracts Gotham Digital Science to help Google stamp out SQLi in that product/rev. JoeBobHacker needs to work out some sort of finder's fee with Gotham. Perhaps these numbers will be similar to those utilized by iDefense, ZDI, etc. today.

    Unlike Dino, I do not believe that we can continue to "stop attacks way faster than the vendor ever could" as a long-term strategy. This is why ZDI and similar initiatives will lose more and more relevance and ground over time. The current exploit-farm combined with delivery tools like Fragus and botnets like ZeuS makes for a very nasty future where IPS/AV/FW/etc are irrelevant. Exploits will be discovered in the wild less and less over time (and so will malware). Software protection (especially obfuscation) is going to prevent a lot of us from doing the jobs that we take for granted now. Security via obscurity is going to take a new twist: it's going to be used against us, and we're not going to be able to stop it.

    We must prevent the vulnerabilities through appsec programs. Integrating honest bug finders into these appsec programs is as easy as doing it in open-source. Except that you have to pay them.

  5. That was indeed the immediate thought I had with the VUPEN announcement - that they were becoming a paid source of zero-days, available to anyone willing to fork over their fee.
