
June 11, 2010

Browser-side XSS detectors of doom

The prevalence of cross-site scripting - an unfortunate consequence of how the web currently operates - is one of the great unsolved challenges in the world of information security. Short of redesigning HTML from scratch, browser developers are not particularly well-positioned to fix this issue; but understandably, they are eager to at least mitigate the risk.

One of the most publicized efforts along these lines is the concept of browser-side, reflected XSS detectors: the two most notable implementations are David Ross' XSS filter (shipping in Internet Explorer 8) and Adam Barth's XSS Auditor (WebKit browsers - currently in Safari 5). The idea behind these tools is very simple: if query parameters seen in the request look suspiciously close to any "active" portions of the rendered page, the browser should assume foul play - and step in to protect the user.
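
To illustrate the idea with a textbook reflected case - the URL and markup below are made up:

http://example.test/search?q=<script>alert(1)</script>
...
<p>Results for: <script>alert(1)</script></p>

Because the query value reappears verbatim in an executable context, the filter assumes that the script was probably not meant to be there, and steps in before it gets to run.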

Naturally, nothing is that simple in the browser world. The major design obstacle is that the check has to be passive - active probes are likely to cause persistent side effects on the server. Because of this, the detector can merely look for correlation, but not confirm causation; try this or this search in Internet Explorer 8 to see a canonical example of why this distinction matters.

Since passive checks inevitably cause false positives, the authors of these implementations defaulted to a non-disruptive "soft fail" mode: when a suspicious pattern is detected, the browser will still attempt to render the page - just with the naughty bits selectively removed or defanged in some way.

While a fair number of issues with XSS detectors have been pointed out in the past - from hairy implementation flaws to trivial bypass scenarios - the "soft fail" design creates some more deeply rooted problems that may affect the safety of the web in the long run. Perhaps the most striking example is this snippet, taken from a real-world website:

<script>
...
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
...
</script>
...
[ sanitized attacker-controlled data goes here ]

The data displayed there is properly escaped; under normal circumstances, this page is perfectly safe. But now, consider what happens if the attacker-controlled string is } body { color: expression(alert(1)) } - and ?q=<script> is appended to the end of the URL. The filter employed in MSIE8 will neutralize the initial <script> tag; with the script block gone, the <style> mentioned in the comment is suddenly treated as real markup, putting the browser in a special parsing mode. This, in turn, causes the somewhat perverted CSS parser to skip any leading and trailing text, and to interpret the attacker-controlled string as a JavaScript expression buried in a stylesheet.
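
To make the failure mode easier to follow, here is a simplified, hypothetical reconstruction of the page before and after filtering - the markup around the user data is invented, and the exact way the filter defangs the tag is an implementation detail:

<!-- As served: "<style>" appears only inside a JavaScript comment, and the
     user-supplied string contains no markup, so nothing here is executable. -->
<script>
// Safari puts <style> tags into new <head> elements, so
// let's account for this here.
</script>
<p>} body { color: expression(alert(1)) }</p>

<!-- After ?q=<script> trips the filter, the opening <script> tag is defanged.
     With no script block left to contain it, the literal <style> in the former
     comment opens a real stylesheet; the CSS parser skips the stray text around
     the braces and evaluates expression(alert(1)) - an IE-specific extension
     that runs JavaScript from within CSS. -->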

Eep. This particular case should be fixed by the June security update, but the principle seems dicey.

The risk of snafus like this aside, the fundamental problem with XSS detectors is, quite simply, that client-side JavaScript is increasingly depended upon to implement security-critical features. The merits of doing so may be contested by purists, but it's where the web is headed - and no real alternatives exist, either: a growing number of applications use servers merely as a dumb, ACL-enforcing storage backend, with everything else implemented on the client side; mechanisms such as localStorage and widget manifests actually remove the server component altogether.

To this client-heavy architecture, XSS detectors pose an inherent threat: they make it possible for third-party attackers to selectively tamper with the execution of the client-side code, and cause the application to end up in an unexpected, inconsistent state. Disabling clickjacking defenses is the most prosaic example; but more profound attacks against critical initialization or control routines are certainly possible, and I believe they will happen in the future.
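
A hedged sketch of that prosaic case - the site, path, and parameter name below are invented, and the parameter value is shown unescaped for readability:

<!-- victim.example/account: the page's only clickjacking defense is this
     inline frame buster. -->
<script>if (top != self) top.location = self.location;</script>

<!-- A hypothetical attacker's page frames the target and echoes the exact
     script text in a bogus query parameter; a reflected-XSS detector running
     in its non-blocking "soft fail" mode may decide the script was injected,
     strip it, and leave the page quietly frameable. -->
<iframe src="http://victim.example/account?fake=<script>if (top != self) top.location = self.location;</script>"></iframe>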

The skeptic in me is not entirely convinced that XSS filters will ever be robust and reliable enough to offset the risks they create - but I might be wrong; time will tell. Until this is settled, several other people and I have pleaded for an opt-in strict filtering mode that prevents the page from being rendered at all when a suspicious pattern is detected. Now that it is available, enabling it on your site is probably a good idea.
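
If memory serves, opting in boils down to a single response header (a value of 0, conversely, switches the filter off for the page altogether):

X-XSS-Protection: 1; mode=block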

2 comments:

  1. It's reasonable to consider what happens when XSS filter blocking behavior is no longer fine-grained. For the IE XSS Filter in particular, the level of granularity achieved with matching can stay constant even while blocking granularity changes to best meet the needs of security and compatibility.

    So imagine that this scenario has played out and the argument against fine-grained blocking has become a no-op. At that point, it will still be problematic for web applications to rely upon security-critical features that are based on assumptions about the behavior of markup or active content.

    Frame busters just "seemed to work" against ClickJacking. They also seemed to be the only viable solution available to web applications. This resulted in an assumption that frame busters were a best practice that could be used to defeat ClickJacking -- if they didn't work, what would?

    But as we now know, this mitigation has proven unreliable, and for different reasons in different browsers. The "Busting Frame Busting" paper presents some great mechanisms that can be leveraged to defeat frame busting. I know many researchers have their own favorite techniques as well. It's unrealistic for each browser to take an update to "make it work" every time a deficiency is revealed in a security mitigation implemented in markup or active content. JavaScript, CSS, or perhaps even SVG might be repurposed to achieve some perceived security goal, simply because they appear to function. Even if the solution works today, who's to say these mitigations will hold as time passes and browsers evolve?

    So any web app security mitigation that relies upon client-side behavior should be backed up by a contract, guarantee, or declarative security feature that dictates not only what the web application is responsible for, but also the responsibilities of the underlying platform. This is what is being achieved with X-FRAME-OPTIONS and it's also an inherent part of such efforts as Mozilla CSP. These mitigations provide great clarity as new browser features are developed and threat modeled.
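
    For reference, the declarative contract amounts to a one-line response header; DENY and SAMEORIGIN are the two values defined today:

    X-FRAME-OPTIONS: DENY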

    As an example, consider a new browser feature undergoing threat modeling. A web security subject matter expert would reasonably identify if the feature bypasses X-FRAME-OPTIONS. They would also not have a hard time demonstrating the need to resolve such a threat. Contrast that to the scenario where a specific new feature invalidates a sub-species of frame buster in use on a particular site. It will not only be hard to identify this issue, but it will also be hard to prove it's a platform issue vs. an issue in the particular script.

    I am optimistic that when the next "ClickJacking" sort of scenario rears its ugly head, the security community will be less trusting of mitigations that do not rely on well-defined platform guarantees. That will hopefully help to avoid the finger-pointing we've seen as the frame-buster based ClickJacking mitigations melted down.

  2. Frame busting is not something I am particularly concerned with: you are completely right that it failed in a number of unexpected ways before (but we shouldn't jump in to criticize web developers for relying on it: they had no real alternatives for a while).

    So, while XSS filters make developing a robust framebuster even harder, it probably does not matter much. Today, the browsers that ship with XSS filters also ship with X-Frame-Options support, so clued engineers are not any worse off.

    A more fundamental concern is simply with client-side logic that was never meant to be a security kludge, and that implements legitimate application functionality instead. A naive example is a web-based text editor, where one script block loads the document to be edited and initializes editor variables, and another script block handles auto-saving your work; disabling that first block is quite likely to cause problems, and it's probably not fair to expect web developers to be prepared for this. Given the current trends in web application development, I think this behavior is going to be problematic.
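
    A contrived sketch of what I mean - the markup is made up, and loadDocumentFromServer() / saveDocumentToServer() simply stand in for whatever transport the application actually uses:

    <textarea id="editor"></textarea>
    <script>
    // Block 1: on startup, load the existing document into the editor.
    document.getElementById('editor').value = loadDocumentFromServer();
    </script>
    ...
    <script>
    // Block 2: every 30 seconds, auto-save whatever the editor currently holds.
    setInterval(function() {
      saveDocumentToServer(document.getElementById('editor').value);
    }, 30000);
    </script>

    If a third party can get the filter to strip the first block, the editor simply comes up empty - and the second block dutifully saves that empty document over the real one.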

    That said, I am not really against XSS filtering; I am just in favor of strict (mode=block) behavior by default, but I can also see why the way these filters work makes it unlikely for vendors to go there.

    In non-strict mode, I do fear that the benefits may not outweigh the risks, but as noted, I take myself with a grain of salt. The number of bypass scenarios reported in all the implementations (probably exceeding the number of framebuster bypass tricks in existence) slightly undermines the message, but it will probably get better.
