
November 30, 2014

afl-fuzz: nobody expects CDATA sections in XML

I made a very explicit, pragmatic design decision with afl-fuzz: for performance and reliability reasons, I did not want to get into static analysis or symbolic execution to understand what the program is actually doing with the data we are feeding to it. The basic algorithm for the fuzzer can be summed up as randomly mutating the input files and gently nudging the process toward new state transitions discovered in the targeted binary. That discovery part is done with the help of lightweight and extremely simple instrumentation injected by the compiler.
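
For the curious, the injected logic boils down to something along these lines (a simplified C rendition for illustration - the real instrumentation is a short assembly stub, and names such as coverage_hook and MAP_SIZE are placeholders here). Every branch point gets a compile-time random ID, and the shared map records which (previous, current) branch pairs have been seen, which is what lets the fuzzer notice new state transitions:

#define MAP_SIZE 65536

/* In the real thing, this is a shared memory region examined by afl-fuzz
   after every execution of the target. */
static unsigned char shared_mem[MAP_SIZE];

static unsigned int prev_location;

/* Invoked with a compile-time random ID at every instrumented branch point;
   the map cell is keyed by the (previous, current) pair, so a previously
   unseen transition lights up a previously untouched byte. */
void coverage_hook(unsigned int cur_location) {
  shared_mem[(cur_location ^ prev_location) % MAP_SIZE]++;
  prev_location = cur_location >> 1;
}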

I had a working theory that this would make the fuzzer a bit smarter than a potato, but I wasn't expecting any fireworks. So, when the algorithm managed to not only find some useful real-world bugs, but to successfully synthesize a JPEG file out of nothing, I was genuinely surprised by the outcome.

Of course, while it was an interesting result, it wasn't an impossible one. In the end, the fuzzer simply managed to wiggle its way through a long and winding sequence of conditionals that operated on individual bytes, making them well-suited for the guided brute-force approach. What seemed perfectly clear, though, was that the algorithm wouldn't be able to get past "atomic", large-search-space checks such as:

if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

...or:

if (header.magic_value == 0x12345678) goto terminate_now;

This constraint made the tool less useful for properly exploring extremely verbose, human-readable formats such as HTML or JavaScript.

Some doubts started to set in when afl-fuzz effortlessly pulled out four-byte magic values and synthesized ELF files while testing programs such as objdump or file. As I later found out, this particular example is often used as a benchmark for complex static analysis or symbolic execution frameworks. But still, guessing four bytes could have been just a happy accident: with fast targets, the fuzzer can pull off billions of execs per day on a single machine, which leaves plenty of room for dumb luck.

(As an aside: to deal with strings, I had this very speculative idea of special-casing memory comparison functions such as strcmp() and memcmp() by replacing them with non-optimized versions that can be instrumented easily. I have one simple demo of that principle bundled with the fuzzer in experimental/instrumented_cmp/, but I never got around to actually implementing it in the fuzzer itself.)
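
To illustrate the principle (a minimal sketch in the spirit of what's bundled in experimental/instrumented_cmp/, not necessarily the exact code shipped there), a deliberately naive strcmp() replacement turns a single pass/fail comparison into one branch per character - branches that the instrumentation can see and reward byte by byte:

/* Deliberately unoptimized: every character check is its own conditional
   branch, so each correctly guessed byte produces a new, visible transition. */
int instrumented_strcmp(const char *a, const char *b) {
  while (*a && *a == *b) { a++; b++; }
  return (unsigned char)*a - (unsigned char)*b;
}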

Anyway, nothing quite prepared me for what the recent versions were capable of doing with libxml2. I seeded the session with:

<a b="c">d</a>

...and simply used that as the input for a vanilla copy of xmllint. I was merely hoping to stress-test the very basic aspects of the parser, without getting into any higher-order features of the language. Yet, after two days on a single machine, I found this buried in test case #4641 in the output directory:

...<![<CDATA[C%Ada b="c":]]]>...

What the heck?!

As most of you probably know, CDATA is a special, differently parsed section within XML, separated from everything else by fairly complex syntax - a nine-character sequence of bytes that can't be realistically discovered by just randomly flipping bits.

The finding is actually not magic; there are two possible explanations:

  • As a recent "well, it's cheap, so let's see what happens" optimization, AFL automatically sets -O3 -funroll-loops when calling the compiler for instrumented binaries, and some of the shorter fixed-string comparisons will actually just be expanded inline. For example, if the stars align just right, strcmp(buf, "foo") may be unrolled to:
    cmpb   $0x66,0x200c32(%rip)        # 'f'
    jne    4004b6 
    cmpb   $0x6f,0x200c2a(%rip)        # 'o'
    jne    4004b6 
    cmpb   $0x6f,0x200c22(%rip)        # 'o'
    jne    4004b6 
    cmpb   $0x0,0x200c1a(%rip)         # NUL
    jne    4004b6 
    
    ...which, by virtue of having a series of explicit and distinct branch points, can be readily instrumented on a per-character basis by afl-fuzz.

  • If that fails, it just so happens that some of the string comparisons in libxml2's parser.c are done with a bunch of macros that compile to similarly-structured code, as spotted by Ben Hawkes (see the sketch after this list). This is presumably done so that the compiler can optimize the comparisons into a tree-style parser, whereas a linear sequence of strcmp() calls would lead to repeated and unnecessary comparisons of the already-examined chars.

    (Although done by hand in this particular case, the pattern is fairly common for automatically generated parsers of all sorts.)
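
To make the second explanation concrete, here is a paraphrased sketch of what such macros look like (modeled on the pattern in parser.c, not copied verbatim; looks_like_cdata() is a made-up call site). Every character of the "<![CDATA[" prefix gets its own && clause, and therefore its own branch for the instrumentation to latch onto:

#define CMP4(s, c1, c2, c3, c4) \
  (((const unsigned char *)(s))[0] == (c1) && ((const unsigned char *)(s))[1] == (c2) && \
   ((const unsigned char *)(s))[2] == (c3) && ((const unsigned char *)(s))[3] == (c4))

#define CMP9(s, c1, c2, c3, c4, c5, c6, c7, c8, c9) \
  (CMP4(s, c1, c2, c3, c4) && CMP4((s) + 4, c5, c6, c7, c8) && \
   ((const unsigned char *)(s))[8] == (c9))

/* Made-up call site, shaped like the way the parser recognizes CDATA: */
int looks_like_cdata(const unsigned char *cur) {
  return CMP9(cur, '<', '!', '[', 'C', 'D', 'A', 'T', 'A', '[');
}
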
The progression of test cases seems to support both of these possibilities:

<![
<![C b="c">
<![CDb m="c">
<![CDAĹĹ@
<![CDAT<!
...

I find this result a bit spooky because it's an example of the fuzzer defiantly and secretly working around one of its intentional and explicit design limitations - and definitely not something I was aiming for =)

Of course, treat this first and foremost as a novelty; there are many other circumstances where similar types of highly verbose, text-based syntax would not be discoverable by afl-fuzz - or where, even if the syntax could be discovered through some special-cased shims, it would be a waste of CPU time to do it with afl-fuzz rather than with a simple syntax-aware, template-based tool.

(Coming up with an API to make template-based generators pluggable into AFL may be a good plan.)

By the way, here are some other gems from the randomly generated test cases:

<!DOCTY.
<?xml version="2.666666666666666666667666666">
<?xml standalone?>

November 24, 2014

afl-fuzz: crash exploration mode

One of the most labor-intensive portions of any fuzzing project is the work needed to determine if a particular crash poses a security risk. A small minority of all fault conditions will have obvious implications; for example, attempts to write or jump to addresses that clearly come from the input file do not need any debate. But most crashes are more ambiguous: some of the most common issues are NULL pointer dereferences and reads from oddball locations outside the mapped address space. Perhaps they are a manifestation of an underlying vulnerability; or perhaps they are just harmless non-security bugs. Even if you prefer to err on the side of caution and treat them the same, the vendor may not share your view.

If you have to make the call, sifting through such crashes may require spending hours in front of a debugger - or, more likely, rejecting a good chunk of them based on not much more than a hunch. To help triage the findings in a more meaningful way, I decided to add a pretty unique and nifty feature to afl-fuzz: the brand new crash exploration mode, enabled via -C.

The idea is very simple: you take a crashing test case and give it to afl-fuzz as a starting point for the automated run. The fuzzer then uses its usual feedback mechanisms and genetic algorithms to see how far it can get within the instrumented codebase while still keeping the program in the crashing state. Mutations that stop the crash from happening are thrown away; so are the ones that do not alter the execution path in any appreciable way. The occasional mutation that makes the crash happen in a subtly different way will be kept and used to seed subsequent fuzzing rounds later on.
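
The decision rule is easy to express in code. The sketch below is purely illustrative - the names and types are made up for this post, not taken from the afl-fuzz source:

#include <stdbool.h>

/* A mutated input is kept as a new seed only if the target still crashes AND
   the crash takes a measurably different path through the instrumented code. */
bool keep_for_next_round(bool still_crashes, bool trace_changed) {
  if (!still_crashes) return false;  /* mutation made the crash go away         */
  if (!trace_changed) return false;  /* same crash, same path - nothing gained  */
  return true;                       /* subtly different crash: use as a seed   */
}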

The beauty of this mode is that it very quickly produces a small corpus of related but somewhat different crashes that can be effortlessly compared to pretty accurately estimate the degree of control you have over the faulting address, or to figure out whether you can get past the initial out-of-bounds read by nudging it just the right way (and if the answer is yes, you probably get to see what happens next). It won't necessarily beat thorough code analysis, but it's still pretty cool: it lets you make a far more educated guess without having to put in any work.

As an admittedly trivial example, let's take a suspect but ambiguous crash in unrtf, found by afl-fuzz in its normal mode:

unrtf[7942]: segfault at 450 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

When fed to the crash explorer, the fuzzer took just several minutes to notice that by changing {\cb-44901990 in the converted RTF file to printable representations of other negative integers, it could quickly trigger faults at arbitrary addresses of its choice, corresponding mostly-linearly to the integer set:

unrtf[28809]: segfault at 88077782 ip 0805062b sp bff00210 error 4 in unrtf[8048000+1c000]
unrtf[26656]: segfault at 7271250 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

Given a bit more time, it would also almost certainly notice that choosing values within the mapped address space gets it past the crashing location and permits even more fun. So, automatic exploit writing next?
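
For reference, a bug of roughly this shape would explain the behavior. This is a hypothetical reconstruction, not the actual unrtf source - cmd_cb() and color_table are made-up names - but it shows how a control-word parameter used directly as a table index makes the faulting address track the supplied integer almost linearly, and why an in-range value would sail right past the crash site:

#include <stdlib.h>

struct color { int r, g, b; };

static struct color color_table[256];
static struct color current_bg;

/* Hypothetical handler for the \cbN control word - illustration only. */
void cmd_cb(const char *param) {
  int idx = atoi(param);            /* "{\cb-44901990" -> idx = -44901990       */
  current_bg = color_table[idx];    /* read at table base + idx * sizeof(struct */
}                                   /* color), i.e. an attacker-chosen offset   */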

November 10, 2014

Exploitation modelling matters more than we think

Our own Krzysztof Kotowicz put together a pretty neat site called Bughunter University. The first part of the site deals with some of the most common non-qualifying issues that are reported to our Vulnerability Reward Program. The entries range from mildly humorous to ones that still attract some debate; it's a pretty good read, even if just for the funny bits.

Just as interestingly, the second part of the site also touches on topics that go well beyond the world of web vulnerability rewards. One page in particular deals with the process of thinking through, and then succinctly and carefully describing, the hypothetical scenario surrounding the exploitation of the bugs we find - especially if the bugs are major, novel, or interesting in any other way.

This process is often shunned as unnecessary; more often than not, I see this discussion missing, or done in a perfunctory way, in conference presentations, research papers, or even the reports produced as the output of commercial penetration tests. That's unfortunate: we tend to be more fallible than we think we are. The seemingly redundant exercise in attack modelling forces us to employ a degree of intellectual rigor that often helps spot fatal leaps in our thought process and correct them early on.

Perhaps the most common fallacy of this sort is the construction of security attacks that fully depend on exposure to pre-existing risks of a magnitude comparable to or greater than the danger posed by the new attack. Familiar examples of this trend may include:
  • Attacks on account data that can be performed only if the attacker already has shell-level access to said account. Some of the research in this category deals with the ability to extract HTTP cookies by examining process memory or disk, or to backdoor the browser by placing a DLL in a directory not accessible to other UIDs. Other publications may focus on exploiting buffer overflows in non-privileged programs through a route that is unlikely to ever be exposed to the outside world.

  • Attacks that require physical access to brick or otherwise disable a commodity computing device. After all, in almost all cases, having the attacker bring a hammer or wire cutters will work just as well.

  • Web application security issues that are exploitable only against users who are using badly outdated browsers or plugins. Sure, the attack may work - but so will dozens of remote code execution and SOP bypass flaws that the client software is already known to be vulnerable to.

  • New, specific types of attacks that work only against victims who already exhibit behaviors well-understood to carry unreasonable risk - say, the willingness to retype account credentials without looking at the address bar, or to accept and execute unsolicited downloads.

  • Sleight-of-hand vectors that assume, without explaining why, that the attacker can obtain or tamper with some types of secrets (e.g., capability-bearing URLs), but not others (e.g., user's cookies, passwords, server's private SSL keys), despite their apparent similarity.

Some theorists argue that security issues exist independently of exploitation vectors, and that they must be remedied regardless of whether one can envision a probable attack vector. Perhaps this distinction is useful in some contexts - but it is still our responsibility to precisely and unambiguously differentiate between immediate hazards and more abstract thought experiments of that latter kind.

November 07, 2014

Pulling JPEGs out of thin air

This is an interesting demonstration of the capabilities of afl; I was actually pretty surprised that it worked!

$ mkdir in_dir
$ echo 'hello' >in_dir/hello
$ ./afl-fuzz -i in_dir -o out_dir ./jpeg-9a/djpeg

In essence, I created a text file containing just "hello" and asked the fuzzer to keep feeding it to a program that expects a JPEG image (djpeg is a simple utility bundled with the ubiquitous IJG jpeg image library; libjpeg-turbo should also work). Of course, my input file does not resemble a valid picture, so it gets immediately rejected by the utility:

$ ./djpeg '../out_dir/queue/id:000000,orig:hello'
Not a JPEG file: starts with 0x68 0x65

Such a fuzzing run would normally be completely pointless: there is essentially no chance that a "hello" could ever be turned into a valid JPEG by a traditional, format-agnostic fuzzer, since the probability that dozens of random tweaks would align just right is astronomically low.

Luckily, afl-fuzz can leverage lightweight assembly-level instrumentation to its advantage - and within a millisecond or so, it notices that although setting the first byte to 0xff does not change the externally observable output, it triggers a slightly different internal code path in the tested app. Equipped with this information, it decides to use that test case as a seed for future fuzzing rounds:

$ ./djpeg '../out_dir/queue/id:000001,src:000000,op:int8,pos:0,val:-1,+cov'
Not a JPEG file: starts with 0xff 0x65
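
The reason that first byte matters on its own can be seen in the way libjpeg validates the header. Paraphrasing the check from jdmarker.c (a simplified sketch, not the verbatim source):

#include <stdio.h>
#include <stdlib.h>

/* Because the two bytes are tested with a short-circuiting ||, an input that
   starts with 0xff already exercises a slightly different path through the
   condition than one starting with 'h' - even though both inputs are rejected
   with the exact same error message. */
static void check_soi(int c, int c2) {
  if (c != 0xFF || c2 != 0xD8) {    /* 0xFF 0xD8 is the SOI marker */
    fprintf(stderr, "Not a JPEG file: starts with 0x%02x 0x%02x\n", c, c2);
    exit(1);
  }
}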

When later working with that second-generation test case, the fuzzer almost immediately notices that setting the second byte to 0xd8 does something even more interesting:

$ ./djpeg '../out_dir/queue/id:000004,src:000001,op:havoc,rep:16,+cov'
Premature end of JPEG file
JPEG datastream contains no image

At this point, the fuzzer managed to synthesize the valid file header - and actually realized its significance. Using this output as the seed for the next round of fuzzing, it quickly starts getting deeper and deeper into the woods. Within several hundred generations and several hundred million execve() calls, it figures out more and more of the essential control structures that make up a valid JPEG file - SOFs, Huffman tables, quantization tables, SOS markers, and so on:

$ ./djpeg '../out_dir/queue/id:000008,src:000004,op:havoc,rep:2,+cov'
Invalid JPEG file structure: two SOI markers
...
$ ./djpeg '../out_dir/queue/id:001005,src:000262+000979,op:splice,rep:2'
Quantization table 0x0e was not defined
...
$ ./djpeg '../out_dir/queue/id:001282,src:001005+001270,op:splice,rep:2,+cov' >.tmp; ls -l .tmp
-rw-r--r-- 1 lcamtuf lcamtuf 7069 Nov  7 09:29 .tmp

The first image, hit after about six hours on an 8-core system, looks very unassuming: it's a blank grayscale image, 3 pixels wide and 784 pixels tall. But the moment it is discovered, the fuzzer starts using the image as a seed - rapidly producing a wide array of more interesting pics for every new execution path.

Of course, synthesizing a complete image out of thin air is an extreme example, and not necessarily a very practical one. But more prosaically, fuzzers are meant to stress-test every feature of the targeted program. With instrumented, generational fuzzing, lesser-known features (e.g., progressive, black-and-white, or arithmetic-coded JPEGs) can be discovered and locked onto without requiring a giant, high-quality corpus of diverse test cases to seed the fuzzer with.

The cool part of the libjpeg demo is that it works without any special preparation: there is nothing special about the "hello" string, the fuzzer knows nothing about image parsing, and is not designed or fine-tuned to work with this particular library. There aren't even any command-line knobs to turn. You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts; with giflib, it will make GIFs; with fileutils, it will create and flag ELF files, Atari 68xxx executables, x86 boot sectors, and UTF-8 with BOM. In almost all cases, the performance impact of instrumentation is minimal, too.

Of course, not all is roses; at its core, afl-fuzz is still a brute-force tool. This makes it simple, fast, and robust, but also means that certain types of atomically executed checks with a large search space may pose an insurmountable obstacle to the fuzzer; a good example of this may be:

if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

In practical terms, this means that afl-fuzz won't have as much luck "inventing" PNG files or non-trivial HTML documents from scratch - and will need a starting point better than just "hello". To consistently deal with code constructs similar to the one shown above, a general-purpose fuzzer would need to understand the operation of the targeted binary on a wholly different level. There is some progress on this in academia, but frameworks that can pull this off across diverse and complex codebases in a quick, easy, and reliable way are probably still years away.

PS. Several folks asked me about symbolic execution and other inspirations for afl-fuzz; I put together some notes in this doc.