
August 06, 2021

Practical Doomsday


Practical Doomsday is an enjoyable, data-packed romp through the world of rational emergency preparedness. It cuts through the noise of 24-hour news to help you zero in on what actually matters: building a diversified rainy-day fund, staying safe online, and dealing with common mishaps ranging from prolonged power outages to wildfires or floods.

The goal of the book is not to convince you that the end is nigh. To the contrary: I want to reclaim the concept of prepping from the bunker-dwelling prophets of doom. Disasters are not rare, but emergency preparedness is not about expecting the worst; it's about being able to enjoy our lives to the fullest without worrying about the apocalyptic headline of the day.

☛ Click here to read a sample chapter
☛ Click to order from the publisher

Use code PREORDERDOOMSDAY to get 30% off. Preorders from the publisher get instant early access to a PDF version of the book. Orders can also be placed on Amazon, Barnes & Noble, and in most other places where books are sold.

March 03, 2018

Setting up bug bounties for success

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I've been itching to write about that for a while - but instead of dwelling on the mistakes of bygone days, I figured it may be better to talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There are plenty of differing views, but I like to think of such programs simply as a bid on researchers' time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don't bug bounties offer?

You don't get anything resembling a comprehensive security program or a systematic assessment of your platforms. Researchers end up looking for bugs that offer favorable effort-to-payoff ratios given their skills and the very imperfect information they have about your enterprise. In other words, you may end up with a hundred people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it's difficult to fully eliminate this inherent skew. There's only so far you can jack up your top-tier rewards, and only so far you can go lowering the bottom-tier ones.

Don't you have to outcompete the black market to get all the "good" bugs?

There is a free market price discovery component to it all: if you're not getting the engagement you were hoping for, you should probably consider paying more.

That said, there are going to be researchers who'd rather hurt you than work for you, no matter how much you pay; you don't have to win them over, and you don't have to outspend every authoritarian government or every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most researchers don't want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you're paying for different things: a black market buyer probably wants a reliable exploit capable of delivering payloads, and then demands silence for months or years to come; a vendor-run bug bounty program is usually perfectly happy with a reproducible crash and doesn't mind a researcher blogging about their work.

In fact, while money is important, you will probably find out that it's not enough to retain your top talent; many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close relationship with your security team, comparing notes, and growing together. Fostering that partnership can be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it's natural to be wary and try to plan for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances, take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that's not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely apologize for mishaps; it's not a sign of weakness to say "sorry, we messed up". And you will almost certainly not end up in the courtroom for doing so.

It's good to foster a healthy and productive relationship with the community, so that they come to your defense when something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb. Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers; and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange, ask them to stop as soon as they get a shell and never access any data that isn't their own.

So... are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider publishing educational articles, giving advice on how to investigate and structure reports, or offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration test, they are more random. The annual expenses tend to be fairly predictable, but there is always some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the vulnerabilities in the first place - but once you invite the research, the clock starts ticking and you need to ship fixes reasonably fast.

So... should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous. But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your team, too. You won't know for sure until you try.

February 24, 2018

Getting product security engineering right

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting, process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many other specializations in our field of expertise - say, incident response or network security - we have virtually no time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would be fitting to follow up with several notes on what I learned about structuring product security work - and about actually making the effort count.

The "comfort zone" trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of a critic who pokes holes in other people's designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help - and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act of breaking somebody else's software - and then write scathing but ineffectual critiques addressed to executives, demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better brace for our triumphant "I told you so" at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk is badly out of whack with the rest of the company. Whatever the cause, I've seen high-level escalations where the security team spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive at a technical solution - such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with software engineering backgrounds, even if it's just the ownership of a small open-source project. This can broaden our horizons, helping us see that we all make the same mistakes - and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that "severity 2" vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours - no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn't erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over - and we need to rein it in. No matter what the last year's policy says, we usually don't need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it's a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It's actually fun to think about such hypotheticals ahead of time - and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals - investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it's important to combat this mindset - and we might as well tackle it head-on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize the work, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow - and that the VP's first order of business could be asking, "why do you have so many people here and how do I know they are doing the right thing?". It's a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that's happening across the company - and even if we did, we're not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don't happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
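
To make the "safe by default" idea a bit more concrete, here is a minimal sketch of what such an API can look like for the SQL case, written against the stock SQLite C interface; the table, column, and helper names are made up for illustration. The point is simply that untrusted strings can only enter the statement as bound parameters, never through string concatenation, so the injection bug becomes hard to write in the first place:

#include <sqlite3.h>

/* Illustrative safe-by-default helper: the query text is a fixed constant,
   and attacker-controlled input can only reach the statement as a bound
   parameter - there is no code path that concatenates it into the SQL. */
static int lookup_user_id(sqlite3* db, const char* untrusted_name) {
  static const char kQuery[] = "SELECT id FROM users WHERE name = ?;";
  sqlite3_stmt* stmt = NULL;
  int user_id = -1;

  if (sqlite3_prepare_v2(db, kQuery, -1, &stmt, NULL) != SQLITE_OK)
    return -1;

  /* The untrusted string is passed as data, not parsed as SQL. */
  sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

  if (sqlite3_step(stmt) == SQLITE_ROW)
    user_id = sqlite3_column_int(stmt, 0);

  sqlite3_finalize(stmt);
  return user_id;
}

Libraries that enforce this pattern by construction - for example, by only accepting compile-time constants as query strings - push an entire bug class out of reach of individual engineers, which is exactly the kind of leverage described above.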

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn't be the sole focus of your team. It's also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members - not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it's a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail - and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase - and later on, to measure the impact of any remediation and hardening work.

February 02, 2018

Progressing from tech to leadership

I've been a technical person all my life. I started doing vulnerability research in the late 1990s - and even today, when I'm not fiddling with CNC-machined robots or making furniture, I'm probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I'm a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It's been an interesting ride - and now that I'm on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I've been thinking a bit about the lessons learned along the way.

Of course, I'm a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it - and it's possible to draw precisely the wrong conclusions from such anecdotes. Still, I'm very proud of the culture we've created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident - so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one's level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems - and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here's a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you've started or helped improve; after all, it's your baby: you're familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it's also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don't make them deathly afraid of messing up, they will soon excel at their new jobs - and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers - but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don't run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don't spiral out of control - and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it's easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year's, or if you ever find yourself ending an argument by deferring to a policy or a process document, it's probably a sign that you're getting complacent. In security, complacency usually ends in tears - and when it doesn't, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they'd streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what the second line of defense or containment would be should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I'd usually ask them to imagine that a brand new VP shows up in our office and, as their first order of business, asks "why do you have so many people here and how do I know they are doing the right things?". Not everything in security can be quantified, but hard data can validate many of your assumptions - and will alert you to unseen issues early on.

When focusing on data, it's important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you're asking the right questions; instead of "how satisfied are you with our process", try "is your product better as a consequence of talking to us?"

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It's probable that your team disagrees with your vision or its feasibility - and that you're either not listening to their feedback, or they don't think you'd care. It's good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It's good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that's needed is honesty about the business trade-offs, so that your team feels like your "partner in crime", not a victim of circumstance. For example, we'd tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs - and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it's how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I've come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, a fair proportion of them put the maintenance of their celebrity status ahead of their job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it's important to move quickly, rather than prolonging the uncertainty - but it's also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don't react to repeated scuffles, your best people will probably start looking for other opportunities: it's draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don't run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you'll ever make.

December 10, 2017

Weekend distractions, part deux: a bench, and stuff

Continuing the tradition of the previous post, here's a perfectly good bench:

The legs are 8/4 hard maple, cut into 2.3" (6 cm) strips and then glued together. The top is 4/4 domestic walnut, with an additional strip glued to the bottom to make it look thicker (because gosh darn, walnut is expensive).

Cut on a bandsaw, joined with a biscuit joiner and glue, then sanded - that's about it. I'm still applying finish (nitrocellulose lacquer from a rattle can), but this was the last moment I could snap a photo (it was about to get dark), and it basically looks like the final product anyway. Pretty simple, but it turned out nice.

Several additional, smaller woodworking projects here.

November 04, 2017

Weekend distractions: a perfectly good dining table

I've been a DIYer all my adult life. Some of my non-software projects still revolve around computers, especially when they deal with CNC machining or electronics. But I've also been dabbling in woodworking for quite a while. I have not put that much effort into documenting my projects (say, cutting boards) - but I figured it's time to change that. It may inspire some folks to give a new hobby a try - or help them overcome a problem or two.

So, without further ado, here's the build log for a dining table I put together over the past two weekends or so. I think it turned out pretty nice:

Have fun!

April 22, 2017

AFL experiments, or please eat your brötli

When messing around with AFL, you sometimes stumble upon something unexpected or amusing. Say, having the fuzzer spontaneously synthesize JPEG files, come up with non-trivial XML syntax, or discover SQL semantics.

It is also fun to challenge yourself to employ fuzzers in non-conventional ways. Two canonical examples are having your fuzzing target call abort() whenever two libraries that are supposed to implement the same algorithm produce different outputs for identical input data; or whenever a library produces different outputs when asked to encode or decode the same data several times in a row.
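
To make the first trick more tangible, here is a minimal sketch of such an equivalence harness. The decode_with_library_a() and decode_with_library_b() functions are hypothetical stand-ins for the two implementations being compared; their trivial bodies below would be replaced with calls into the actual libraries under test:

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical stand-ins for two independent implementations of the same
   algorithm; a real harness would call into the libraries under test here. */
static size_t decode_with_library_a(const unsigned char* in, size_t len,
                                    unsigned char* out, size_t out_max) {
  size_t n = len < out_max ? len : out_max;
  memcpy(out, in, n);
  return n;
}

static size_t decode_with_library_b(const unsigned char* in, size_t len,
                                    unsigned char* out, size_t out_max) {
  size_t n = len < out_max ? len : out_max;
  memcpy(out, in, n);
  return n;
}

int main(void) {
  static unsigned char in[4096], out_a[65536], out_b[65536];

  /* Read a single test case from stdin, the way a typical AFL target would. */
  ssize_t len = read(0, in, sizeof(in));
  if (len <= 0) return 0;

  size_t len_a = decode_with_library_a(in, (size_t)len, out_a, sizeof(out_a));
  size_t len_b = decode_with_library_b(in, (size_t)len, out_b, sizeof(out_b));

  /* Any divergence between the two implementations is surfaced to the
     fuzzer as a crash. */
  if (len_a != len_b || memcmp(out_a, out_b, len_a) != 0)
    abort();

  return 0;
}

The stability variant works the same way, except that the harness runs its own input through the same library several times and aborts if the results ever differ.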

Such tricks may sound fanciful, but they actually find interesting bugs. In one case, AFL-based equivalence fuzzing revealed a bunch of fairly rudimentary flaws in common bignum libraries, with some theoretical implications for crypto apps. Another time, output stability checks revealed long-lived issues in IJG jpeg and other widely-used image processing libraries, leaking data across web origins.

In one of my recent experiments, I decided to fuzz brotli, an innovative compression library used in Chrome. But since it has already been fuzzed for many CPU-years, I wanted to do it with a twist: stress-test the compression routines, rather than the usually targeted decompression side. The latter is a far more fruitful target for security research, because decompression code normally expects well-formed inputs and has plenty of structure to parse, whereas compression code is meant to accept arbitrary data and not think about it too hard. That said, the low likelihood of flaws also means that the compression bits are a relatively unexplored surface that may be worth poking with a stick every now and then.
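
A compression-side harness does not need to be anything fancy. The sketch below, built around the public BrotliEncoderCompress() entry point, shows roughly the shape of the thing - it is not the exact harness used for the experiment - and simply hands whatever arrives on stdin straight to the encoder:

#include <brotli/encode.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
  static uint8_t in[65536];

  /* Read one plaintext test case from stdin, as a typical AFL target would. */
  ssize_t len = read(0, in, sizeof(in));
  if (len <= 0) return 0;

  /* Compress the arbitrary, attacker-shaped input; the fuzzer is simply
     watching for crashes or pathological slowdowns in the encoder. */
  size_t out_size = BrotliEncoderMaxCompressedSize((size_t)len);
  uint8_t* out = malloc(out_size);
  if (!out) return 0;

  BrotliEncoderCompress(BROTLI_DEFAULT_QUALITY, BROTLI_DEFAULT_WINDOW,
                        BROTLI_MODE_GENERIC, (size_t)len, in, &out_size, out);

  free(out);
  return 0;
}

Compile it with afl-gcc or afl-clang-fast, link against brotli's encoder library, and point afl-fuzz at the result.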

In this case, the library held up admirably - save for a handful of computationally intensive plaintext inputs (that are now easy to spot due to the recent improvements to AFL). But the output corpus synthesized by AFL, after being seeded with just a single file containing "0", featured quite a few peculiar finds:

  • Strings that looked like viable bits of HTML or XML: <META HTTP-AAA IDEAAAA, DATA="IIA DATA="IIA DATA="IIADATA="IIA, </TD>.

  • Non-trivial numerical constants: 1000,1000,0000000e+000000, 0,000 0,000 0,0000 0x600, 0000,$000: 0000,$000:00000000000000.

  • Nonsensical but undeniably English sentences: them with them m with them with themselves, in the fix the in the pin th in the tin, amassize the the in the in the inhe@massive in, he the themes where there the where there, size at size at the tie.

  • Bogus but semi-legible URLs: CcCdc.com/.com/m/ /00.com/.com/m/ /00(0(000000CcCdc.com/.com/.com

  • Snippets of Lisp code: )))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))).

The results are quite unexpected, given that they are just a product of randomly mutating a single-byte input file and observing the code coverage in a simple compression tool. The explanation is that brotli, in addition to more familiar binary coding methods, uses a static dictionary constructed by analyzing common types of web content. Somehow, by observing the behavior of the program, AFL was able to incrementally reconstruct quite a few of these hardcoded keywords - and then put them together in various semi-interesting ways. Not bad.

August 26, 2016

So you want to work in security (but are too lazy to read Parisa's excellent essay)

If you have not seen it yet, Parisa Tabriz penned a lengthy and insightful post, drawing on her own experiences, about what it takes to succeed in the field of information security.

My own experiences align pretty closely with Parisa's take, so if you are making your first steps down this path, I strongly urge you to give her post a good read. But if I had to sum up my lessons from close to two decades in the industry, I would probably boil them down to four simple rules:

  1. Infosec is all about the mismatch between our intuition and the actual behavior of the systems we build. That makes it harmful to study the field as an abstract, isolated domain. To truly master it, dive into how computers work, then make a habit of asking yourself "okay, but what if assumption X does not hold true?" every step along the way.

  2. Security is a protoscience. Think of chemistry in the early 19th century: a glorious and messy thing, chock-full of colorful personalities, unsolved mysteries, and snake oil salesmen. You need passion and humility to survive. Those who think they have all the answers are a danger to themselves and to people who put their faith in them.

  3. People will trust you with their livelihoods, but will have no way to truly measure the quality of your work. Don't let them down: be painfully honest with yourself and work every single day to address your weaknesses. If you are not embarrassed by the views you held two years ago, you are getting complacent - and complacency kills.

  4. It will feel that way, but you are not smarter than software engineers. Walk in their shoes for a while: write your own code, show it to the world, and be humiliated by all the horrible mistakes you will inevitably make. It will make you better at your job - and will turn you into a better person, too.

August 04, 2016

CSS mix-blend-mode is bad for your browsing history

Up until mid-2010, any rogue website could get a good sense of your browsing habits by specifying a distinctive :visited CSS pseudo-class for any links on the page, rendering thousands of interesting URLs off-screen, and then calling the getComputedStyle API to figure out which pages appear in your browser's history.

After some deliberation, browser vendors have closed this loophole by disallowing almost all attributes in :visited selectors, save for the fairly indispensable ability to alter foreground and background colors for such links. The APIs have also been redesigned to prevent the disclosure of this color information via getComputedStyle.

This workaround did not fully eliminate the ability to probe your browsing history, but limited it to scenarios where the user can be tricked into unwittingly feeding the style information back to the website one URL at a time. Several fairly convincing attacks have been demonstrated against patched browsers - my own 2013 entry can be found here - but they generally depended on the ability to solicit one click or one keypress per every URL tested. In other words, the whole thing did not scale particularly well.

Or at least, it wasn't supposed to. In 2014, I described a neat trick that exploited normally imperceptible color quantization errors within the browser, amplified by stacking elements hundreds of times, to implement an n-to-2ⁿ decoder circuit using just the background-color and opacity properties on overlaid <a href=...> elements to easily probe the browsing history of multiple URLs with a single click. To explain the basic principle, imagine wanting to test two links, and dividing the screen into four regions, like so:

  • Region #1 is lit only when both links are not visited (¬ link_a ∧ ¬ link_b),
  • Region #2 is lit only when link A is not visited but link B is visited (¬ link_a ∧ link_b),
  • Region #3 is lit only when link A is visited but link B is not (link_a ∧ ¬ link_b),
  • Region #4 is lit only when both links are visited (link_a ∧ link_b).

While the page couldn't directly query the visibility of the segments, we just had to convince the user to click the visible segment once to get the browsing history for both links, for example under the guise of dismissing a pop-up ad. (Of course, the attack could be scaled to far more than just 2 URLs.)

This problem was eventually addressed by browser vendors by simply improving the accuracy of color quantization when overlaying HTML elements; while this did not eliminate the risk, it made the attack far more computationally intensive, requiring the evil page to stack millions of elements to get practical results. Game over? Well, not entirely. In the footnote of my 2014 article, I mentioned this:

"There is an upcoming CSS feature called mix-blend-mode, which permits non-linear mixing with operators such as multiply, lighten, darken, and a couple more. These operators make Boolean algebra much simpler and if they ship in their current shape, they will remove the need for all the fun with quantization errors, successive overlays, and such. That said, mix-blend-mode is not available in any browser today."

As you might have guessed, patience is a virtue! As of mid-2016, mix-blend-mode - a feature to allow advanced compositing of bitmaps, very similar to the layer blending modes available in photo-editing tools such as Photoshop and GIMP - is shipping in Chrome and Firefox. And as it happens, in addition to their intended purpose, these non-linear blending operators permit us to implement arbitrary Boolean algebra. For example, to implement AND, all we need to do is use multiply:

  • black (0) x black (0) = black (0)
  • black (0) x white (1) = black (0)
  • white (1) x black (0) = black (0)
  • white (1) x white (1) = white (1)

For a practical demo, click here. A single click in that whack-a-mole game will reveal the state of 9 visited links to the JavaScript executing on the page. If this was an actual game and if it continued for a bit longer, probing the state of hundreds or thousands of URLs would not be particularly hard to pull off.

May 11, 2016

Clearing up some misconceptions around the "ImageTragick" bug

The recent, highly publicized "ImageTragick" vulnerability had countless web developers scrambling to fix a remote code execution vector in ImageMagick - a popular bitmap manipulation tool commonly used to resize, transcode, or annotate user-supplied images on the Web. Whatever your take on "branded" vulnerabilities may be, the flaw certainly is notable for its ease of exploitation: it is an embarrassingly simple shell command injection bug reminiscent of the security weaknesses prevalent in the 1990s, and nearly extinct in core tools today. The issue also bears some parallels to the more far-reaching but equally striking Shellshock bug.

That said, I believe that the publicity that surrounded the flaw was squandered by failing to make one very important point: even with this particular RCE vector fixed, anyone using ImageMagick to process attacker-controlled images is likely putting themselves at a serious risk.

The problem is fairly simple: for all its virtues, ImageMagick does not appear to be designed with malicious inputs in mind - and has a long and colorful history of lesser-known but equally serious security flaws. For a single data point, look no further than the work done several months ago by Jodie Cunningham. Jodie fuzzed IM with a vanilla setup of afl-fuzz - and quickly identified about two dozen possibly exploitable security holes, along with countless denial of service flaws. A small sample of Jodie's findings can be found here.

Jodie's efforts probably just scratched the surface; after "ImageTragick", a more recent effort by Hanno Boeck uncovered even more bugs; from what I understand, Hanno's work also went only as far as using off-the-shelf fuzzing tools. You can bet that, short of a major push to redesign the entire IM codebase, the trickle won't stop any time soon.

And so, the advice sorely missing from the "ImageTragick" webpage is this:

  • If all you need to do is simple transcoding or thumbnailing of potentially untrusted images, don't use ImageMagick. Make direct use of libpng, libjpeg-turbo, and giflib; for a robust way to use these libraries, have a look at the source code of Chromium or Firefox. The resulting implementation will be considerably faster, too.

  • If you have to use ImageMagick on untrusted inputs, consider sandboxing the code with seccomp-bpf or an equivalent mechanism that robustly restricts access to all user space artifacts and to the kernel attack surface. Rudimentary sandboxing technologies, such as chroot() or UID separation, are probably not enough; a minimal sketch of one such filter follows at the end of this list.

  • If all other options fail, be zealous about limiting the set of image formats you actually pass down to IM. The bare minimum is to thoroughly examine the headers of the received files. It is also helpful to explicitly specify the input format when calling the utility, so as to preempt the auto-detection code. For command-line invocations, this can be done like so:

    convert [...other params...] -- jpg:input-file.jpg jpg:output-file.jpg

    The JPEG, PNG, and GIF handling code in ImageMagick is considerably more robust than the code that supports PCX, TGA, SVG, PSD, and the likes.
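
As for the sandboxing advice above: for readers who have not worked with seccomp-bpf before, the fragment below shows the general shape of such a filter, written with the libseccomp helper library rather than raw BPF. The syscall allowlist is purely illustrative and would need to be tuned to the actual workload; a production setup would also drop privileges and pre-open any files it needs, since open() is deliberately not on the list:

#include <seccomp.h>
#include <stdlib.h>

/* Illustrative only: confine the process to a handful of syscalls before
   any untrusted image data is parsed. Everything else kills the process. */
static void enter_sandbox(void) {
  scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);
  if (!ctx) exit(1);

  if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0)   ||
      seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0)  ||
      seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(brk), 0)    ||
      seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(mmap), 0)   ||
      seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(munmap), 0) ||
      seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0))
    exit(1);

  if (seccomp_load(ctx) != 0) exit(1);
  seccomp_release(ctx);
}

A process that calls enter_sandbox() right before touching attacker-supplied data gets killed by the kernel the moment the decoding code tries to do anything outside the allowlist.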

February 09, 2016

Automatically inferring file syntax with afl-analyze

The nice thing about the control flow instrumentation used by American Fuzzy Lop is that it allows you to do much more than just, well, fuzzing stuff. For example, the suite has long shipped with a standalone tool called afl-tmin, capable of automatically shrinking test cases while still making sure that they exercise the same functionality in the targeted binary (or that they trigger the same crash). Another tool, afl-cmin, employs a similar trick to eliminate redundant files in large testing corpora.

The latest release of AFL features another nifty new addition along these lines: afl-analyze. The tool takes an input file, sequentially flips bytes in this data stream, and then observes the behavior of the targeted binary after every flip. From this information, it can infer several things:

  • No-op blocks that do not elicit any changes to control flow (say, comments, pixel data, etc.).
  • Checksums, magic values, and other short, atomically compared tokens where any bit flip causes the same change to program execution.
  • Longer blobs exhibiting this property - almost certainly corresponding to checksummed or encrypted data.
  • "Pure" data sections, where analyzer-injected changes consistently elicit differing changes to control flow.

This gives us some remarkable and quick insights into the syntax of the file and the behavior of the underlying parser. It may sound too good to be true, but actually seems to work in practice. For a quick demo, let's see what afl-analyze has to say about running cut -d ' ' -f1 on a text file:

We see that cut really only cares about spaces and newlines. Interestingly, it also appears that the tool always tokenizes the entire line, even if it's just asked to return the first token. Neat, right?

Of course, the value of afl-analyze is greater for incomprehensible binary formats than for simple text utilities; perhaps even more so when dealing with black-box parsers (which can be analyzed thanks to the runtime QEMU instrumentation supported in AFL). To try out the tool's ability to deal with binaries, let's check out libpng:

This looks pretty damn good: we have two four-byte signatures, followed by chunk length, four-byte chunk name, chunk length, some image metadata, and then a comment section. Neat, right? All in a matter of seconds: no configuration needed and no knobs to turn.

Of course, the tool shipped just moments ago and is still very much experimental; expect some kinks. Field testing and feedback welcome!

June 11, 2015

New in AFL: persistent mode

Although American Fuzzy Lop comes with a couple of nifty performance optimizations, it still relies on a fairly resource-intensive routine that is common to most general-purpose fuzzers: it continually creates new processes, feeds them a single test case, and then discards them to start over from scratch.

To avoid the overhead of the notoriously slow execve() syscall and the linking process, the fuzzer automatically leverages the forkserver optimization, where new processes are cloned from a copy-on-write master perpetually kept in a virgin state. This allows many targets to be fuzzed faster than with other, conventional tools. But even with this hack, each new input still incurs the cost of fork(). On all supported OSes with the exception of MacOS X, the fork() call is actually surprisingly fast - but certainly does not come free.

For some common fuzzing targets, such as zlib or libpng, the constant cycle of forking and initialization is a significant and avoidable bottleneck. In many cases, the underlying APIs are either stateless, or can be reliably reset to a nearly-pristine state across inputs - so at least in principle, you don't have to throw away the child process after every single run. That's where in-process fuzzing tends to shine: in this scheme, the test cases are generated inline and fed to the underlying API in a custom-written, single-process loop. The speed gains offered by in-process fuzzing can be as high as 10x, but the approach comes at a price; for example, it is easily derailed by accidental memory leaks or DoS conditions in the tested code.

Well, the good news is that starting with version 1.81b, afl-fuzz supports an optional "persistent" mode that combines the benefits of in-process fuzzing with the robustness of a more traditional multi-process tool. In this scheme, the fuzzer feeds test cases to a separate, long-lived process that reads the input data, passes it to the instrumented API, and notifies the parent about a successful run by stopping its own execution; eventually, when resumed by the parent, the process simply loops back to the start. You just need to write a minimalist harness to implement the loop, but AFL takes care of most of the tricky stuff, including crash handling, stall detection, and the usual instrumentation magic that AFL is designed for:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* __AFL_LOOP() is provided when compiling with afl-clang-fast (llvm_mode). */

int main(int argc, char** argv) {

  static char buf[100];

  while (__AFL_LOOP(1000)) {

    /* Reset state between runs. */
    memset(buf, 0, 100);

    /* Read input data from stdin. */
    read(0, buf, 100);

    /* Parse it in some vulnerable way. You'd normally call a library here. */
    if (buf[0] != 'p') puts("error 1"); else
    if (buf[1] != 'w') puts("error 2"); else
    if (buf[2] != 'n') puts("error 3"); else
      abort();

  }

  return 0;

}

For a more complete example, see experimental/persistent_demo/ and be sure to read the last section of llvm_mode/README.llvm. This feature is inspired by the work done by Kostya Serebryany on LibFuzzer (which is, in turn, inspired by AFL); additional credit goes to Christian Holler, who started a conversation that finally prompted me to integrate this mode with the tool.

April 14, 2015

Finding bugs in SQLite, the easy way

SQLite is probably the most popular embedded database in use today; it is also known for being exceptionally well-tested and robust. In contrast to traditional SQL solutions, it does not rely on the usual network-based client-server architecture and does not employ a complex ACL model; this simplicity makes it comparatively safe.

At the same time, because of its versatility, SQLite sometimes finds use as the mechanism behind SQL-style query APIs that are exposed between privileged execution contexts and less-trusted code. For an example, look no further than the WebDB / WebSQL mechanism available in some browsers; in this setting, any vulnerabilities in the SQLite parser can open up the platform to attacks.

With this in mind, I decided to take SQLite for a spin with - you guessed it - afl-fuzz. As discussed some time ago, languages such as SQL tend to be difficult to stress-test in a fully automated manner: without an intricate model of the underlying grammar, random mutations are unlikely to generate anything but trivially broken statements. That said, afl-fuzz can usually leverage the injected instrumentation to sort out the grammar on its own. All I needed to get it started was a basic dictionary; for that, I took about 5 minutes to extract a list of reserved keywords from the SQLite docs (now included with the fuzzer as testcases/_extras/sql/). Next, I seeded the fuzzer with a single test case:

create table t1(one smallint);
insert into t1 values(1);
select * from t1;

This approach netted a decent number of interesting finds, some of which were mentioned in an earlier blog post that first introduced the dictionary feature. But when looking at the upstream fixes for the initial batch, I had a sudden moment of clarity and recalled that the developers of SQLite maintained a remarkably well-structured and comprehensive suite of hand-written test cases in their repository.

I figured that this body of working SQL statements may be a much better foundation for the fuzzer to build on, compared to my naive query - so I grepped the test cases out, split them into files, culled the resulting corpus with afl-cmin, and trimmed the inputs with afl-tmin. After a short while, I had around 550 files, averaging around 220 bytes each. I used them as a starting point for another run of afl-fuzz.

This configuration very quickly yielded a fair number of additional, unique fault conditions, ranging from NULL pointer dereferences, to memory fenceposts visible only under ASAN or Valgrind, to pretty straightforward uses of uninitialized pointers (link), bogus calls to free() (link), heap buffer overflows (link), and even stack-based ones (link). The resulting collection of 22 crashing test cases is included with the fuzzer in docs/vuln_samples/sqlite_*. They include some fairly ornate minimized inputs, say:

CREATE VIRTUAL TABLE t0 USING fts4(x,order=DESC);
INSERT INTO t0(docid,x)VALUES(-1E0,'0(o');
INSERT INTO t0 VALUES('');
INSERT INTO t0 VALUES('');
INSeRT INTO t0 VALUES('o');
SELECT docid FROM t0 WHERE t0 MATCH'"0*o"';

All in all, it's a pretty good return on investment for about 30 minutes of actual work - especially for a piece of software functionally tested and previously fuzzed to such a significant extent.

PS. I was truly impressed with Richard Hipp fixing each and every one of these cases within a couple of hours of sending in a report. The fixes have been incorporated in version 3.8.9 of SQLite and have been public for a while, but there was no upstream advisory; depending on your use case, you may want to update soon.

March 30, 2015

On journeys

- 1 -

Poland is an ancient country whose history is deeply intertwined with that of Western civilization. In its glory days, the Polish-Lithuanian Commonwealth sprawled across vast expanses of land in central Europe, from the Black Sea to the Baltic Sea. But over the past two centuries, it suffered a series of military defeats and political partitions at the hands of its closest neighbors: Russia, Austria, Prussia, and - later - Germany.

After more than a hundred years of foreign rule, Poland re-emerged as an independent state in 1918, only to face the armies of Nazi Germany at the onset of World War II. With Poland's European allies reneging on their earlier military guarantees, the fierce fighting left the country in ruins. Some six million people died within its borders - more than ten times the death toll in France or in the UK. Warsaw was reduced to a sea of rubble, with perhaps one in ten buildings still standing by the end of the war.

With the collapse of the Third Reich, Franklin D. Roosevelt, Winston Churchill, and Joseph Stalin held a meeting in Yalta to decide the new order for war-torn Europe. At Stalin's behest, Poland and its neighboring countries were placed under Soviet political and military control, forming what has become known as the Eastern Bloc.

Over the next several decades, the Soviet satellite states experienced widespread repression and economic decline. But weakened by the expense of the Cold War, the communist chokehold on the region eventually began to wane. In Poland, even the introduction of martial law in 1981 could not put an end to sweeping labor unrest. Narrowly dodging the specter of Soviet intervention, the country regained its independence in 1989 and elected its first democratic government; many other Eastern Bloc countries soon followed suit.

Ever since then, Poland has enjoyed a period of unprecedented growth and has emerged as one of the more robust capitalist democracies in the region. In just two decades, it shed many of its backward, state-run heavy industries and adopted a modern, service-oriented economy. But the effects of the devastating war and the lost decades under communist rule still linger on - whether you look at the country's infrastructure, at its socialist-realist cityscapes, at its political traditions, or at the depressingly low median wage.

When thinking about the American involvement in the Cold War, people around the world may recall Vietnam, the Bay of Pigs, or the proxy wars fought in the Middle East. But in Poland and many of its neighboring states, the image people remember most is the fall of the Berlin Wall.

- 2 -

I was born in Warsaw in the winter of 1981, at the onset of martial law, with armored vehicles rolling onto Polish streets. My mother, like many of her generation, moved to the capital in the sixties as a part of an effort to rebuild and repopulate the war-torn city. My grandma would tell eerie stories of Germans and Soviets marching through their home village somewhere in the west. I liked listening to the stories; almost every family in Poland had some to tell.

I did not get to know my father. I knew his name; he was a noted cinematographer who worked on big-ticket productions back in the day. He left my mother when I was very young and never showed interest in staying in touch. He had a wife and other children, so it might have been that.

Compared to him, mom hasn't done well for herself. We ended up in social housing in one of the worst parts of the city, on the right bank of the Vistula river. My early memories from school are that of classmates sniffing glue from crumpled grocery bags. I remember my family waiting in lines for rationed toilet paper and meat. As a kid, you don't think about it much.

The fall of communism came suddenly. I have a memory of grandma listening to broadcasts from Radio Free Europe, but I did not understand what they were all about. I remember my family cheering one afternoon, transfixed to a black-and-white TV screen. I recall my Russian language class morphing into English; I had my first taste of bananas and grapefruits. There is the image of the monument of Feliks Dzierżyński coming down. I remember being able to go to a better school on the other side of Warsaw - and getting mugged many times on the way.

The transformation brought great wealth to some, but many others have struggled to find their place in the fledgling and sometimes ruthless capitalist economy. Well-educated and well-read, my mom ended up in the latter pack, at times barely making ends meet. I think she was in part a victim of circumstance, and in part a slave to a way of thinking that did not permit the possibility of taking chances or pursuing happiness.

- 3 -

Mother always frowned upon popular culture, seeing it as unworthy of an educated mind. For a time, she insisted that I only listen to classical music. She angrily shunned video games, comic books, and cartoons. I think she perceived technology as trivia; the only field of science she held in high regard was abstract mathematics, perhaps for its detachment from the mundane world. She hoped that I would learn Latin, a language she could read and write; that I would practice drawing and painting; or that I would read more of the classics of modernist literature.

Of course, I did almost none of that. I hid my grunge rock tapes between Tchaikovsky, listened to the radio under the sheets, and watched the reruns of The A-Team while waiting for her to come back from work. I liked electronics and chemistry a lot more than math. And when I laid my hands on my first computer - an 8-bit relic of British engineering from 1982 - I soon knew that these machines, in their incredible complexity and flexibility, were what I wanted to spend my time on.

I suspected I could become a competent programmer, but never had enough faith in my skill. Yet, in learning about computers, I realized that I had a knack for understanding complex systems and poking holes in how they work. Together with a couple of friends, I joined the nascent information security community in Europe, comparing notes on mailing lists. Before long, we were taking on serious consulting projects for banks and the government - usually on weekends and after school, but sometimes skipping a class or two. Well, sometimes more than that.

All of a sudden, I was facing an odd choice. I could stop, stay in school, and try to get a degree - going back every night to a cramped apartment, my mom sleeping on a folding bed in the kitchen, my personal space limited to a bare futon and a tiny desk. Or, I could seize the moment and try to make it on my own, without hoping that one day, my family would be able to give me a head start.

I moved out, dropped out of school, and took on a full-time job. It paid somewhere around $12,000 a year - a pittance anywhere west of the border, but a solid wage in Poland even today. Not much later, I was making twice as much, about the upper end of what one could hope for in this line of work. I promised myself to keep taking courses after hours, but I wasn't good at sticking to the plan. I moved in with my girlfriend, and at the age of 19, I felt for the first time that things were going to be all right.

- 4 -

Growing up in Europe, you get used to the barrage of low-brow swipes taken at the United States. Your local news will never pass up the opportunity to snicker about the advances of creationism somewhere in Kentucky. You can stay tuned for a panel of experts telling you about the vastly inferior schools, the medieval justice system, and the striking social inequality on the other side of the pond. You don't doubt their words - but deep down inside, no matter how smug the critics are, or how seemingly convincing their arguments, the American culture still draws you in.

My moment of truth came in the summer of 2000. A company from Boston asked me if I'd like to talk about a position on their research team; I looked at the five-digit figure and could not believe my luck. Moving to the US was an unreasonable risk for a kid who could barely speak English and had no safety net to fall back on. But that did not matter: I knew I had no prospects of financial independence in Poland - and besides, I simply needed to experience the New World through my own eyes.

Of course, even with a job offer in hand, getting into the United States is not an easy task. An engineering degree and a willing employer open up a straightforward path; it is simple enough that some companies would abuse the process to source cheap labor for menial, low-level jobs. With a visa tied to the petitioning company, such captive employees could not seek better wages or more rewarding work.

But without a degree, the options shrink drastically. For me, the only route would be a seldom-granted visa reserved for extraordinary skill - meant for the recipients of the Nobel Prize and other folks who truly stand out in their field of expertise. The attorneys looked over my publication record, citations, and the supporting letters from other well-known people in the field. Especially given my age, they thought we had a good shot. A few stressful months later, it turned out that they were right.

On the week of my twentieth birthday, I packed two suitcases and boarded a plane to Boston. My girlfriend joined me, miraculously securing a scholarship at a local university to continue her physics degree; her father helped her with some of the costs. We had no idea what we were doing; we had perhaps a few hundred bucks on us, enough to get us through the first couple of days. Four thousand miles away from our place of birth, we were starting a brand new life.

- 5 -

Culture shock gets you, but not in the sense you imagine. You expect big contrasts, a single eye-opening day to remember for the rest of your life. But driving down a highway in the middle of a New England winter, I couldn't believe how ordinary the world looked: just trees, boxy buildings, and pavements blanketed with dirty snow.

Instead of a moment of awe, you drown in a sea of small, inconsequential things, draining your energy and making you feel helpless and lost. It's how you turn on the shower; it's where you can find a grocery store; it's what they meant by that incessant "paper or plastic" question at the checkout line. It's how you get a mailbox key, how you make international calls, it's how you pay your bills with a check. It's the rules at the roundabout, it's your social security number, it's picking the right toll lane, it's getting your laundry done. It's setting up a dial-up account and finding the food you like in the sea of unfamiliar brands. It's doing all this without Google Maps or a Facebook group to connect with other expats nearby.

The other thing you don't expect is losing touch with your old friends; you can call or e-mail them every day, but your social frames of reference begin to drift apart, leaving less and less to talk about. The acquaintances you make in the office will probably never replace the folks you grew up with. We managed, but we weren't prepared for that.

- 6 -

In the summer, we had friends from Poland staying over for a couple of weeks. By the end of their trip, they asked to visit New York City one more time; we liked the Big Apple, so we took them on a familiar ride down I-95. One of them went to see the top of the World Trade Center; the rest of us just walked around, grabbing something to eat before we all headed back. A few days later, we were all standing in front of a TV, watching September 11 unfold in real time.

We felt horror and outrage. But when we roamed the unsettlingly quiet streets of Boston, greeted by flags and cardboard signs urging American drivers to honk, we understood that we were strangers a long way from home - and that our future in this country hung in the balance more than we would have thought.

Permanent residency is a status that gives a foreigner the right to live in the US and do almost anything they please - change jobs, start a business, or live off their savings all the same. For many immigrants, the pursuit of this privilege can take a decade or more; for some others, it stays forever out of reach, forcing them to abandon the country in a matter of days as their visas expire or companies fold. With my O-1 visa, I always counted myself among the lucky ones. Sure, it tied me to an employer, but I figured that sorting it out wouldn't be a big deal.

That proved to be a mistake. In the wake of 9/11, the agency known as the Immigration and Naturalization Service was being dismantled and replaced by a division within the Department of Homeland Security. My own seemingly straightforward immigration petition ended up somewhere in the bureaucratic vacuum that formed between the two administrative bodies. I waited patiently, watching the deepening market slump, and seeing my employer's prospects get dimmer and dimmer every month. I was ready for the inevitable, with other offers in hand, prepared to make my move perhaps the very first moment I could. But the paperwork just would not come through. With the Boston office finally shutting down, we packed our bags and booked flights. We faced the painful admission that for three years, we had chased nothing but a pipe dream. The only thing we had to show for it were two adopted cats, now sitting frightened somewhere in the cargo hold.

The now-worthless approval came through two months later; the lawyers, cheerful as ever, were happy to send me a scan. The hollowed-out remnants of my former employer were eventually bought by Symantec - the very company I had my backup offer from.

- 7 -

In a way, Europe's obsession with America's flaws made it easier to come home without ever explaining how the adventure really played out. When asked, I could just wing it: a mention of the death penalty or permissive gun laws would always get you a knowing nod, allowing the conversation to move on.

Playing to other people's preconceptions takes little effort; lying to yourself calls for more skill. It doesn't help that when you come back after three years away from home, you notice all the small annoyances that you used to simply tune out. Back then, Warsaw still had a run-down vibe: the dilapidated road from the airport; the drab buildings on the other side of the river; the uneven pavements littered with dog poop; the dirty walls at my mother's place, with barely any space to turn. You can live with it, of course - but it's a reminder that you settled for less, and it's a sensation that follows you every step of the way.

But more than the sights, I couldn't forgive myself for something else: that I was coming back home with just loose change in my pocket. There are some things that a failed communist state won't teach you, and personal finance is one of them; I always looked at money just as a reward for work, something you get to spend to brighten your day. The indulgences were never extravagant: perhaps I would take the cab more often, or have take-out every day. But no matter how much I made, I kept living paycheck-to-paycheck - the only way I knew, the way our family always did.

- 8 -

With a three-year stint in the US on your resume, you don't have a hard time finding a job in Poland. You face the music in a different way. I ended up with a salary around a fourth of what I used to make in Massachusetts; I simply decided not to think about it much. I wanted to settle down, work on interesting projects, marry my girlfriend, have a child. I started doing consulting work whenever I could, setting almost all the proceeds aside.

After four years with T-Mobile in Poland, I had enough saved to get us through a year or so - and in a way, it changed the way I looked at my work. Being able to take on ambitious challenges and learn new things started to matter more than jumping ship for a modest salary bump. Burned by the folly of pursuing riches in a foreign land, I put a premium on boring professional growth.

Comically, all this introspection made me realize that from where I stood, I had almost nowhere left to go. Sure, Poland had telcos, refineries, banks - but they all consumed the technologies developed elsewhere, shipped here in a shrink-wrapped box; as far as their IT went, you could hardly tell the companies apart. To be a part of the cutting edge, you had to pack your bags, book a flight, and take a jump into the unknown. I sure as heck wasn't ready for that again.

And then, out of the blue, Google swooped in with an offer to work for them from the comfort of my home, dialing in for a videoconference every now and then. The starting pay was about the same as what I was making at a telco, but I had no second thoughts. I didn't say it out loud, but deep down inside, I already knew what needed to happen next.

- 9 -

We moved back to the US in 2009, two years after taking the job, already on the hook for a good chunk of Google's product security and with the comfort of knowing where we stood. In a sense, my motive was petty: you could call it a desire to vindicate a failed adolescent dream. But in many other ways, I have grown fond of the country that shunned us once before; and I wanted our children to grow up without ever having to face the tough choices and the uncertain prospects I had to deal with in my earlier years.

This time, we knew exactly what to do: a quick stop at a grocery store on the way from the airport, followed by an e-mail to our immigration folks to get the green card paperwork out the door. A bit more than half a decade later, we were standing in a theater in Campbell, reciting the Oath of Allegiance and clinging on to our new certificates of US citizenship.

The ceremony closed a long and interesting chapter in my life. But more importantly, standing in that hall with people from all over the globe made me realize that my story is not extraordinary; many of them had lived through experiences far more harrowing and captivating than mine. If anything, my tale is hard to tell apart from that of countless other immigrants from the former Eastern Bloc. By some estimates, in the US alone, the Polish diaspora is about 9 million strong.

I know that the Poland of today is not the Poland I grew up in. It's not even the Poland I came back to in 2003; the gap to Western Europe is shrinking every single year. But I am grateful to now live in a country that welcomes more immigrants than any other place on Earth - and at the end of their journey, makes many of them feel at home. It also makes me realize how small and misguided are the conversations we are having about immigration - on both sides of the aisle, and not just here, but all over the developed world.

March 11, 2015

Another round of image bugs: PNG and JPEG XR

Today's release of MS15-024 and MS15-029 addresses two more image-related memory disclosure vulnerabilities in Internet Explorer - this time, affecting the little-known JPEG XR format supported by this browser, plus the far more familiar PNG. Similarly to the previously discussed bugs in MSIE TIFF and JPEG parsing, and to the BMP, ICO, GIF, and JPEG DHT & SOS flaws in Firefox and Chrome, these two were found with afl-fuzz. The earlier posts have more context - today, just enjoy some pretty pics, showing subsequent renderings of the same JPEG XR image:

Proof-of-concepts are here (JXR) and here (PNG). I am happy to report that Microsoft fixed them within roughly three months of the original report.

The total number of bugs squashed in this category is now ten. I have just one more multi-browser image parsing bug outstanding - but it should be an interesting one. Stay tuned.

February 10, 2015

Bi-level TIFFs and the tale of the unexpectedly early patch

Today's release of MS15-016 (CVE-2015-0061) fixes another of the series of browser memory disclosure bugs found with afl-fuzz - this time, related to the handling of bi-level (1-bpp) TIFFs in Internet Explorer (yup, MSIE displays TIFFs!). You can check out a simple proof-of-concept here, or simply enjoy this screenshot of eight subsequent renderings of the same TIFF file:

The vulnerability is conceptually similar to other previously-identified problems with GIF and JPEG handling in popular browsers (example 1, example 2), with the SOS handling bug in libjpeg, or the DHT bug in libjpeg-turbo (details here) - so I will try not to repeat the same points in this post.

Instead, I wanted to take note of what really sets this bug apart: Microsoft has addressed it in precisely 60 days, counting from my initial e-mail to the availability of a patch! This struck me as a big deal: although vulnerability research is not my full-time job, I do have a decent sample size - and I don't think I have seen this happen for any of the few dozen MSIE bugs that I reported to MSRC over the past few years. The average patch time always seemed to be closer to 6+ months - coupled with the somewhat odd practice of withholding attribution in security bulletins and engaging in seemingly punitive PR outreach if the reporter ever went public before that.

I am very excited and hopeful that rapid patching is the new norm - and huge thanks to MSRC folks if so :-)

January 09, 2015

afl-fuzz: making up grammar with a dictionary in hand

One of the most significant limitations of afl-fuzz is that its mutation engine is syntax-blind and optimized for compact data formats, such as binary files (e.g., archives, multimedia) or terse human-readable languages (RTF, shell scripts). Any general-purpose fuzzer will have a harder time dealing with more verbose dialects, such as SQL or HTTP. You can improve your odds in a variety of ways, and the results can be surprisingly good - but ultimately, it's never easy to get from Set-Cookie: FOO=BAR to Content-Length: -1 by randomly flipping bits.

The common wisdom is that if you want to fuzz data formats with such ornate grammars, you need to build a one-off, protocol-specific mutation engine with the appropriate syntax templates baked in. Of course, writing such code isn't easy. In essence, you need to manually build a model precise enough so that the generated test cases almost always make sense to the targeted parser - but creative enough to trigger unintended behaviors in that codebase. It takes considerable experience and a fair amount of time to get it just right.

I was thinking about using afl-fuzz to reach some middle ground between the two worlds. I quickly realized that if you give the fuzzer a list of basic syntax tokens - say, the set of reserved keywords defined in the spec - the instrumentation-guided nature of the tool means that even if we just mindlessly cobble the tokens together, we will be able to distinguish between combinations that are nonsensical and ones that actually follow the rules of the underlying grammar and therefore trigger new states in the instrumented binary. By discarding that first class of inputs and refining the other, we could progressively construct more complex and meaningful syntax as we go.

Ideas are cheap, but when I implemented this one, it turned out to be a good bet. For example, I tried it against sqlite, with the fuzzer fed a collection of keywords grabbed from the project's docs (-x testcases/_extras/sql/). Equipped with this knowledge, afl-fuzz quickly spewed out a range of valid if unusual statements, such as:

select sum(1)LIMIT(select sum(1)LIMIT -1,1);
select round( -1)````;
select group_concat(DISTINCT+1) |1;
select length(?)in( hex(1)+++1,1);
select abs(+0+ hex(1)-NOT+1) t1;
select DISTINCT "Y","b",(1)"Y","b",(1);
select - (1)AND"a","b";
select ?1in(CURRENT_DATE,1,1);
select - "a"LIMIT- /* */ /* */- /* */ /* */-1;
select strftime(1, sqlite_source_id());

(It also found a couple of crashing bugs.)
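If you want to replicate this at home, the setup is minimal. The sketch below is illustrative rather than a recipe: the harness name is made up, and the dictionary is just a handful of keywords written out as one-token-per-file entries, one of the forms accepted by the -x option (the run above used a ready-made directory of this kind).

# Hedged sketch: build a crude keyword dictionary and hand it to afl-fuzz.
# "sql_harness" is a hypothetical afl-instrumented binary that reads SQL
# from stdin; substitute whatever target you actually compiled.
$ mkdir dict in_dir
$ for kw in SELECT FROM WHERE GROUP ORDER LIMIT DISTINCT UNION; do printf '%s' "$kw" >"dict/$kw"; done
$ echo 'select 1;' >in_dir/seed.sql
$ ./afl-fuzz -i in_dir -o out_dir -x dict/ ./sql_harness

From there, the fuzzer splices the tokens into its queue entries and keeps only the combinations that light up new paths in the instrumented binary.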

All right, all right: grabbing keywords is much easier than specifying the underlying grammar, but it still takes some work. I've been wondering how to scratch that itch, too - and came up with a fairly simple algorithm that can help those who do not have the time or the inclination to construct a proper dictionary.

To explain the approach, it's useful to rely on the example of a PNG file. The PNG format uses four-byte, human-readable magic values to indicate the beginning of a section, say:

89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 | .PNG........IHDR
00 00 00 20 00 00 00 20 02 03 00 00 00 0e 14 92 | ................

The algorithm in question can identify "IHDR" as a syntax token by piggybacking on top of the deterministic, sequential bit flips that are already being performed by afl-fuzz across the entire file. It works by identifying runs of bytes that satisfy a simple property: that flipping them triggers an execution path that is distinct from the product of flipping stuff in the neighboring regions, yet consistent across the entire sequence of bytes.

This signal strongly implies that touching any of the affected bytes causes the failure of an underlying atomic check, such as header.magic_value == 0xDEADBEEF or strcmp(name, "Set-Cookie"). When such a behavior is detected, the entire blob of data is added to the dictionary, to be randomly recombined with other dictionary tokens later on.

This second trick is not a substitute for a proper, hand-crafted list of keywords; for one, it will only know about the syntax tokens that were present in the input files, or could be synthesized easily. It will also not do much when pitted against optimized, tree-based parsers that do not perform atomic string comparisons. (The fuzzer itself can often clear that last obstacle anyway, but the process will be slow.)

Well, that's it. If you want to try out the new features, click here and let me know how it goes!

November 30, 2014

afl-fuzz: nobody expects CDATA sections in XML

I made a very explicit, pragmatic design decision with afl-fuzz: for performance and reliability reasons, I did not want to get into static analysis or symbolic execution to understand what the program is actually doing with the data we are feeding to it. The basic algorithm for the fuzzer can be just summed up as randomly mutating the input files, and gently nudging the process toward new state transitions discovered in the targeted binary. That discovery part is done with the help of lightweight and extremely simple instrumentation injected by the compiler.

I had a working theory that this would make the fuzzer a bit smarter than a potato, but I wasn't expecting any fireworks. So, when the algorithm managed to not only find some useful real-world bugs, but to successfully synthesize a JPEG file out of nothing, I was genuinely surprised by the outcome.

Of course, while it was an interesting result, it wasn't an impossible one. In the end, the fuzzer simply managed to wiggle its way through a long and winding sequence of conditionals that operated on individual bytes, making them well-suited for the guided brute-force approach. What seemed perfectly clear, though, is that the algorithm wouldn't be able to get past "atomic", large-search-space checks such as:

if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

...or:

if (header.magic_value == 0x12345678) goto terminate_now;

This constraint made the tool less useful for properly exploring extremely verbose, human-readable formats such as HTML or JavaScript.

Some doubts started to set in when afl-fuzz effortlessly pulled out four-byte magic values and synthesized ELF files when testing programs such as objdump or file. As I later found out, this particular example is often used as a benchmark for complex static analysis or symbolic execution frameworks. But still, guessing four bytes could have been just a happy accident. With fast targets, the fuzzer can pull off billions of execs per day on a single machine, so it could have been dumb luck.

(As an aside: to deal with strings, I had this very speculative idea of special-casing memory comparison functions such as strcmp() and memcmp() by replacing them with non-optimized versions that can be instrumented easily. I have one simple demo of that principle bundled with the fuzzer in experimental/instrumented_cmp/, but I never got around to actually implementing it in the fuzzer itself.)
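To make that aside a bit more tangible, here is a minimal sketch of what such a shim could look like - my own illustration, not the code bundled in experimental/instrumented_cmp/:

/* Hedged sketch of an "instrumentable" strcmp() replacement: every
   character is compared in its own loop iteration, so each additional
   matching byte produces a new, distinct branch that coverage-guided
   instrumentation can observe and reward. */
int instrumentable_strcmp(const char *a, const char *b) {
  while (*a && *a == *b) { a++; b++; }
  return (unsigned char)*a - (unsigned char)*b;
}

Linking the target against such a deliberately naive implementation would let the fuzzer guess fixed strings one character at a time, at the cost of slower comparisons.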

Anyway, nothing quite prepared me for what the recent versions were capable of doing with libxml2. I seeded the session with:

<a b="c">d</a>

...and simply used that as the input for a vanilla copy of xmllint. I was merely hoping to stress-test the very basic aspects of the parser, without getting into any higher-order features of the language. Yet, after two days on a single machine, I found this buried in test case #4641 in the output directory:

...<![<CDATA[C%Ada b="c":]]]>...

What the heck?!

As most of you probably know, CDATA is a special, differently parsed section within XML, separated from everything else by fairly complex syntax - a nine-character sequence of bytes that can't be realistically discovered by just randomly flipping bits.

The finding is actually not magic; there are two possible explanations:

  • As a recent "well, it's cheap, so let's see what happens" optimization, AFL automatically sets -O3 -funroll-loops when calling the compiler for instrumented binaries, and some of the shorter fixed-string comparisons will actually just be expanded inline. For example, if the stars align just right, strcmp(buf, "foo") may be unrolled to:
    cmpb   $0x66,0x200c32(%rip)        # 'f'
    jne    4004b6 
    cmpb   $0x6f,0x200c2a(%rip)        # 'o'
    jne    4004b6 
    cmpb   $0x6f,0x200c22(%rip)        # 'o'
    jne    4004b6 
    cmpb   $0x0,0x200c1a(%rip)         # NUL
    jne    4004b6 
    
    ...which, by virtue of having a series of explicit and distinct branch points, can be readily instrumented on a per-character basis by afl-fuzz.

  • If that fails, it just so happens that some of the string comparisons in libxml2 in parser.c are done using a bunch of macros that will compile to similarly-structured code (as spotted by Ben Hawkes). This is presumably done so that the compiler can optimize this into a tree-style parser - whereas a linear sequence of strcmp() calls would lead to repeated and unnecessary comparisons of the already-examined chars.

    (Although done by hand in this particular case, the pattern is fairly common for automatically generated parsers of all sorts; a simplified sketch follows right after this list.)
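Here is a rough, hypothetical rendition of that second pattern - not code taken from parser.c, just an illustration of the shape the comparisons take:

/* Hedged illustration: comparing a fixed prefix one byte per && clause.
   The compiler lowers each clause into its own conditional branch, so
   the fuzzer can discover the token character by character instead of
   having to guess the whole string at once. */
#define CMP5(s, c1, c2, c3, c4, c5) \
  ((s)[0] == c1 && (s)[1] == c2 && (s)[2] == c3 && \
   (s)[3] == c4 && (s)[4] == c5)

static int looks_like_cdata_start(const char *cur) {
  return CMP5(cur, '<', '!', '[', 'C', 'D');
}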
The progression of test cases seems to support both of these possibilities:

<![
<![C b="c">
<![CDb m="c">
<![CDAĹĹ@
<![CDAT<!
...

I find this result a bit spooky because it's an example of the fuzzer defiantly and secretly working around one of its intentional and explicit design limitations - and definitely not something I was aiming for =)

Of course, treat this first and foremost as a novelty; there are many other circumstances where similar types of highly verbose text-based syntax would not be discoverable to afl-fuzz - or where, even if the syntax could be discovered through some special-cased shims, it would be a waste of CPU time to do it with afl-fuzz, rather than a simple syntax-aware, template-based tool.

(Coming up with an API to make template-based generators pluggable into AFL may be a good plan.)

By the way, here are some other gems from the randomly generated test cases:

<!DOCTY.
<?xml version="2.666666666666666666667666666">
<?xml standalone?>

November 24, 2014

afl-fuzz: crash exploration mode

One of the most labor-intensive portions of any fuzzing project is the work needed to determine if a particular crash poses a security risk. A small minority of all fault conditions will have obvious implications; for example, attempts to write or jump to addresses that clearly come from the input file do not need any debate. But most crashes are more ambiguous: some of the most common issues are NULL pointer dereferences and reads from oddball locations outside the mapped address space. Perhaps they are a manifestation of an underlying vulnerability; or perhaps they are just harmless non-security bugs. Even if you prefer to err on the side of caution and treat them the same, the vendor may not share your view.

If you have to make the call, sifting through such crashes may require spending hours in front of a debugger - or, more likely, rejecting a good chunk of them based on not much more than a hunch. To help triage the findings in a more meaningful way, I decided to add a pretty unique and nifty feature to afl-fuzz: the brand new crash exploration mode, enabled via -C.

The idea is very simple: you take a crashing test case and give it to afl-fuzz as a starting point for the automated run. The fuzzer then uses its usual feedback mechanisms and genetic algorithms to see how far it can get within the instrumented codebase while still keeping the program in the crashing state. Mutations that stop the crash from happening are thrown away; so are the ones that do not alter the execution path in any appreciable way. The occasional mutation that makes the crash happen in a subtly different way will be kept and used to seed subsequent fuzzing rounds later on.
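Kicking this off looks just like a regular session. A hedged example, with the crash file name and the unrtf invocation shown purely for illustration:

# Hedged example: seed the run with a single crashing input (the file name
# below is made up) and enable crash exploration with -C. The @@ token
# tells afl-fuzz where to substitute the path of each mutated file.
$ mkdir crash_in
$ cp out_dir/crashes/id:000000,sig:11,src:000042 crash_in/
$ ./afl-fuzz -C -i crash_in -o crash_out ./unrtf @@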

The beauty of this mode is that it very quickly produces a small corpus of related but somewhat different crashes that can be effortlessly compared to pretty accurately estimate the degree of control you have over the faulting address, or to figure out whether you can get past the initial out-of-bounds read by nudging it just the right way (and if the answer is yes, you probably get to see what happens next). It won't necessarily beat thorough code analysis, but it's still pretty cool: it lets you make a far more educated guess without having to put in any work.

As an admittedly trivial example, let's take a suspect but ambiguous crash in unrtf, found by afl-fuzz in its normal mode:

unrtf[7942]: segfault at 450 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

When fed to the crash explorer, the fuzzer took just several minutes to notice that by changing {\cb-44901990 in the converted RTF file to printable representations of other negative integers, it could quickly trigger faults at arbitrary addresses of its choice, corresponding mostly-linearly to the integer set:

unrtf[28809]: segfault at 88077782 ip 0805062b sp bff00210 error 4 in unrtf[8048000+1c000]
unrtf[26656]: segfault at 7271250 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

Given a bit more time, it would also almost certainly notice that choosing values within the mapped address space gets it past the crashing location and permits even more fun. So, automatic exploit writing next?

November 07, 2014

Pulling JPEGs out of thin air

This is an interesting demonstration of the capabilities of afl; I was actually pretty surprised that it worked!

$ mkdir in_dir
$ echo 'hello' >in_dir/hello
$ ./afl-fuzz -i in_dir -o out_dir ./jpeg-9a/djpeg

In essence, I created a text file containing just "hello" and asked the fuzzer to keep feeding it to a program that expects a JPEG image (djpeg is a simple utility bundled with the ubiquitous IJG jpeg image library; libjpeg-turbo should also work). Of course, my input file does not resemble a valid picture, so it gets immediately rejected by the utility:

$ ./djpeg '../out_dir/queue/id:000000,orig:hello'
Not a JPEG file: starts with 0x68 0x65

Such a fuzzing run would normally be completely pointless: there is essentially no chance that a "hello" could ever be turned into a valid JPEG by a traditional, format-agnostic fuzzer, since the probability that dozens of random tweaks would align just right is astronomically low.

Luckily, afl-fuzz can leverage lightweight assembly-level instrumentation to its advantage - and within a millisecond or so, it notices that although setting the first byte to 0xff does not change the externally observable output, it triggers a slightly different internal code path in the tested app. Equipped with this information, it decides to use that test case as a seed for future fuzzing rounds:

$ ./djpeg '../out_dir/queue/id:000001,src:000000,op:int8,pos:0,val:-1,+cov'
Not a JPEG file: starts with 0xff 0x65

When later working with that second-generation test case, the fuzzer almost immediately notices that setting the second byte to 0xd8 does something even more interesting:

$ ./djpeg '../out_dir/queue/id:000004,src:000001,op:havoc,rep:16,+cov'
Premature end of JPEG file
JPEG datastream contains no image

At this point, the fuzzer managed to synthesize the valid file header - and actually realized its significance. Using this output as the seed for the next round of fuzzing, it quickly starts getting deeper and deeper into the woods. Within several hundred generations and several hundred million execve() calls, it figures out more and more of the essential control structures that make a valid JPEG file - SOFs, Huffman tables, quantization tables, SOS markers, and so on:
$ ./djpeg '../out_dir/queue/id:000008,src:000004,op:havoc,rep:2,+cov'
Invalid JPEG file structure: two SOI markers
...
$ ./djpeg '../out_dir/queue/id:001005,src:000262+000979,op:splice,rep:2'
Quantization table 0x0e was not defined
...
$ ./djpeg '../out_dir/queue/id:001282,src:001005+001270,op:splice,rep:2,+cov' >.tmp; ls -l .tmp
-rw-r--r-- 1 lcamtuf lcamtuf 7069 Nov  7 09:29 .tmp

The first image, hit after about six hours on an 8-core system, looks very unassuming: it's a blank grayscale image, 3 pixels wide and 784 pixels tall. But the moment it is discovered, the fuzzer starts using the image as a seed - rapidly producing a wide array of more interesting pics for every new execution path:

Of course, synthesizing a complete image out of thin air is an extreme example, and not necessarily a very practical one. But more prosaically, fuzzers are meant to stress-test every feature of the targeted program. With instrumented, generational fuzzing, lesser-known features (e.g., progressive, black-and-white, or arithmetic-coded JPEGs) can be discovered and locked onto without requiring a giant, high-quality corpus of diverse test cases to seed the fuzzer with.

The cool part of the libjpeg demo is that it works without any special preparation: there is nothing special about the "hello" string, the fuzzer knows nothing about image parsing, and is not designed or fine-tuned to work with this particular library. There aren't even any command-line knobs to turn. You can throw afl-fuzz at many other types of parsers with similar results: with bash, it will write valid scripts; with giflib, it will make GIFs; with the file utility, it will create inputs that get flagged as ELF files, Atari 68xxx executables, x86 boot sectors, and UTF-8 with BOM. In almost all cases, the performance impact of instrumentation is minimal, too.

Of course, not all is roses; at its core, afl-fuzz is still a brute-force tool. This makes it simple, fast, and robust, but also means that certain types of atomically executed checks with a large search space may pose an insurmountable obstacle to the fuzzer; a good example of this may be:

if (strcmp(header.magic_password, "h4ck3d by p1gZ")) goto terminate_now;

In practical terms, this means that afl-fuzz won't have as much luck "inventing" PNG files or non-trivial HTML documents from scratch - and will need a starting point better than just "hello". To consistently deal with code constructs similar to the one shown above, a general-purpose fuzzer would need to understand the operation of the targeted binary on a wholly different level. There is some progress on this in academia, but frameworks that can pull this off across diverse and complex codebases in a quick, easy, and reliable way are probably still years away.

PS. Several folks asked me about symbolic execution and other inspirations for afl-fuzz; I put together some notes in this doc.