
September 14, 2010

The rise and fall of perfect security

Modern societies, however resilient, are built on remarkably shaky foundations: every single day, we all depend on the moral standards and the restraint of thousands of random strangers. The rules of this game are weakly enforced through a series of very imperfect deterrence mechanisms (less than 20% of all property crime is ever solved in the United States) - but in the end, our world is little more than an incredibly elaborate honor system that we all voluntarily participate in.

That's probably okay - we are programmed to play along, and this approach proved to be a smart evolutionary move. A degree of trust is essential to advancing our civilization at a reasonable pace; and paradoxically, despite the apparent weaknesses, the accelerated rate of progress makes us stronger and more adaptable as a species in the long run.

When it comes to our online existence, our attitudes seem drastically different, though: we only joke about the idea of using the evil bit - and yet we are perfectly comfortable with the fact that the locks on our doors can be opened with a safety pin. We scorn web developers who can't seem to get input validation right - even though we certainly don't test our morning coffee for laxatives or LSD. We are being irrational - but why?

Perhaps the reason is simple: mankind has had thousands of years to work out the rules for social interactions in the real world; societies collapsed, new ones emerged - with an increasingly complex system of moral values passed from one generation to another. The Internet is much younger in comparison, and in the end, very different from what we are accustomed to: your neighbor will not try to sneak into your house, but may have far fewer qualms about using your wireless network - a concept that feels much less like a crime. He will not condone theft - but likely feels ambivalent about making unlawful copies of digital content. He may frown upon crude graffiti - but just chuckle at the sight of a persistent XSS exploited on a popular website.

An argument can be made that the incentives in online interactions are so different from those in the physical realm that any such comparisons are simply inappropriate. But then, consider Wikipedia - a design that stands against everything we know about information security, yet demonstrates remarkable resilience in the face of attacks.

Here's a perverse thought, then: what if our pursuit of perfection in information security stems from a fundamental misunderstanding of how human communities can emerge and flourish? We are essentially preaching a model of a society based on complete distrust - but as the complexity of the online world approaches that of real life, the odds of being able to design perfectly secure software are rapidly diminishing; and all this paranoia is already taking its toll on how much we can achieve today.

If this model is not sustainable, will our online world share the fate of many other early civilizations - collapsing under the weight of its own imperfections, and ultimately, going the way of the dinosaur?

Perhaps; if so - new, more enlightened communities will certainly emerge.

9 comments:

  1. I believe this was exactly RMS's vision -- he refused to build security tools in GNU for this reason, arguing that computer users should be able to trust each other. GNU never became a real OS, so in the end his decision didn't matter.

  2. If *anyone* in the world who thought it might be a good idea, or just fun, could put a laxative in the coffee *you* drink, you would be testing it, though. But you left that out of the argument on purpose, so I barely dare mention it.

    More important, in my opinion, is the question of whether we are paying the toll of paranoia, or just directing huge amounts of money in the general direction of the booth with incredible inefficiency. Yes, a lot of people are professionally occupied with security matters, and that represents a significant combined amount of salaries. But programming languages in which buffer overflows, to take a concrete example, are impossible have existed for some time. Granted, that leaves the issue of bugs in the sandboxing itself open - but if we were at all serious about security, we would have switched to any of the existing statically or dynamically strongly typed solutions.

    But wait! Perhaps we wouldn't even have to switch anything at all! Perhaps we could just compile our existing, legacy-but-still-being-expanded-in-the-original-development-language-for-the-sake-of-efficiency programs with concretizations of research efforts such as CCured (http://hal.cs.berkeley.edu/ccured/), the Fail-Safe ANSI C compiler (http://web.yl.is.s.u-tokyo.ac.jp/~oiwa/thesis.pdf), that other memory-safe C compiler (http://www.seclab.cs.sunysb.edu/mscc/), or continuing efforts to reduce the run-time overhead of making sure that unsafe programs exit cleanly instead of letting the attacker put laxative in your coffee. The whole situation makes me think that we are merely misdirecting existing efforts.
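
    To make the buffer overflow point concrete, here is a minimal sketch in plain ISO C - an illustration of the general idea, not code taken from any of the projects linked above - contrasting the classic unchecked copy with the bounds-respecting equivalent that memory-safe dialects effectively enforce at run time:

        /* Illustrative sketch only - plain ISO C, not code from CCured or
           the other projects linked above. */
        #include <stdio.h>
        #include <string.h>

        /* The classic bug: strcpy() happily writes past the end of dst
           when src does not fit - silent memory corruption in plain C,
           but a clean run-time trap under a bounds-checking compiler. */
        static void copy_unchecked(char *dst, const char *src) {
            strcpy(dst, src);
        }

        /* The checked equivalent: writes at most dst_size bytes,
           truncating the input instead of overflowing the buffer. */
        static void copy_checked(char *dst, size_t dst_size, const char *src) {
            snprintf(dst, dst_size, "%s", src);
        }

        int main(int argc, char **argv) {
            char buf[16];
            const char *input = (argc > 1) ? argv[1] : "morning coffee";

            /* Safe here only because the literal fits in buf; with any
               longer string, the same call is undefined behavior. */
            copy_unchecked(buf, "laxative-free");

            /* Never overflows, no matter how long the input is. */
            copy_checked(buf, sizeof(buf), input);
            printf("%s\n", buf);
            return 0;
        }

    The checked variant costs one extra argument here; the compilers above aim for the same guarantee on existing code, with little or no change to its source.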

    The philosophical debate *should* be moot.

  3. The only way early civilizations survived was by building a big wall around the town and training for war every day. It's probably not a bad idea to adapt that to your network today.

  4. Do you think that computers and "action at a distance" and "virtual space" makes this a fundamentally different situation?

    That is, that our fundamental trust models break down with these new variables?

  5. It is definitely different in the sense that it's a very new setting for human actors: the anonymity of seemingly impersonal remote interactions is not something we are equipped to handle that well today.

    Maybe it's an obstacle we can't overcome. But then, part of me also wonders if we can fix this in any other way; writing sufficiently robust software is clearly also not our strong suit.

  6. Brings to mind an anecdote of some pre-WWII politician arguing against funding cryptanalysis work because "gentlemen don't read each other's mail."

    More than not having evolved rules for social interaction, we don't have any idea what kinds of risks we need to worry about on the Internet. Some of us think we do, but that's shortsighted: the risks change too fast, and seem to generally increase - unlike physical-world risks, which change very slowly and generally decrease.

    While I recognize that this is just idle musing: patterns in nonequilibrium complex systems like the Internet don't tend to be as simple as the one you are postulating.

  7. To be frank, I do not necessarily subscribe to the view outlined in my post; I simply find this to be an interesting "what if" thought.

  8. Never before has my property of real value been virtual and potentially instantaneously available to masses from every culture, social class, and age group (the most dangerous thing in the world is the pre-adolescent male).

    It's not a matter of the practicality and implementation of the technology I use -- trust is something that is earned, and technology's uses can always be perverted. These models of security built on distrust simply reflect both the everyday destruction of trust (often automated!) as it is given online, and the fact that all of these online neighbors haven't earned each other's trust yet. And won't - given the opportunity to participate, there will eventually always be "criminal behavior", online or otherwise. Hence the demand for perfect virtual deadbolts that can eventually be broken anyway, just like in the real world.

    Besides, there will always be a man to corrupt Hadleyburg. :)

  9. Your argument is flawed. My coffee *is* a laxative.
