You don’t have to be faster than the bear

Note – this post disappeared during the blog upgrade and was recovered from a search cache.

Just got done reading Michal Zalewski's really interesting post on the Zero Day blog, found here.

His premise, which I don't debate, is that we've done a lousy job of defining software security on a scholarly basis. He goes on to point out, often humorously, the flaws with many of the existing approaches.

My belief is that a lot of this is because we are too often computer _scientists_, and not software _engineers_. Engineering allows for failure. We don't attempt to build aircraft that cannot possibly fall out of the sky, just aircraft that don't do that very often. We have also built many aircraft where we didn't have a perfect understanding of the science involved. We flew at supersonic speeds for many years before we understood the math. Even now, the Navier-Stokes equations that govern air flow (to be precise, fluid flow) aren't solvable in closed form – to non-mathematicians, that means we can only approximate the lift on a wing. Computers help with the problem, since numerical analysis gives better results than we could get when we had to use equations and calculators or slide rules. Generally, this means we do something within some error bars.

Back to software security, I think it's all relative.

Given the attack tools that we have at any given moment, can a piece of software be attacked using fewer resources than the value of the resource that the software protects? If so, then it's insecure; if not, then it's good enough for now.
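To make that squishy criterion a bit more concrete, here's a minimal sketch in Python. The function name, the units, and the numbers are all my own illustrative assumptions, not anything from the post – it's just the comparison stated as code.

```python
def is_good_enough_for_now(attack_cost: float, asset_value: float) -> bool:
    """Relative-security test: with today's attack tools, the software is
    'good enough for now' if mounting an attack costs more than the value
    of whatever the software protects. Both quantities are in the same
    (hypothetical) units, e.g. dollars of attacker effort vs. dollars at risk."""
    return attack_cost > asset_value

# Illustrative numbers only.
print(is_good_enough_for_now(attack_cost=50_000, asset_value=5_000))    # True  -> acceptable for now
print(is_good_enough_for_now(attack_cost=5_000, asset_value=500_000))   # False -> insecure
```

Note that the answer can flip over time as attack tools improve or the protected asset becomes more valuable, which is the whole point of "for now."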

That sort of squishy reasoning makes people trained in Boolean logic really uncomfortable, since in our world, everything ought to be a 0 or 1, none of this maybe stuff.

The core problem is that we've known for some time, dating back to the JPL studies, that there will always be some number of errors per thousand lines of code. We can get more errors with sloppy development practices, and fewer with better development practices. Even Daniel J. Bernstein makes security mistakes.

So a given piece of non-trivial software will always have some number of security flaws. The next problem is how much work it takes to find one or more of them – this is one of the most neglected aspects of Saltzer and Schroeder's security design principles: work factor. We then factor in what's protected by the software, and throw in a splash of economics theory that says people will invest their time rationally in terms of perceived rewards.
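A back-of-the-envelope sketch of that reasoning, again in Python. Every number here (defect density, hours per flaw, hourly rate, payoff) is invented purely for illustration; the post doesn't cite any specific figures.

```python
def expected_flaws(kloc: float, defects_per_kloc: float) -> float:
    """Non-trivial code always ships with some defect density; better
    practices lower the rate, they don't drive it to zero."""
    return kloc * defects_per_kloc

def attacker_bothers(hours_per_flaw: float, hourly_rate: float,
                     expected_payoff: float) -> bool:
    """Work factor plus a splash of economics: a rational attacker
    invests the effort only if the perceived payoff beats its cost."""
    return expected_payoff > hours_per_flaw * hourly_rate

# Illustrative numbers only.
latent = expected_flaws(kloc=500, defects_per_kloc=0.5)          # ~250 latent defects
worth_it = attacker_bothers(hours_per_flaw=400, hourly_rate=100,
                            expected_payoff=1_000_000)           # True for this payoff
print(latent, worth_it)
```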

What this boils down to is that if few people use your software, then you don't really need to put much effort into security. No one's attacking you, so vulnerabilities don't matter. This seems to be the approach of some companies out there, and it tends to become a problem if you get popular, or if you just annoy the attackers, who then make a project out of you.

If a lot of people use your software, like the stuff I work on, then we should put a lot of effort into security – and we do.

Another point from Michal's post is that a networked system of computers is really a different problem than a fixed piece of software. If you're trying to secure a network, then it is inhabited by users and admins, both of whom inject random behaviors that we can't model well. To work with that, you need what I term a security dependency analysis, where you look at the escalation paths that are present independently of the potential for vulnerabilities on any given node.
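As a rough illustration of what such a dependency analysis might look like (my sketch, not a method described in the post), you can model "a compromise of X yields a path onto Y" as a directed graph and ask what an attacker can reach from one foothold, regardless of which vulnerability provided that foothold. The node names and edges below are hypothetical.

```python
from collections import deque

# Hypothetical escalation edges: shared credentials, admin-of relationships,
# trust relationships, management agents, and so on.
escalates_to = {
    "workstation": ["file-server"],
    "file-server": ["backup-server"],
    "backup-server": ["domain-controller"],
    "web-server": ["db-server"],
    "domain-controller": ["workstation", "web-server", "db-server"],
}

def reachable_from(start: str) -> set:
    """Everything an attacker can escalate to from one compromised node,
    independent of how that first node was compromised."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in escalates_to.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(reachable_from("workstation"))  # the blast radius of one workstation compromise
```

The point of looking at the graph this way is that the escalation paths exist whether or not any particular node has a known vulnerability today.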

Comments

  • Anonymous
    May 28, 2010
    What? But that is exactly what I and others have been expecting all along ... Are you saying that the fact that you don't have to outrun the bear has only so recently reached you, David? [dcl] No, not at all. That part is obvious. What is not so obvious is that we keep expecting security to be binary - either something is completely secure, or it is insecure. In reality, it is a function of time.

  • Anonymous
    June 03, 2010
    The link to the blog article, lost from the original article, is: www.zdnet.com/.../6503