What makes a bug a security bug?
In my last post, I mentioned that security bugs were different from other bugs. Daniel Prochnow asked:
What is the difference between bug and vulnerability?
In my point of view, in a production environment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.
What do you think?
I answered in the comments, but I think the answer deserves a bit more commentary, especially when Evan asked:
“I’m curious to hear an elaboration of this. System A takes information from System B. The information read from System A causes a[sic] System B to act in a certain way (which may or may not lead to leakage of data) that is unintended. Is this a security issue or just a bug?”
Microsoft Technet has a definition for a security vulnerability:
“A security vulnerability is a flaw in a product that makes it infeasible – even using the product properly – to prevent an attacker from usurping privileges on the user’s system, regulating its operation, compromising data on it or assuming ungranted trust.”
IMHO, that’s a bit too lawyerly, although the article does an excellent job of breaking down the definition and making it understandable.
Crispin Cowan gave me an alternate definition, which I like much better:
Security is the preservation of:
· Confidentiality: your secret stuff stays secret
· Integrity: your data stays intact
· Availability: your systems and data remain available
A vulnerability is a bug such that an attacker can compromise one or more of the above properties.
In Evan’s example, I think there probably is a security bug, but there might not be. For instance, it’s possible that System A validates (somehow) that System B hasn’t been compromised. In that case, it might be OK to trust the data read from System B. That’s part of the reason for the wishy-washy language of the official vulnerability definition.
To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor. If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless). If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug.
When a user downloads a file from the Internet, they’re undoubtedly authorized to do that. They’re also authorized to save the file to the local system. However, the program that reads the downloaded file cannot trust its contents (unless it has some way of ensuring that the file contents haven’t been tampered with[1]). So if that program has a bug in its file-parsing code, and there’s no check to ensure the integrity of the file, it’s a security bug.
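To make that concrete, here's a minimal sketch of the kind of check a parser has to make before it trusts a length field it read out of a downloaded file. The names and the record format are made up for illustration; the shape of the check is what matters:

#include <stdint.h>
#include <string.h>

#define MAX_NAME 64

/* Parse a record of the form [1 length byte][name bytes] from an untrusted
   buffer. Returns 0 on success, -1 if the input is malformed. */
int ParseRecord(const uint8_t *data, size_t dataLength, char name[MAX_NAME])
{
    if (dataLength < 1)
        return -1;                      /* not even a length byte */

    size_t nameLength = data[0];        /* attacker-controlled length */

    if (nameLength >= MAX_NAME)         /* leave room for the terminator */
        return -1;
    if (nameLength > dataLength - 1)    /* claims more bytes than were supplied */
        return -1;

    memcpy(name, data + 1, nameLength);
    name[nameLength] = '\0';
    return 0;
}

The attacker controls that length byte, so the parser has to prove it describes something that fits both in the input it was given and in the destination buffer before it copies a single byte.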
Michael Howard likes using this example:
char foo[3];
foo[3] = 0;
Is it a bug? Yup. Is it a security bug? Nope, because the attacker can’t control anything. Contrast that with:
struct
{
    int value;
} buf;
char foo[3];
_read(fd, &buf, sizeof(buf));
foo[buf.value] = 0;
That’s a 100% gen-u-wine security bug.
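For comparison, here's a minimal sketch of what a fix might look like (same hypothetical fd, inside some enclosing function); the point is simply that the attacker-controlled value gets validated before it's ever used as an index:

struct
{
    int value;
} buf;
char foo[3];

/* buf comes from outside the trust boundary, so buf.value is
   attacker-controlled until we've checked it. */
if (_read(fd, &buf, sizeof(buf)) == (int)sizeof(buf) &&
    buf.value >= 0 &&
    buf.value < (int)sizeof(foo))
{
    foo[buf.value] = 0;    /* index is now known to be in range */
}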
Hopefully that helps clear this up.
[1] If the file is cryptographically signed with a certificate issued by a known CA and the certificate hasn't been revoked, the chances of the file's contents having been tampered with are very small, and it might be OK to trust the contents of the file without further validation. That's why it's so important to ensure that your application updater signs its updates.
Comments
Anonymous
August 18, 2008
Hi Larry, I'm not entirely convinced that a security bug is necessarily worse than an "ordinary" bug. Obviously it will all depend on the bug in question (security or otherwise), but in many ways one can at least (try to) architect oneself around security bugs. E.g. imagine you have a database product where, for some reason or other, anyone with network access can craft a UDP packet and flip it over. This is obviously a bad security bug - but you can protect your database by placing a firewall in front of it - not eliminating the security bug, but at least reducing the risk of someone being able to send that UDP packet in the first place. On the other hand, if you have an "ordinary" bug that, say, stops your machines from starting after a certain date - then regardless of whether anyone would like to attack you or not, you're in trouble. Obviously a purist might want to call the latter example an issue and not a risk - but you probably get the point I'm getting at. -Ash
Anonymous
August 18, 2008
The comment has been removed
Anonymous
August 18, 2008
Ash, I'm not saying that they are "worse". I'm saying that the risk associated with a security bug is greater than the risk associated with a non-security bug, because some unauthorized person can exploit a security bug to do "bad things" (for an unspecified value of "bad things"). Remember that at the end of the day, once a product has shipped, applying a bug fix carries a certain amount of risk of its own. Organizations need to weigh that risk before taking the fix. If a bug is a security bug, that increases the risk of NOT taking the fix.
Anonymous
August 18, 2008
Actually, you have to do a whole lot more than just "ensure that your application updater signs its updates". I know this is probably not what you meant, but saying that a signature check alone is enough to trust file contents is dangerous. Just because code is signed does not mean that it is free of bugs. Furthermore, even enforcing a rule that code must be signed with your key is not particularly enough to ensure that everything is kosher, depending on how your software update mechanism works.

The problem is that, assuming you sign all of your updates, and you have an update that introduces a security bug, and then another update that fixes said security bug, a malicious user in the update server path might be able to just feed you the signed binaries with the security bug present, which will cause a software update mechanism that consists of simply "check signature, replace file" to happily reintroduce security holes at the whim of any attacker. This might sound a bit farfetched, but it's a very real problem (in fact, one that many Linux distributions with a centralized package management system that spiders out to third party mirrors hosting "signed" packages are hard hit by).

The problem is made even worse when you consider that there are scenarios where you may want to allow a user to run an old version of a particular piece of software, which even happens with Microsoft software (say, if you want to run Windows Vista SP0 for a while still, even though Windows Vista SP1 is out). Blind "new file version is higher than old file version" checks don't really cut it either. This tends to be an even more common problem with third party software than with Microsoft software out in the real world, in my experience.

The unfortunate fact is that updating software securely is very hard to do, and it's a whole lot more complex than simply slapping a digital signature check on the whole process and calling it done. And this also assumes that the process running the signature check has ensured that the update file is at a secure location before it checks the signature (so that a user can't exploit a time-of-check-to-time-of-use race if the update was, say, running from the user's %TEMP% directory), and all the "usual" local security problems, which is a whole other, non-trivial can of worms.
- S
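For what it's worth, here's a rough sketch of the kind of anti-rollback floor Skywing describes, layered on top of the signature check itself. The helper functions are entirely made up (stand-ins for whatever signature and version checking your platform actually provides), and as he points out, even this doesn't cover every scenario:

/* Hypothetical helpers - not real APIs. */
int VerifyFileSignature(const char *path);                      /* nonzero if signed with our key */
int GetPackageVersion(const char *path, unsigned int *version); /* nonzero on success */

/* A validly signed but old package with a known hole would pass the
   signature check alone, so the updater also refuses anything below the
   minimum version it is still willing to accept. */
int IsUpdateAcceptable(const char *updatePath,
                       unsigned int installedVersion,
                       unsigned int minimumAllowedVersion)
{
    unsigned int candidateVersion;

    if (!VerifyFileSignature(updatePath))
        return 0;                       /* not signed by us */

    if (!GetPackageVersion(updatePath, &candidateVersion))
        return 0;                       /* can't tell what it is */

    if (candidateVersion < minimumAllowedVersion)
        return 0;                       /* rollback to a known-bad build */

    if (candidateVersion <= installedVersion)
        return 0;                       /* not actually an upgrade */

    return 1;
}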
Anonymous
August 18, 2008
Skywing, as always, you're right. I seriously glossed over the difficulty associated with writing an updater.
Anonymous
August 18, 2008
The comment has been removed
Anonymous
August 18, 2008
Ash, I think we're in essentially total agreement. I'd love it if people picked up the latest service packs. And if they kept their machines up-to-date. I wish for lots of things.
Anonymous
August 18, 2008
The comment has been removed
Anonymous
August 19, 2008
The comment has been removed
Anonymous
August 19, 2008
The comment has been removed
Anonymous
August 19, 2008
I don't mind admitting that the example with the struct vs. the array doesn't jump out at me as a bug at all, let alone a security bug. Heck - it's 20 years since I wrote C in anger, and even then, "mild irritation" might have been a more accurate claim. I'm guessing that the second example is a security bug because it allows the bad guy to decide where he wants the 0 to go. The assumption here is that in the first scenario he can't combine the ability to write his 0 in an unexpected place with some other exploit to produce a problem. Did I miss the point?
Anonymous
August 19, 2008
The comment has been removed
Anonymous
August 19, 2008
> If that is the definition of security then nearly every version of Windows that I have used is insecure.
This surprises you? Crispin is actually just quoting the Department of Defense definition of security. There was excellent work done on the theory of security in the pre-Windows era, most of which has been forgotten and has yet to be rediscovered.
Anonymous
August 20, 2008
The comment has been removed
Anonymous
August 20, 2008
The comment has been removed
Anonymous
August 20, 2008
The comment has been removed
Anonymous
August 21, 2008
The comment has been removed
Anonymous
August 21, 2008
The comment has been removed
Anonymous
August 21, 2008
The comment has been removed
Anonymous
August 22, 2008
The comment has been removed