Larry Osterman's WebLog

Confessions of an Old Fogey

What makes a bug a security bug?

In my last post, I mentioned that security bugs were different from other bugs.  Daniel Prochnow asked:

What is the difference between bug and vulnerability?

In my point of view, in a production environment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.

What do you think?

I answered in the comments, but I think the answer deserves a bit more commentary, especially when Evan asked:

“I’m curious to hear an elaboration of this.  System A takes information from System B.  The information read from System A causes a[sic] System B to act in a certain way (which may or may not lead to leakage of data) that is unintended.  Is this a security issue or just a bug?”

Microsoft Technet has a definition for a security vulnerability:

“A security vulnerability is a flaw in a product that makes it infeasible – even when using the product properly – to prevent an attacker from usurping privileges on the user’s system, regulating its operation, compromising data on it, or assuming ungranted trust.”

IMHO, that’s a bit too lawyerly, although the article does an excellent job of breaking down the definition and making it understandable.

Crispin Cowan gave me an alternate definition, which I like much better:

Security is the preservation of:

· Confidentiality: your secret stuff stays secret

· Integrity: your data stays intact

· Availability: your systems and data remain available

A vulnerability is a bug such that an attacker can compromise one or more of the above properties.

 

In Evan’s example, I’d lean toward calling it a security bug, but maybe not.  For instance, it’s possible that System A validates (somehow) that System B hasn’t been compromised.  In that case, it might be ok to trust the data read from System B.  That’s part of the reason for the wishy-washy language of the official vulnerability definition.

To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug.

When a user downloads a file from the Internet, they’re undoubtedly authorized to do that.  They’re also authorized to save the file to the local system.  However, the program that reads the file downloaded from the Internet cannot trust the contents of the file (unless it has some way of ensuring that the file contents haven’t been tampered with[1]).  So if there’s a file parsing bug in the program that parses the file, and there’s no check to ensure the integrity of the file, it’s a security bug.
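To make that concrete, here is a minimal sketch of the kind of parsing code this paragraph is talking about.  The record format, the field names and the MAX_RECORD limit are all made up for illustration; the point is only that nothing read from the file gets used until it has been checked:

#include <stdint.h>
#include <string.h>

#define MAX_RECORD 256   /* hypothetical limit; out must hold at least this many bytes */

/* Hypothetical format: a 4-byte length followed by that many bytes of payload. */
int parse_record(const uint8_t *file_data, size_t file_size, uint8_t *out)
{
    uint32_t length;

    if (file_size < sizeof(length))
        return -1;                       /* too short to even contain the length field */
    memcpy(&length, file_data, sizeof(length));

    /* Skipping these two checks is the security bug: the attacker picks length. */
    if (length > MAX_RECORD || length > file_size - sizeof(length))
        return -1;

    memcpy(out, file_data + sizeof(length), length);
    return 0;
}

Without those two checks a crafted file chooses the value of length, the copy runs past the end of out, and the attacker controls how far.  That is exactly the difference between a robustness bug and a security bug.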

 

Michael Howard likes using this example:

char foo[3];
foo[3] = 0;     /* out-of-bounds write: the only valid indexes are 0..2 */

Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.  Contrast that with:

struct
{
    int value;
} buf;
char foo[3];

_read(fd, &buf, sizeof(buf));      /* buf.value comes straight from the attacker's file */
foo[buf.value] = 0;                /* attacker-controlled index: writes a 0 wherever they choose */

That’s a 100% gen-u-wine security bug.
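For completeness, here is a hedged sketch of what a non-vulnerable version of that snippet might look like, assuming the intent is that the value read from the file selects an index into foo:

struct
{
    int value;
} buf;
char foo[3];

if (_read(fd, &buf, sizeof(buf)) != sizeof(buf))
    return;                                     /* short read: don't touch buf at all */
if (buf.value < 0 || buf.value >= (int)sizeof(foo))
    return;                                     /* reject out-of-range indexes from the file */
foo[buf.value] = 0;                             /* the write now stays inside foo */

The particular checks don’t matter much; what matters is that nothing the attacker supplied is used until it has been validated against what the code can actually handle.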

 

Hopefully that helps clear this up.

 

 

[1] If the file is cryptographically signed with a signature from a known CA and the certificate hasn’t been revoked, the chances of the file’s contents being corrupted are very small, and it might be ok to trust the contents of the file without further validation. That’s why it’s so important to ensure that your application updater signs its updates.
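To put that in code terms, here is a minimal sketch of where such a check belongs.  The helper functions are placeholders, not real APIs; substitute whatever signature-verification facility your platform provides:

/* Hypothetical helpers; stand-ins for your platform's real signature APIs. */
int signature_chains_to_trusted_ca(const char *path);   /* non-zero: signed by a known CA */
int signing_cert_is_revoked(const char *path);          /* non-zero: the certificate was revoked */

int safe_to_parse_downloaded_file(const char *path)
{
    if (!signature_chains_to_trusted_ca(path))
        return 0;            /* unsigned, or signed by someone we don't trust */
    if (signing_cert_is_revoked(path))
        return 0;            /* signed, but the signature can no longer be trusted */
    return 1;                /* only now is it reasonable to parse the contents */
}

Everything that parses the file should sit behind a gate like this; parsing first and checking the signature afterwards defeats the purpose.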


  • Hi Larry,

    I'm not entirely convinced that a security bug necessarily is worse than an "ordinary" bug. Obviously it will all depend on the bug in question (security or otherwise), but in many ways one can at least (try to) architect oneself around security bugs.

    E.g. imagine you have a database product where, for some reason or other, anyone with network access can craft a UDP packet that tips it over. This is obviously a bad security bug - but you can protect your database by placing a firewall in front of it - not eliminating the security bug, but at least reducing the risk of someone being able to send that UDP packet in the first place.

    On the other hand, if you have an "ordinary" bug that, say, stops your machines from starting after a certain date - then regardless of whether anyone wants to attack you or not, you're in trouble.

    Obviously a purist might want to call the latter example an issue and not a risk - but you probably get the point I'm getting at.

    -Ash

  • I personally think you're still emphasizing a false dichotomy.

    Consider the string of bugs that, chained together, produced the exploitable Flash vulnerability:

    http://www.matasano.com/log/1032/this-new-vulnerability-dowds-inhuman-flash-exploit/

    In the real world, code containing something like the out of bounds dereference you showed above would be buried inside a much more complex piece of software, in which it would be infeasible to fully unravel the chain and state conclusively/provably that "this is not a security bug".

  • Ash, I'm not saying that they are "worse".  

    I'm saying that the risk associated with a security bug is greater than the risk associated with a non-security bug, because some unauthorized person can exploit a security bug to do "bad things" (for an unspecified value of "bad things").

    Remember that at the end of the day, once a product has shipped, applying a bug fix carries a certain amount of risk.  Organizations need to weigh the risk associated with taking the fix.  

    If a bug is a security bug, that increases the risk of NOT taking the fix.  

  • Actually, you have to do a whole lot more than just "ensure that your application updater signs its updates".  I know this is probably not what you meant, but saying that a signature check alone is enough to trust file contents is dangerous.

    Just because code is signed does not mean that it is free of bugs.  Furthermore, even enforcing a rule that code must be signed with your key is not particularly enough to ensure that everything is kosher, depending on how your software update mechanism works.

    The problem is that, assuming you sign all of your updates, and you have an update that introduces a security bug, and then another update that fixes said security bug, a malicious user in the update server path might be able to just feed you the signed binaries *with security bug present*, which will cause a software update mechanism that consists of simply "check signature, replace file" to happily reintroduce security holes at the whim of any attacker.

    This might sound a bit farfetched, but it's a very real problem (in fact, one that many Linux distributions are hard hit by, with centralized package management systems that spider out to third party mirrors hosting "signed" packages).  The problem is made even worse when you consider that there are scenarios where you may want to allow a user to run an old version of a particular piece of software, which even happens with Microsoft software (say, if you want to run Windows Vista SP0 for a while still, even though Windows Vista SP1 is out).  Blind "new file version is higher than old file version" checks don't really cut it either.  This tends to be even more common with third party software than with Microsoft software out in the real world, in my experience.

    The unfortunate fact is that updating software securely is very hard to do, and it's a whole lot more complex than simply slapping a digital signature check on the whole process and calling it done.  And this also assumes that the process running the signature check has ensured that the update file is at a secure location before it checks the signature (so that a user can't exploit a time-of-check/time-of-use race if the update was, say, running from the user's %TEMP% directory), plus all the "usual" local security problems, which are a whole other, non-trivial can of worms.

    - S

  • Skywing, as always, you're right.  I seriously glossed over the difficulty associated with writing an updater.  
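    To illustrate Skywing's point, here is a hedged sketch of the shape of the checks an updater needs beyond the signature itself.  Every helper name here is made up, and the simple version comparison still glosses over the legitimate "run an older version" scenarios he mentions:

    /* Hypothetical helpers, not a real update API. */
    int  stage_to_protected_location(const char *src, char *dst, size_t dstlen);
    int  signature_is_valid(const char *path);
    long package_version(const char *path);          /* version carried inside the signed package */
    long installed_version(void);

    int accept_update(const char *downloaded_path)
    {
        char staged[260];

        /* 1. Move the file somewhere the attacker can no longer swap it out. */
        if (!stage_to_protected_location(downloaded_path, staged, sizeof(staged)))
            return 0;
        /* 2. Check the signature on the staged copy, not the original. */
        if (!signature_is_valid(staged))
            return 0;
        /* 3. Refuse rollbacks: a correctly signed *old* package can reintroduce fixed holes. */
        if (package_version(staged) <= installed_version())
            return 0;
        return 1;                                     /* only now install from the staged copy */
    }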

  • I agree 100% with you up to the conclusion that a security bug increases the risk of not taking the fix.

    A security bug usually implies someone else can do bad things to your system; however, you're not guaranteed that someone will try that.

    A functional bug, on the other hand, usually implies that bad things can happen to your system when you're using it the way you're supposed to.

    My reasoning is that you are guaranteed to use your system, but you're not guaranteed that someone will try to hack your system. Thus the likelihood of being hit by a functional bug is theoretically higher than being hit by a security bug.

    Add in the ability to mitigate security bugs to some extent by implementing other measures, and I see not keeping up to date with functional bug fixes as a bigger risk than security bugs.

    This shouldn't take any glory or attention away from security bugs. But I'd like customers to pay more attention to fixing functional bugs than they are doing today. Just applying the latest service pack would be a nice start.

    -Ash

  • Ash, I think we're in essentially total agreement.  I'd love it if people picked up the latest service packs.  And if they kept their machines up-to-date.

    I wish for lots of things.  

  • >> Security is the preservation of:

    >> · Confidentiality: your secret stuff stays secret

    >> · Integrity: your data stays intact

    >> · Availability: your systems and data remain available

    If that is the definition of security then nearly every version of Windows that I have used is insecure.

    "A vulnerability is a bug such that an attacker can compromise one or more of the above properties"

    With that definition, vulnerability is different from insecure.  With that definition, even if every vulnerability is removed, Windows will still be insecure.  I wonder if Crispin Cowan might want to change his definitions.

    'If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).'

    Wow, you and I agree.  But do you know any way to persuade your employer to agree?

    > Michael Howard likes using this example:

    > char foo[3];

    > foo[3] = 0;

    > Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.

    Nope or maybe not nope, because that example looks rather Flashy.

  • As always, these discussions are very interesting.

    Should "->" be "." in foo[buf->value] = 0 in your example?

  • Depending what's declared between foo and the buggy assignment, that could actually be a security bug if it overwrites a security-related value.

  • I don't mind admitting that the example with the struct vs. the array doesn't jump out at me as a bug at all, let alone a security bug. Heck - it's 20 years since I wrote C in anger, and even then, "mild irritation" might have been a more accurate claim.

    I'm guessing that the second example is a security bug because it allows the bad guy to decide where he wants the 0 to go. The assumption here is that in the first scenario he can't combine the ability to write his 0 in an unexpected place with some other exploit to produce a problem.

    Did I miss the point?

  • Dominic - the bug allows an attacker to write a 0 to any place in memory he wants.  Just having the ability to write a 0 at an arbitrary location in memory can be used to create a remote code execution exploit even without other exploits.  

    Just writing a 0 is sufficient.
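    To make the "a single zero is enough" point concrete, here is a deliberately simplified, layout-dependent sketch.  The struct, the field names and the index are all invented for illustration, and the out-of-bounds write is of course undefined behaviour:

    #include <stdio.h>

    /* Hypothetical layout: a small buffer followed by a security-relevant flag. */
    struct session {
        char scratch[8];
        int  must_revalidate;   /* non-zero: re-check the user's credentials */
    };

    int main(void)
    {
        struct session s = { "demo", 1 };
        int index = 8;          /* imagine this index arrived in untrusted input */

        printf("must_revalidate before: %d\n", s.must_revalidate);
        s.scratch[index] = 0;   /* on a typical little-endian layout the zero lands in
                                   the first byte of the flag, silently clearing it */
        printf("must_revalidate after:  %d\n", s.must_revalidate);
        return 0;
    }

    Aim the same single zero at a function pointer, a vtable slot or stack metadata instead of a flag, and it can become the foothold for the remote code execution described above.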

  • > If that is the definition of security then nearly every version of Windows that I have used is insecure.

    This surprises you?

    Crispin is actually just quoting the Department of Defense definition of security.  There was excellent work done on the theory of security in the pre-Windows era, most of which has been forgotten and yet to be rediscovered.

  • This line strikes me:

    >>> To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug. <<<

    I believe that UAC muddies this just a little bit now.  Since certain apps can run elevated, would you consider this sort of bug in one of those apps a vulnerability now, since it's a different level of "user" access?  I find that interesting - that setup programs can now have "security vulnerabilities" where bugs once existed.
