Larry Osterman's WebLog

Confessions of an Old Fogey

Is a DoS a valid security problem?


Time for some more controversy...

Another Microsoft developer and I recently had a fairly long email discussion about a potential problem.  It turns out that it might be possible to craft a file in such a fashion that it would cause some internal applications to crash.  The details aren't really important, but some of the comments that this developer (let's call him "R.") made were rather interesting and worthy of discussion.

In this case, the problem could not be exploited - the worst that could happen is a crash, which is considered a local DoS attack.  (A classic buffer overrun can also cause a crash, but because a buffer overrun lets the attacker execute arbitrary code, it's classified instead as an elevation of privilege attack - the overrun allows the attacker to elevate from unauthenticated to authenticated.)
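
To make the distinction concrete, here's a minimal sketch of the kind of bug under discussion (the file format, the field names, and parse_file are all hypothetical - this is not the actual code R. and I were discussing).  The parser trusts a count field taken from the file, so a crafted file can walk it off the end of its buffer and crash the process, but nothing more:

    #include <stdio.h>

    /* Hypothetical file format: a 4-byte record count followed by the
     * records themselves. */
    struct header {
        unsigned int record_count;
    };

    #define MAX_RECORDS 100

    int parse_file(FILE *f)
    {
        struct header h;
        unsigned int records[MAX_RECORDS];

        if (fread(&h, sizeof(h), 1, f) != 1)
            return -1;
        if (fread(records, sizeof(records[0]), MAX_RECORDS, f) == 0)
            return -1;

        /* BUG: record_count is trusted.  A crafted file with a huge count
         * walks this loop far past the end of records[]; the out-of-bounds
         * READ eventually hits an unmapped page and the process dies with
         * an access violation.  Nothing attacker-controlled gets written,
         * so it's "just" a crash - a local DoS.  (If the loop WROTE file
         * data past the buffer instead, that would be the classic
         * exploitable overrun.) */
        unsigned int sum = 0;
        for (unsigned int i = 0; i < h.record_count; i++)
            sum += records[i];

        return (int)sum;
    }

The distinction lives in that comment: an out-of-bounds read ends at an access violation (a local DoS), while an out-of-bounds write of attacker-controlled data is the classic exploitable overrun.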

The SDL process states that this kind of DoS attack isn't important enough to rate a security bulletin; instead it's classified as being in the "fix in the next service pack" category (IIRC, remote DoS attacks do rate a security bulletin, but I may be wrong on this one).

So as I mentioned, we're discussing to what level the system should protect itself from this kind of issue.  R. was quite adamant that he considered these kinds of problems to be "non-issues" because the only thing that could happen was that an app could crash.

The rules of the road in this case are quite simple:  If it's possible to craft a file that would cause a part of the system to hang or crash, it's got to be fixed in the next service pack after the problem's discovered (or RTM if it's found in testing, obviously). 

David LeBlanc has written a number of articles where he discusses related issues (see his article "Crashes are Bad, OK?" for an example). 

But R.'s point was essentially the same as David's: crashes (and hangs) are bad, to a point, but are they really "security bugs"?  There are tons of ways to trick applications into doing bad things; do all of them classify as problems that MUST be fixed, even if they're not exploitable?


Just something to think about on a Thursday afternoon.

  • You've got to remember we've bought the product; it should just work, and should be fixed asap - don't make us pay to be your beta tester (also, we know nobody's perfect, thus a best effort is good enough).  If the product is/was free, that's a different matter.  With that said, if the bug does not create a security issue, then it's of lower priority than a security fix! my .0000001 cents worth.

  • Crashes, such as those caused by a NULL pointer dereference, may not directly facilitate the execution of arbitrary code, but they can be enablers.  For example, if the unhandled exception filter can be hijacked, a NULL pointer dereference can be indirectly used to leverage code execution.

    This paper gives an example of this type of scenario:

    http://www.uninformed.org/?v=4&a=5&t=sumry
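
    As an aside for readers following along, here's a minimal Win32 sketch of the mechanism that paper builds on (MyFilter is a stand-in; a real attack would also need some way to overwrite the registered pointer).  Every otherwise-unhandled crash in a process, including a null pointer dereference, is routed through the last filter registered with SetUnhandledExceptionFilter - which is why a hijacked filter pointer can turn a "benign" crash into code execution:

        #include <stdio.h>
        #include <windows.h>

        /* The process-wide unhandled exception filter.  If an attacker can
         * overwrite this registered pointer, any crash - even a plain null
         * dereference - hands them control. */
        static LONG WINAPI MyFilter(EXCEPTION_POINTERS *info)
        {
            printf("filter invoked, exception code 0x%08lx\n",
                   info->ExceptionRecord->ExceptionCode);
            return EXCEPTION_EXECUTE_HANDLER;   /* terminate quietly */
        }

        int main(void)
        {
            SetUnhandledExceptionFilter(MyFilter);

            volatile int *p = NULL;
            *p = 42;    /* null deref: control flows into MyFilter */
            return 0;
        }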

  • Well, obviously it depends on which app crashes. If a malformed file causes Word or WMP to crash, that's annoying. If it causes CSRSS to crash, that's bad.

  • Keith, absolutely.  And that's included in the rating.  If an app can crash csrss, it qualifies for a higher class (I think that crashing csrss is "important", which means it's worth a bulletin, but I'm not 100% sure).  But if you can crash an application?

    And null, you're right - there ARE exploitable cases that can be launched with exceptions; it's very relevant.

    But after you've done the analysis and realized that you can't exploit it, IS the bug still a security bug?

  • Larry: The idea behind that paper is that it allows you to turn things that you didn't think were exploitable into exploitable bugs.

    IOW, that technique allows you to turn a random null pointer dereference in IE, which researching it would lead you to believe - "oh, that's just a crash bug, nothing more" - into remote code execution.

    It is difficult to say for certain that something is not exploitable.  Many people (Microsoft included!) have written off things like that class of null pointer deref bug as non-exploitable, and then someone proves that, by being clever, you -can- exploit it after all.

    I suppose what I am trying to say is that it can be very hard to definitively say something is not exploitable.  Even if you've done all your homework, can you guarantee that nobody will come up with a way to turn the bug into remote code execution at a later point in time?

    For example, before you had read that paper, would you consider a null pointer dereference in IE as exploitable?  What about after?

  • Skywing, I know.  Believe me, I've read (and enjoyed reading) all of your published papers.  You've really done some very impressive work.  Honestly.

    But in this case, how is allocating 2.8G of physical RAM exploitable?  It will certainly DoS a system (especially if the allocation is in a process like explorer).

    How about tricking a system API into reading through a 4G file 8 bytes at a time?  And again, what if the application in this situation is explorer?  Is that exploitable?

    Both of these qualify as DoS attacks, but by no technique that I'm aware of are they exploitable.  They're just profoundly annoying.  So are these security bugs?  (A sketch of the first scenario follows this comment.)

    In addition, your attack requires that you somehow gain access to the system-level unhandled exception filter.  In many (most?) cases, that isn't possible.  In IE, which hosts 3rd party script that can run code that could allow you to hijack the top-level exception handler, you're right.  In the case of explorer, or mplay32, or notepad, I'm not so sure.  Certainly you'd have to have the ability to run code on the user's box before you could hijack the exception handler, and if you can write code that runs on the user's box, is there really a difference between forcing an AV that's handled by a hijacked exception handler and just running the code?

    If you can find an existing exception handler that has a bug that allows arbitrary code execution, I think it's fair to claim that the bug that needs to be fixed is the buggy exception handler, NOT the code that causes the AV (you should really fix both, but IMHO, the security hole is in the buggy exception handler, not the actual AV).
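
    Here is a minimal sketch of the first scenario above (the format and load_table are hypothetical): a parser that trusts a size field from the file and allocates whatever the file asks for.  Whether the allocation fails outright or succeeds and drags the machine into the pagefile depends on the platform, but either way no attacker-controlled code runs - it's a pure DoS:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical file format: a 4-byte element count, then the
         * elements themselves. */
        int load_table(FILE *f)
        {
            unsigned int count;
            if (fread(&count, sizeof(count), 1, f) != 1)
                return -1;

            /* BUG: count comes straight from the file.  A crafted file
             * with count = 0xAFFFFFFF asks for ~2.8G.  On some platforms
             * malloc simply fails; on others it succeeds, and the memset
             * commits the pages and pushes everything else out to the
             * pagefile.  Profoundly annoying either way, but no code
             * ever runs. */
            char *table = malloc((size_t)count);
            if (table == NULL)
                return -1;
            memset(table, 0, count);

            /* ... parse elements ... */
            free(table);
            return 0;
        }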

  • Is it a security bug?  Assuming the only possible result is a crash and no further exploit is possible, it depends on the impact of the resulting crash.

    Say the program is one that a user may have unsaved work in.  Just about anything in the Office suite probably meets that test.  If this is true, by tricking you into opening a malicious file, I might cost you the work since your last save.  That said, such an attack would be a low-severity security bug.  Very annoying, yes, but I'd agree that it should be put off until the next planned release.

  • There's another reason to fix these "yeah, it probably isn't a security problem, but..." bugs --- making it easier for other developers to reason about your code.

    Imagine that you're doing a security review in a few years' time, and come across this code again.  Do you want to go through the same mental process of tracing down what could go wrong when the file is malformed, then remember that it's that old DoS?  Or do you just want to see that the code handles the malformed file, then move on to your next review task?

  • Adrian, I'm going to twist the problem a bit, just for chuckles  (please note, this is NOT the case - it's just a hypothetical). What if the file in question could be created by the most popular application for your platform?  

    In other words, if you fixed the file corruption problem that caused the DoS problem it would render some number of documents produced by your biggest customer unreadable?

    What do you do then?

    This isn't a simple problem, that's why R. and I have been having such a long discussion about it.

  • Larry: The point that I was trying to make with that comment was that it is dangerous to get into the mode where you can write off bugs as non-exploitable and be done with it in this day and age.  I'm not claiming that the UEF issue can be used to turn everything into an exploitable bug or anything like that; I'm making the more general observation that the same kind of thinking that led to lots of "non-exploitable" crashes in IE not being fixed quickly can very quickly come around to bite you if somebody figures out a clever way to turn your "benign" crash into an exploitable bug later on, in a way you never considered.

    Now, certainly, some (maybe even most) types of crash bugs just will never ever turn into an exploitable remote code scenario, but it's a fair bet that some of those kinds of problems will, especially when you start doing things like using a combination of several different bugs to reach an exploitable situation (like as outlined in the UEF paper).

    For instance, this is a good reason why local privilege escalation bugs need to get patched sooner rather than later.  While on its own a local privilege escalation bug might not seem like a big deal, a code execution vulnerability in a lower-privileged service can be combined with it to create a "remote root" type of scenario.  (I realize that this isn't what you're describing; I'm just trying to draw a parallel as to how several unrelated problems can be combined to create a much more severe exploit scenario than might be immediately obvious.)

    Certainly, you have to draw the line somewhere - it's not practical to fix every single potential problem immediately.  However, I still think that it is a dangerous path (requiring very careful consideration) to start writing off things as non-exploitable just because they can't be used to cause code execution (or remote privileged code execution) -on their own-.  I'm not trying to accuse you of just blindly writing off potential problems, either; just stating that it is not so clear-cut as to whether a certain problem is never going to be exploitable as it might seem on the surface in some cases (although I'm sure you are aware of that).

  • > Crashes (and hangs) are bad, to a point, but are they really "security bugs"?

    That depends on your definition of security.  Here's a comparison.  A few years ago there were newspaper reports of attacks on some JR lines, where malicious signals were causing trains to stop.  No other damage, i.e. passengers were late for work but not injured.  Was this a security bug?

    I still think the priorities are odd though.  Sure, bugs which cause escalation of privilege and pumping of spam are worse than bugs which just halt applications until the next reboot.  But I still think bugs which destroy disk files are worse than both, exactly the opposite of Microsoft's opinion.  Is destruction of disk files a security bug when the attack vector is a virus but a non-security bug when the attack vector is faulty Windows utilities?

  • My two small cents.

    I know that not all companies have the luxury of time on their hands (we certainly don't).  But I think that even though a bug might not be fixed until the next release, it can't be declassified as a security bug.  In other words, if you need a way to move its fix to the next release, then change the release methodology, not the bug itself.  If programmer R was complaining about this, maybe it's because he was up against an insurmountable wall, which is the release process itself.

    I also agree with Skywing about exploitation. Maybe I'm a stickler for the way things should be, but I believe any program should behave with the same 'grace' as a lisp program would. That is, if there's a bug, it's really a logic bug: the program does something you didn't *intend* it to do, but the program doesn't do something *it* didn't intend to do.

    Given that, if giving 'stupid' input to your program hangs your computer (e.g. allocating 2 gigs of ram), it's not really a bug.  But giving it specially crafted garbage shouldn't make it break.  The program should never go outside of the path which has been explicitly carved for it to tread.

    Maybe I'm missing something, but doesn't breakage of that sort indicate that something is not correct about the infrastructure of the program?
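
    In the spirit of that comment, here's a minimal sketch of what keeping a parser on its carved path looks like - the defensive version of the hypothetical parser sketched near the top of this post, where the untrusted count is validated before it is ever used (MAX_RECORDS and parse_file_safely are illustrative names):

        #include <stdio.h>
        #include <stdlib.h>

        #define MAX_RECORDS 100000   /* cap set by the format, not the file */

        int parse_file_safely(FILE *f, long file_size)
        {
            unsigned int count;
            if (fread(&count, sizeof(count), 1, f) != 1)
                return -1;

            /* Reject anything the file couldn't legitimately contain.
             * (The MAX_RECORDS check runs first, so the multiplication
             * below can't overflow.) */
            if (count > MAX_RECORDS ||
                (long)count * (long)sizeof(unsigned int) >
                    file_size - (long)sizeof(count))
                return -1;   /* malformed: fail cleanly, don't crash */

            unsigned int *records = calloc(count, sizeof(unsigned int));
            if (records == NULL)
                return -1;
            if (fread(records, sizeof(records[0]), count, f) != count) {
                free(records);
                return -1;
            }
            /* ... use records ... */
            free(records);
            return 0;
        }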

  • "how is allocating 2.8G of physical RAM exploitable"

    Hypothetically, that could push certain processes into the pagefile, which could in turn slow down the productive execution of a process doing security-related tasks (perhaps even the process that was forced to allocate 2.8G).  Such a slowdown could reveal timing information that was previously too difficult to reliably exploit in an attack.

    Careful control of which process allocates memory and where on a NUMA-style system could lead to some interesting synchronization-related attacks.

    These are non-obvious, but not terribly far-fetched either.

    I would tend to treat most such bugs as low priority as well, rather than immediate security issues -- but damn, security is complicated these days!

  • Larry: This is definitely a security bug. It's really close to elevation of privilege. Unless a normal user has permission to shut down your app, the user suddenly has the ability to do something only a sys admin should be able to do.

    Now, how serious this is and whether it can be put off till the next service pack of course depends on the impact and the likelihood. In your system, maybe it's not a big deal, but if it was something like a public-facing web site, where a crash could potentially affect 10,000 other users, then it would be a pretty big deal.

  • I actually just ran into this situation - an open source media player I'm involved in had an entry filed in BugTraq against it for a DoS on an invalid stream; the filer left a post linking to it in the forum and disappeared without leaving any way to duplicate it. So now we may never know whether we've fixed it or not.

    The thing is, the maintainers already know of some perfectly valid (but rare edge case) streams that can crash it, let alone broken streams. It's practically a given that all media players will crash when you point a fuzzer their way, both because complex formats get brittle handling in exchange for a bit more speed, and because of a heavy reliance on unpredictable third-party codecs and drivers installed on the system. What I'd consider a noteworthy "DoS" is something that crashes the audio/video driver, or an outside component like explorer, or corrupts something (like its library). Of course, since all crashes are annoying and possibly exploitable, we try to fix what can be fixed, but crashes have to be triaged like any other bug; other issues may be more important. (And adding it to a security tracker just irks the maintainers.)
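
    For readers who haven't pointed a fuzzer at a parser before, the technique this commenter mentions really is this simple - a sketch (hypothetical tooling, not this player's actual test suite) that takes a valid sample file and flips a few random bits:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Read a valid sample file, flip a few random bits, and write the
         * mutated copy out.  Feeding a few thousand of these to any file
         * parser is usually enough to find the brittle spots. */
        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
                return 1;
            }

            FILE *in = fopen(argv[1], "rb");
            if (in == NULL) return 1;
            fseek(in, 0, SEEK_END);
            long size = ftell(in);
            rewind(in);
            if (size <= 0) return 1;

            unsigned char *buf = malloc((size_t)size);
            if (buf == NULL || fread(buf, 1, (size_t)size, in) != (size_t)size)
                return 1;
            fclose(in);

            srand((unsigned)time(NULL));
            for (int i = 0; i < 16; i++)        /* flip 16 random bits */
                buf[rand() % size] ^= (unsigned char)(1 << (rand() % 8));

            FILE *out = fopen(argv[2], "wb");
            if (out == NULL) return 1;
            fwrite(buf, 1, (size_t)size, out);
            fclose(out);
            free(buf);
            return 0;
        }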
