Security

Larry Osterman's WebLog

Confessions of an Old Fogey
  • Larry Osterman's WebLog

    Insecure vs. Unsecured

    • 5 Comments

    A high school classmate of mine recently posted on Facebook:

    Message just popped up up my screen from Microsoft, I guess. "This site has insecure content." Really? Is the content not feeling good about itself, or, perchance, did they mean "unsecured?" What the ever-lovin' ****?

    I was intrigued, because it’s an ambiguous message and it brings up an interesting discussion: why the choice of the word “insecure” instead of “unsecured”?

    It turns out that this message (which doesn’t come from Internet Explorer, but instead from another browser) is generated when you attempt to access a page with mixed content – in other words, a page that is itself protected via SSL but contains child elements that are not.

    Given that this is a mixed content warning, wouldn’t my friend’s suggestion (that they use “unsecured” in the message rather than “insecure”) be a better choice?   After all, the message is complaining that there is content that hasn’t been secured via SSL on the page, so the content is unsecured (has no security applied).

     

    Well, actually I think that insecure is a better word choice than unsecured, for one reason:  If you have a page with mixed content on it, an attacker can use the unsecured elements to attack the secured elements.  This page from the IE blog (and this article from MSDN) discuss the risks associated with mixed content – the IE blog post points out that even wrapping the unsecured content in a frame won’t make the page secure.

     

    So given a choice between using “insecure” or “unsecured” in the message, I think I prefer “insecure” because it is a slightly stronger statement – “unsecured” implies that it’s a relatively benign configuration error.

     

    Having said all that, IMHO there’s a much better word to use in this scenario than “insecure” – “unsafe”.  To me, “unsafe” is a better term because it more accurately reflects the state – it says that the reason the content is being blocked is that it’s not “safe”.

    On the other hand, I’m not sure that describing content secured via SSL as “safe” vs. “unsafe” is really any better, since SSL can only ensure two things: that a bystander cannot listen to the contents of your conversation and that the person you’re talking to is really the person they say they are (and the latter is only as reliable as the certificate authority that granted the certificate).   There’s nothing that stops a bad guy from using SSL on their phishing site.

    I actually like what IE 9 does when presented with mixed content pages – it blocks the non SSL content with a gold bar which says “Only secure content is displayed” with a link describing the risk and a button that allows all the content to be displayed.  Instead of describing what was blocked, it describes what was shown (thus avoiding the “insecure” vs “unsecured” issue) and it avoids the “safe” vs “unsafe” nomenclature.  But again, it does say that the content is secure – which may be literally true, but many customers believe that “secure” == “safe” which isn’t necessarily true.

  • Larry Osterman's WebLog

    Hacking Windows with Phones… I don’t get it.

    • 12 Comments

    Over the weekend, Engadget and CNet ran a story discussing what was described as a new and novel attack using Android smartphones to attack PCs.  Apparently someone took an Android smartphone and modified the phone to emulate a USB keyboard.

    When the Android phone was plugged into Windows, Windows thought it was a keyboard and allowed the phone to inject keystrokes (not surprisingly, OSX and Linux did the same).  The screenshots I’ve seen show WordPad running with the word “owned!” on the screen, presumably coming from the phone.

     

    I have to say, I don’t get why this is novel.  There’s absolutely no difference between this hack and plugging an actual keyboard into the computer and typing keys – phones running the software can’t do anything that the user logged into the computer can’t do, and they can’t bypass any of Windows’ security features.  All they can do is be a keyboard.

    If the novelty is that it’s a keyboard that’s being driven by software on the phone, a quick search for “programmable keyboard macro” shows dozens of keyboards which can be programmed to insert arbitrary key sequences.  So even that’s not particularly novel.

     

    I guess the attack could be used to raise awareness of the risks of plugging untrusted devices into your computer, but that’s not a unique threat.  In fact the 1394 “FireWire” bus is well known for having significant security issues (1394 devices are allowed full DMA access to the host computer). 

    Ultimately this all goes back to Immutable Law #3.  If you let the bad guys tamper with your machine, they can 0wn your machine.  That includes letting the bad guys tamper with the devices which you then plug into your machine.

    Sometimes the issues which tickle the fancy of the press mystify me.

  • Larry Osterman's WebLog

    Microsoft Office team deploys botnet for security research

    • 4 Comments

    Even though it’s posted on April 1st, this is actually *not* an April Fools prank.

    It turns out that the Office team runs a “botnet” internally that’s dedicated to file fuzzing.  Basically they have a tool, run on a bunch of machines, that runs file fuzzing jobs in the machines’ spare time.  This really isn’t a “botnet” in the strictest sense of the word – it’s more like SETI@home or other distributed computing efforts – but “botnet” is the word that the Office team uses when describing the effort.

     

    For those that don’t know what fuzz testing is, it’s a remarkably effective technique that can be used to find bugs in file parsers.  Basically you build a file with random content and you try to parse the file.  Typically you start with a known good file and randomly change the contents of the file.  If you iterate over that process many times, you will typically find dozens or hundreds of bugs.  The SDL actually requires that every file parser be fuzz tested for a very large (hundreds of thousands) number of iterations.
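
    To make that concrete, here’s a minimal sketch (in C) of a mutation-based file fuzzer.  The parse_file entry point is hypothetical – it stands in for whatever parser is under test – and a crash or assertion failure on any iteration is a bug worth filing.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical parser under test - supplied by whoever owns the file format. */
    extern int parse_file(const unsigned char *data, size_t size);

    int main(void)
    {
        /* Start from a known good seed file. */
        FILE *f = fopen("seed.doc", "rb");
        if (!f) return 1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);
        if (size <= 0) return 1;

        unsigned char *seed = malloc((size_t)size);
        unsigned char *mutated = malloc((size_t)size);
        if (!seed || !mutated || fread(seed, 1, (size_t)size, f) != (size_t)size) return 1;
        fclose(f);

        srand((unsigned)time(NULL));
        for (int iteration = 0; iteration < 100000; iteration++) {
            memcpy(mutated, seed, (size_t)size);

            /* Randomly flip a handful of bits in the copy... */
            for (int i = 0; i < 16; i++)
                mutated[rand() % size] ^= (unsigned char)(1 << (rand() % 8));

            /* ...and try to parse it.  A crash or assert here is a bug. */
            parse_file(mutated, (size_t)size);
        }

        free(mutated);
        free(seed);
        return 0;
    }

    In practice you’d run each iteration in a separate process (or under a debugger) so that one crash doesn’t stop the run, and you’d save the mutated file that triggered it – which is essentially what the Office “botnet” does at scale.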

    The Windows team has an entire lab that is dedicated to nothing but fuzz testing.  The testers author fuzz tests (using one of several fuzz testing frameworks) and hand the tests to the fuzz test lab, which actually runs them.  This centralizes the fuzz testing effort and keeps teams from having to keep dozens of machines dedicated to fuzz testing.  The Office team took a different tack – instead of dedicating an entire lab to fuzz testing, they dedicated the spare cycles of their existing machines.  Very cool.

     

    I’ve known about the Office team’s effort for a while now (Tom Gallagher gave a talk about it at a recent BlueHat conference) but I didn’t know that the Office team had discussed it at CanSecWest until earlier today.

  • Larry Osterman's WebLog

    NextGenHacker101 owes me a new monitor

    • 102 Comments

    Because I just got soda all over my current one…

    One of the funniest things I’ve seen in a while. 

     

    And yes, I know that I’m being cruel here and I shouldn’t make fun of the kid’s ignorance, but he is SO proud of his new discovery and is so wrong in his interpretation of what actually is going on…

     

     

     

    For my non net-savvy readers: The “tracert” command lists the route that packets take from the local computer to a remote computer.  So if I want to find out what path a packet takes from my computer to www.microsoft.com, I would issue “tracert www.microsoft.com”.  This can be extremely helpful when troubleshooting networking problems.  Unfortunately the young man in the video had a rather different opinion of what the command did.

  • Larry Osterman's WebLog

    Why are they called “giblets” anyway?

    • 0 Comments

    Five years ago, I attended one of the initial security training courses as a part of the XP SP2 effort.  I wrote this up in one of my very first posts entitled “Remember the giblets” and followed it up last year with “The Trouble with Giblets”.  I use the term “giblets” a lot but I’d never bothered to go out and figure out where the term came from.

    Well, we were talking about giblets in an email discussion today and one of my co-workers went and asked Michael Howard where the term came from.  Michael forwarded the question to Steve Lipner, who originally coined the term, and Steve came back with its origin.

     

    It turns out that “giblets” is a term that was used at Digital Equipment Corporation back in the 1980s.  DEC used to sell big iron machines (actually I used DEC machines exclusively until I started at Microsoft).  The thing about big machines is that you usually need more than just the machine to build a complete solution – things like Ethernet repeaters and adapters and other fiddly bits.  And of course DEC was more than willing to sell you all these fiddly bits.  It seems that some of the DEC marketing people liked to refer to these bits and pieces as “giblets”. 

    Over time Steve started using the term for the pieces of software that were incidental to the product but which weren’t delivered by the main development team – things like the C runtime library, libJPG, ATL, etc. 

    Later on, someone else (Steve wasn’t sure who, it might have been Eric Bidstrup) pointed out that the giblets that came from a turkey didn’t necessarily come from the actual turkey that you’re eating which makes the analogy even more apt.

    Thanks to Craig Gehre for the picture.

  • Larry Osterman's WebLog

    Good News! strlen isn’t a banned API after all.

    • 6 Comments

    We were doing some code reviews on the new Win7 SDK samples the other day and one of the code reviewers noticed that the code used wcslen to compute the length of a string.

    He pointed out that the SDL Banned API page calls out strlen/wcslen as being banned APIs:

    For critical functions, such as those accepting anonymous Internet connections, strlen must also be replaced:

    Table 19. Banned string length functions and replacements

    Banned APIs: strlen, wcslen, _mbslen, _mbstrlen, StrLen, lstrlen
    StrSafe Replacement: String*Length
    Safe CRT Replacement: strnlen_s

    I was quite surprised to see this, since I’m not aware of any issues where the use of strlen/wcslen could cause security bugs.

     

    I asked Michael Howard about this and his response was that Table 19 has a typo – the word “server” is missing in the text, it should be “For critical server functions, such as those accepting anonymous Internet connections, strlen must also be replaced”. 

    Adding that one word makes all the difference.  And it makes sense – if you’re a server and accepting anonymous data over the internet, an attacker could cause you to crash by issuing a non null terminated string that was long enough – banning the API forces the developer to think about the length of the string.
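
    As a sketch of the scenario that the (corrected) rule is aimed at – handle_request and its parameters are hypothetical – assume a buffer arrives from an anonymous network client and may not be null terminated:

    #include <stdio.h>
    #include <string.h>   /* strnlen_s: the Safe CRT / C11 Annex K replacement */

    /* Hypothetical server-side handler: 'buf' came from an anonymous client. */
    void handle_request(const char *buf, size_t bufSize)
    {
        /* Unsafe on untrusted input: if the client omits the null terminator,
           strlen() walks off the end of the buffer and can fault:
               size_t len = strlen(buf);                                       */

        /* Bounded: never reads more than bufSize bytes. */
        size_t len = strnlen_s(buf, bufSize);
        if (len == bufSize)
            return;   /* no terminator within the buffer - reject the request */

        printf("client sent a %zu character string\n", len);
    }

    For client code operating on strings it built itself, plain strlen/wcslen is fine – which is exactly the distinction the missing word “server” was supposed to make.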

    Somewhat OT, but I also think that the table is poorly formatted – the “For critical…” text should be AFTER the table title – the way the text is written, it appears to be a part of the previous section instead of being attached as explanatory text on Table 19 (but that’s just the editor in me).

     

    Apparently in SDL v5.0 (which hasn’t yet shipped) the *len functions are removed from the banned API list entirely.

  • Larry Osterman's WebLog

    Chrome is fixing the file download bug…

    • 13 Comments

    I just noticed that Ryan Naraine has written that Google’s fixed the file download bug in Chrome.  This is awesome, but there’s one aspect of the fix that concerns me.

    According to the changelog:

    This CL adds prompting for dangerous types of files (executable) when they are automatically downloaded.

    When I read this, my first thought was: “I wonder how they determine if a file is ‘dangerous’?”

    One of the things that we’ve learned over time is that there are relatively few files that aren’t “dangerous”.  Sure, there are the obvious files (.exe, .dll, .com, .bat, etc.) but there are lots of other file types that can contain executable content.  For instance, most word processors and spreadsheets support some form of scripting language, which means that most downloaded documents can contain executable content.
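
    The changelog suggests a check along these lines (a hypothetical sketch – I haven’t looked at Chrome’s actual list), and the sketch itself illustrates the problem: any finite blacklist of “dangerous” extensions is incomplete.

    #include <stdbool.h>
    #include <string.h>   /* _stricmp is the MSVC case-insensitive compare */

    /* Hypothetical extension blacklist of the kind the changelog implies. */
    static const char *dangerous[] = { ".exe", ".dll", ".com", ".bat", ".scr", ".msi" };

    bool is_dangerous(const char *filename)
    {
        const char *ext = strrchr(filename, '.');
        if (ext == NULL)
            return false;   /* no extension at all - is that really "safe"? */
        for (size_t i = 0; i < sizeof(dangerous) / sizeof(dangerous[0]); i++)
            if (_stricmp(ext, dangerous[i]) == 0)
                return true;
        return false;       /* "not on the list" is not the same as "not dangerous" */
    }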

    Even if you ignore the files that contain things that are clearly identifiable as “code”, you’ve still got problems.  After all, just about every single file format out there has had readers with bugs that would have allowed remote code execution.

    It’s unfortunate, but given the history of the past couple of years, I can’t see how ANY content that was downloaded from the internet could be considered “safe”.

    IMHO Google’s change is a good start, but I’m worried that it doesn’t go far enough. 

  • Larry Osterman's WebLog

    What makes a bug a security bug?

    • 22 Comments

    In my last post, I mentioned that security bugs were different from other bugs.  Daniel Prochnow asked:

    What is the difference between bug and vulnerability?

    In my point of view, in a production enviroment, every bug that may lead to a loss event (CID, image, $) must be considered a security incident.

    What do you think?

    I answered in the comments, but I think the answer deserves a bit more commentary, especially when Evan asked:

    “I’m curious to hear an elaboration of this.  System A takes information from System B.  The information read from System A causes a[sic] System B to act in a certain way (which may or may not lead to leakage of data) that is unintended.  Is this a security issue or just a bug?”

    Microsoft Technet has a definition for a security vulnerability:

    “A security vulnerability is a flaw in a product that makes it infeasible – even using the product properly – to prevent an attacker from usurping privileges on the user’s system, regulating its operation, compromising data on it or assuming ungranted trust.”

    IMHO, that’s a bit too lawyerly, although the article does an excellent job of breaking down the definition and making it understandable.

    Crispin Cowan gave me an alternate definition, which I like much better:

    Security is the preservation of:

    · Confidentiality: your secret stuff stays secret

    · Integrity: your data stays intact

    · Availability: your systems and data remain available

    A vulnerability is a bug such that an attacker can compromise one or more of the above properties

     

    In Evan’s example, I think there is a security bug, but maybe not.  For instance, it’s possible that System A validates (somehow) that System B hasn’t been compromised.  In that case, it might be ok to trust the data read from System B.  That’s part of the reason for the wishy-washy language of the official vulnerability definition.

    To me, the key concept in determining if a bug is a security bug or not is that of an unauthorized actor.  If an authorized user performs operations on a file to which the user has access and the filesystem corrupts their data, it’s a bug (a bad bug that MUST be fixed, but a bug nonetheless).  If an unauthorized user can cause the filesystem to corrupt the data of another user, that’s a security bug.

    When a user downloads a file from the Internet, they’re undoubtedly authorized to do that.  They’re also authorized to save the file to the local system.  However the program that reads the file downloaded from the Internet cannot trust the contents of the file (unless it has some way of ensuring that the file contents haven’t been tampered with[1]).  So if there’s a file parsing bug in the program that parses the file, and there’s no check to ensure the integrity of the file, it’s a security bug.

     

    Michael Howard likes using this example:

    char foo[3];
    foo[3] = 0;

    Is it a bug?  Yup.  Is it a security bug?  Nope, because the attacker can’t control anything.  Contrast that with:

    struct
    {
        int value;
    } buf;
    char foo[3];

    _read(fd, &buf, sizeof(buf));
    foo[buf.value] = 0;    /* the attacker controls buf.value */

    That’s a 100% gen-u-wine security bug.
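
    The fix is just as instructive – treat buf.value as untrusted and validate it before using it as an index.  A sketch continuing the fragment above:

    if (_read(fd, &buf, sizeof(buf)) != sizeof(buf))
        return;                                    /* short read - reject the input */
    if (buf.value < 0 || buf.value >= (int)sizeof(foo))
        return;                                    /* attacker-supplied index is out of range */
    foo[buf.value] = 0;                            /* now provably within the array */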

     

    Hopefully that helps clear this up.

     

     

    [1] If the file is cryptographically signed with a signature from a known CA and the certificate hasn’t been revoked, the chances of the file’s contents being corrupted are very small, and it might be ok to trust the contents of the file without further validation. That’s why it’s so important to ensure that your application updater signs its updates.
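
    On Windows, the check that this footnote describes boils down to a WinVerifyTrust call.  Here’s a minimal sketch (error handling trimmed; whether this particular policy is sufficient depends on your updater’s threat model):

    #include <windows.h>
    #include <softpub.h>
    #include <wintrust.h>
    #pragma comment(lib, "wintrust")

    /* Returns TRUE if 'path' carries a valid Authenticode signature that chains
       to a trusted, unrevoked certificate. */
    BOOL IsFileSignatureValid(LPCWSTR path)
    {
        WINTRUST_FILE_INFO fileInfo = { sizeof(fileInfo) };
        fileInfo.pcwszFilePath = path;

        WINTRUST_DATA wtd = { sizeof(wtd) };
        wtd.dwUIChoice = WTD_UI_NONE;                   /* no UI, just an answer */
        wtd.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;
        wtd.dwUnionChoice = WTD_CHOICE_FILE;
        wtd.pFile = &fileInfo;
        wtd.dwStateAction = WTD_STATEACTION_VERIFY;

        GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
        LONG status = WinVerifyTrust(NULL, &policy, &wtd);

        wtd.dwStateAction = WTD_STATEACTION_CLOSE;      /* release verification state */
        WinVerifyTrust(NULL, &policy, &wtd);

        return status == ERROR_SUCCESS;
    }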

  • Larry Osterman's WebLog

    Linus Torvalds is “Fed up with the ‘security circus’”

    • 23 Comments

    There’s been a lot of discussion on the intertubes about some comments that Linus Torvalds, the creator of Linux, has made about security vulnerabilities and disclosure.

    Not surprisingly, there’s been a fair amount of discussion amongst the various MSFT security folks about his comments and about the comments about his comments (are those meta-comments?).

     

    The whole thing started with a posting from Linus where he says:

    Btw, and you may not like this, since you are so focused on security, one reason I refuse to bother with the whole security circus is that I think it glorifies - and thus encourages - the wrong behavior.

    It makes "heroes" out of security people, as if the people who don't just fix normal bugs aren't as important.

    He also made some (IMHO) unprofessional comments about the OpenBSD community, but I don’t think that’s relevant to my point.

    Linus has followed up his initial post with an interview with Network World where he commented:

    “You need to fix things early, and that requires a certain level of disclosure for the developers,” Torvalds states, adding, “You also don’t need to make a big production out of it.”

    and

    "What does the whole security labeling give you? Except for more fodder for either of the PR camps that I obviously think are both idiots pushing for their own agenda?" Torvalds says. "It just perpetrates that whole false mind-set" and is a waste of resources, he says.

    As a part of our internal discussion, Crispin Cowan pointed out that Linus doesn’t issue security updates for Linux, instead the downstream distributions that contain the Linux kernel issue security fixes.

    That comment was the catalyst – after he made the comment, I realized that I think I understand the meaning behind Linus’ comments.

    IMHO, Linus is thinking about security bugs as an engineer.  And as an engineer, he’s completely right (cue the /. trolls: “MSFT engineer thinks that Linux inventor is right about something!”). 

    As a software engineer, I fully understand where Linus is coming from: From a strict engineering standpoint, security bugs are no different from any other bugs, and treating them as somehow “special” denigrates other bugs.  It’s only when you consider the consequences of security bugs that they become more interesting.

    A non security bug can result in an unbootable system or the loss of data on the affected machine.  And they can be very, very bad.  But security bugs are special because they’re bugs that allow a 3rd party to mess with your system in ways that you didn’t intend.

    Simply put, your customers’ data is at risk from security bugs in a way that it isn’t from normal defects.  There are lots of bad people out there who would just love to exploit any security defect in your product.  Security updates are more than just “PR” – they provide critical information that customers use to help determine the risk associated with taking a fix.

    Every time your customers need to update the software on their computers, they take the risk that the update will break something (that’s a large part of the reason that MSFT takes its time when producing security fixes – we test the heck out of stuff to reduce the risk to our customers).  But because the bad guys can use security vulnerabilities to compromise their data, your customers want to roll out security fixes faster than they roll out other fixes.

    That’s why it’s so important to identify security fixes – your customers use this information for risk management.  It’s also why Microsoft’s security bulletins carry mitigating factors that would help identify if customers are at risk.  For example MS08-045 which contains a fix for CVE-2008-2257 has a mitigating factor that mentions that in Windows Server 2003 and Windows Server 2008 the enhanced security configuration mode mitigates this vulnerability.  A customer can use that information to know if they will be affected by MS08-045.

    But Linus’ customers aren’t the users of Linux.  They are the people who package up Linux distributions.  As Crispin commented, the distributions are the ones that issue the security bulletins and they’re the ones that work with their customers to ensure that the users of the distribution are kept safe.

    By not clearly identifying which fixes are security related fixes, IMHO Linus does his customers a disservice – it makes the job of the distribution owner harder because they can’t report security defects to their customers.  And that’s why reporting security bug fixes is so important.

    Edit: cleared out some crlfs

    Edit2: s/Linus/Linux/ :)

  • Larry Osterman's WebLog

    More proof that crypto should be left to the experts

    • 41 Comments

    Apparently two years ago, someone ran an analysis tool named "Valgrind" against the source code to OpenSSL in the Debian Linux distribution.  The Valgrind tool reported an issue with the OpenSSL package distributed by Debian, so the Debian team decided that they needed to fix this "security bug".

     

    Unfortunately, the solution they chose to implement apparently removed all entropy from the OpenSSL random number generator.  As the OpenSSL team comments "Had Debian [contributed the patches to the package maintainers], we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was."

     

    And it IS a terrible idea.  It means that for the past two years, all crypto done on Debian Linux distributions (and Debian derivatives like Ubuntu) has been done with a weak random number generator.  While this might seem to be geeky and esoteric, it's not.  It means that every cryptographic key that has been generated on a Debian or Ubuntu distribution needs to be recycled (after you pick up the fix).  If you don't, any data that was encrypted with the weak RNG can be easily decrypted.
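
    To see why, here's an illustrative sketch (not the actual Debian patch - the numbers and names are hypothetical): if the only "entropy" left feeding key generation is a small value, an attacker can simply enumerate every possible seed and regenerate every key the victim could ever have created.

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-in for a key generator whose only entropy is 'seed'. */
    static void generate_key(unsigned seed, unsigned char key[16])
    {
        srand(seed);
        for (int i = 0; i < 16; i++)
            key[i] = (unsigned char)(rand() & 0xFF);
    }

    int main(void)
    {
        unsigned char candidate[16];

        /* With only ~32K possible seeds, enumerating every key the victim
           could ever have generated takes well under a second. */
        for (unsigned seed = 0; seed < 32768; seed++) {
            generate_key(seed, candidate);
            /* ...test 'candidate' against the captured ciphertext or public key... */
        }
        return 0;
    }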

     

    Bruce Schneier has long said that cryptography is too important to be left to amateurs (I'm not sure of the exact quote, so I'm using a paraphrase).  That applies to all aspects of cryptography (including random number generators) - even tiny changes to algorithms can have profound effects on the security of the algorithm.   He's right - it's just too easy to get this stuff wrong.

     

    The good news is that there IS a fix for the problem, users of Debian or Ubuntu should read the advisory and take whatever actions are necessary to protect their data.

  • Larry Osterman's WebLog

    Resilience is NOT necessarily a good thing

    • 66 Comments

    I just ran into this post by Eric Brechner who is the director of Microsoft's Engineering Excellence center.

    What really caught my eye was his opening paragraph:

    I heard a remark the other day that seemed stupid on the surface, but when I really thought about it I realized it was completely idiotic and irresponsible. The remark was that it's better to crash and let Watson report the error than it is to catch the exception and try to correct it.

    Wow.  I'm not going to mince words: What a profoundly stupid assertion to make.  Of course it's better to crash and let the OS handle the exception than to try to continue after an exception.

     

    I have a HUGE issue with the concept that an application should catch exceptions[1] and attempt to correct them.  In my experience handling exceptions and attempting to continue is a recipe for disaster.  At best, it turns an easily debuggable problem into one that takes hours of debugging to resolve.  At its worst, exception handling can either introduce security holes or render security mitigations irrelevant.

    I have absolutely no problems with fail fast (which is what Eric suggests with his "Restart" option).  I think that restarting a process after the process crashes is a great idea (as long as you have a way to prevent crashes from spiraling out of control).  In Windows Vista, Microsoft built this functionality directly into the OS with the Restart Manager: if your application calls the RegisterApplicationRestart API, the OS will offer to restart your application if it crashes or becomes unresponsive.  This concept also shows up in the service restart options in the ChangeServiceConfig2 API (if a service crashes, the OS will restart it if you've configured the OS to restart it).
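
    Opting in is a single call - a minimal sketch (the command line argument is a hypothetical marker your application can look for when it's relaunched):

    #include <windows.h>   /* RegisterApplicationRestart: Vista and later */

    void EnableCrashRestart(void)
    {
        /* Ask Windows to relaunch us if the process crashes or hangs.  The OS
           only honors this after the application has been running for at least
           60 seconds, which keeps a crash-on-startup from looping forever. */
        HRESULT hr = RegisterApplicationRestart(L"/restarted",
                                                RESTART_NO_PATCH | RESTART_NO_REBOOT);
        if (FAILED(hr)) {
            /* Not fatal - the application simply won't be auto-restarted. */
        }
    }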

    I also agree with Eric's comment that asserts which cause crashes have no business living in production code, and I have no problems with asserts logging a failure and continuing (assuming that there's someone who is actually going to look at the log and can understand its contents; otherwise the logs just consume disk space). 

     

    But I simply can't wrap my head around the idea that it's ok to catch exceptions and continue to run.  Back in the days of Windows 3.1 it might have been a good idea, but after the security fiascos of the early 2000s, any thoughts that you could continue to run after an exception has been thrown should have been removed forever.

    The bottom line is that when an exception is thrown, your program is in an unknown state.  Attempting to continue in that unknown state is pointless and potentially extremely dangerous - you literally have no idea what's going on in your program.  Your best bet is to let the OS exception handler dump core and hopefully your customers will submit those crash dumps to you so you can post-mortem debug the problem.  Any other attempt at continuing is a recipe for disaster.

     

    -------

    [1] To be clear: I'm not necessarily talking about C++ exceptions here, just structured exceptions.  For some C++ and C# exceptions, it's ok to catch the exception and continue, assuming that you understand the root cause of the exception.  But if you don't know the exact cause of the exception you should never proceed.  For instance, if your binary tree class throws a "Tree Corrupt" exception, you really shouldn't continue to run, but if opening a file throws a "file not found" exception, it's likely to be ok.  For structured exceptions, I know of NO circumstance under which it is appropriate to continue running.

     

    Edit: Cleaned up wording in the footnote.

  • Larry Osterman's WebLog

    This is the way the world (wide web) ends...

    • 27 Comments

    Robert Hensing linked to a post by Thomas Ptacek over on the Matasano Chargen blog. Thomas (who is both a good hacker AND a good writer) has a writeup of a “game-over” vulnerability in Flash that was just published by Mark Dowd over at IBM's ISS X-Force. For those that don’t speak hacker-speak, in this case, a “game-over” vulnerability is one that can be easily weaponized (his techniques appear to be reliable and can be combined to run an arbitrary payload). As an added bonus, because it’s a vulnerability in Flash, it allows the attacker to write a cross-browser, cross-platform exploit – this puppy works just fine in both IE and Firefox (and potentially in Safari and Opera).

    This vulnerability doesn’t affect Windows directly, but it DOES show how a determined attacker can take what was previously thought to be an unexploitable failure (a null pointer dereference) and turn it into something that can be used to 0wn the machine.

    Every one of the “except not quite” issues that Thomas writes about in the article represented a stumbling block that the attacker (who had no access to the source to Flash) had to overcome – there are about 4 of them, but the attacker managed to overcome all of them.

    This is seriously scary stuff.  People who have Flash installed should run, not walk, over to Adobe to pick up the update.  Please note that the security update comes with the following warning:

    "Due to the possibility that these security enhancements and changes may impact existing Flash content, customers are advised to review this March 2008 Adobe Developer Center article to determine if the changes will affect their content, and to begin implementing necessary changes immediately to help ensure a seamless transition."

    Edit2: It appears that the Adobe update center I linked to hasn't yet been updated with the fix – I followed their update procedure, and my Flash plugin still had the vulnerable version number. 

    Edit: Added a link to the relevant Adobe security advisory, thanks JD.

     

  • Larry Osterman's WebLog

    The Trouble with Giblets

    • 26 Comments

    I don't write about the SDL very much, because I figure that the SDL team does a good enough job of it on their blog, but I was reading the news a while ago and realized that one of the aspects of the SDL would have helped our competitors if they had adopted it.

     

    A long time ago, I wrote a short post about "giblets", and they're showing up a lot in the news lately.  "Giblets" is a term coined by Steve Lipner, and it has entered the lexicon of "micro-speak".  Essentially a giblet is a chunk of code that you've included from a 3rd party.  Michael Howard wrote about them on the SDL blog a while ago (early January), and now news comes out that Google's Android SDK contains giblets with known exploitable vulnerabilities.

    I find this vaguely humorous, and a bit troubling.  As I commented in my earlier post (almost 4 years ago), adding a giblet to your product carries with it the responsibility to monitor the security mailing lists to make sure that you're running the most recent (and presumably secure) version of the giblet.

    What I found truly surprising was that the Android development team had shipped code (even in beta) with those vulnerabilities.  Their development team should have known about the problem with giblets and never accepted the vulnerable versions in the first place.  That in turn leads me to wonder about the process management associated with the development of Android.

    I fully understand that you need to lock down the components that are contained in your product during the development process; that's why fixes take time to propagate into distributions.  From watching FOSS bugs, the typical lifecycle of a security bug in FOSS code is: a bug is found in the component and fixed quickly, then over the next several months the fix is propagated into the various distributions that contain the component.  In other words, a fix for the bug is made very quickly (but is completely untested), and then the teams that package up the distributions consume the fix and test it in the context of the distribution.  As a result, distributions naturally lag behind fixes (btw, the MSFT security vulnerabilities follow roughly the same sequence - the fix is usually known within days of the bug being reported, but it takes time to test the fix to ensure that it doesn't break things, especially since Microsoft patches vulnerabilities in multiple platforms and the fix for all of them needs to be released simultaneously).

    But even so, it's surprising that a team would release a beta that contained a version of one of its giblets that was almost 4 years old (according to the original report, it contained libPNG version 1.2.7, from September 12, 2004)!  This is especially true given the fact that the iPhone had a similar vulnerability found last year (ironically, the finder of this vulnerability was Tavis Ormandy of Google).  I'm also not picking on Google out of spite - other vendors like Apple and Microsoft were each bitten by exactly this vulnerability - 3 years ago.  In Apple's case, they did EXACTLY the same thing that the Android team did: they released a phone that contained a 3 year old vulnerability that had previously been fixed in their mainstream operating system.

     

    So how would the SDL have helped the Android team?  The SDL requires that you track giblets in your code - it forces you to have a plan to deal with the inevitable vulnerabilities in the giblets.  In this case, SDL would have forced the development teams to have a process in place to monitor the vulnerabilities (and of course to track the history of the component), so they hopefully would never have shipped vulnerable components.  It also means that when a vulnerability is found after shipping, they would have a plan in place to roll out a fix ASAP.  This latter is critically important because history has shown us that when one component is known to have a vulnerability, the vultures immediately swoop in to find similar vulnerabilities in related code bases (on the theory that if you make a mistake once, you're likely to make it a second or third time).  In fact, that's another requirement of the SDL: When a vulnerability is found in a component, the SDL requires that you also look for similar vulnerabilities in related code bases.

    Yet another example where adopting the SDL would have helped to mitigate a vulnerability[1].

     

    [1] Btw, I'm not saying that the SDL is the only way to solve this problem.  There absolutely are other methodologies that would allow these problems to be mitigated.  But when you're developing software that's going to be deployed connected to a network (any network), you MUST have a solution in place to manage your risk (and giblets are just one form of risk).  The SDL is Microsoft's way, and so far it's clearly shown its value.

  • Larry Osterman's WebLog

    Wow - We hired Crispin Cowan!!!

    • 5 Comments

    Michael Howard just announced that we've hired Crispin Cowan!

    This is incredibly awesome, I have a huge amount of respect for Crispin, he's one of the most respected researchers out there.

    Among other things, Crispin's the author and designer of AppArmor, which adds sandboxing capabilities to Linux.  Apparently he's going to be working on the core Windows Security team, which is absolutely cool.

     

    I'm totally stoked to hear this - I literally let out a whoot when I read Michael's blog post.

     

     

    Welcome aboard Crispin :).

  • Larry Osterman's WebLog

    How to lose customers without really trying...

    • 25 Comments

    Not surprisingly, Valorie and I both do some of our holiday season shopping at ThinkGeek.  But no longer.  Valorie recently placed a substantial order with them, but instead of processing her order, they sent the following email:

    From: ThinkGeek Customer Service [mailto:custserv@thinkgeek.com]
    Sent: Thursday, November 15, 2007 4:28 AM
    To: <Valorie's Email Address>
    Subject: URGENT - Information Needed to Complete Your ThinkGeek Order

    Hi Valorie,

    Thank you for your recent order with ThinkGeek, <order number>. We would like to process your order as soon as possible, but we need some additional information in order to complete your order.

    To complete your order, we must do a manual billing address verification check.

    If you paid for your order via Paypal, please send us a phone bill or other utility bill showing the same billing address that was entered on your order.

    If you paid for your order via credit card, please send us one of the following:

    - A phone bill or other utility bill showing the same billing address that was entered on your order

    - A credit card statement with your billing address and last four digits of your credit card displayed

    - A copy of your credit card with last four digits displayed AND a copy of a government-issued photo ID, such as a driver's license or passport.

    To send these via e-mail (a scan or legible digital photo) please reply to custserv@thinkgeek.com or via fax (703-839-8611) at your earliest convenience. If you send your documentation as digital images via email, please make sure they total less than 500kb in size or we may not receive your email. We ask that you send this verification within the next two weeks, or your order may be canceled. Also, we are unable to accept billing address verification from customers over the phone. We must receive the requested documentation before your order can be processed and shipped out.

    For the security-minded among you, we are able to accept PGP-encrypted emails. It is not mandatory to encrypt your response, so if you have no idea what we're talking about, don't sweat it. Further information, including our public key and fingerprint, can be found at the following

    link:

    http://www.thinkgeek.com/help/encryption.shtml

    At ThinkGeek we take your security and privacy very seriously. We hope you understand that when we have to take extra security measures such as this, we do it to protect you as well as ThinkGeek.

    We apologize for any inconvenience this may cause, and we appreciate your understanding. If you have any questions, please feel free to email or call us at the number below.

    Thanks-

    ThinkGeek Customer Service

    1-888-433-5788 (phone)

    1-703-839-8611 (fax)

    Wow.  We've ordered from them in the past (and placed other large orders with them), but we've never seen anything as outrageous as this.  They're asking for exactly the kind of information that would be necessary to perpetrate identity theft against Valorie, and they're holding our order hostage if we don't comply.

    What was worse is that their order form didn't even ask for the CVV code on the back of the credit card (the one that's not imprinted).  So not only did they fail to follow the "standard" practices that most e-commerce sites follow when dealing with credit cards, but they felt it was necessary for us to provide exactly the kind of information that an identity thief would ask for.

    Valorie contacted them to let them know how she felt about it, and their response was:

    Thank you for your recent ThinkGeek order. Sometimes, when an order is placed with a discrepancy between the billing and the shipping addresses, or with a billing address outside the US, or the order is above a certain value, our ordering system will flag the transaction. In these circumstances, we request physical documentation of the billing address on the order in question, to make sure that the order has been placed by the account holder. At ThinkGeek we take your security and privacy very seriously. We hope you understand that when we have to take extra security measures such as this, we do it to protect you as well as ThinkGeek.
    Unfortunately, without this documentation, we are unable to complete the processing of your order. If we do not receive the requested documentation within two weeks of your initial order date, your order will automatically be cancelled. If you can't provide documentation of the billing address on your order, you will need to cancel your current order and reorder using the proper billing address for your credit card. Once we receive and process your documentation, you should not need to provide it on subsequent orders. Please let us know if you have any further questions.

    The good news is that we have absolutely no problems with them canceling the order, and we're never going to do business with them again.  There are plenty of other retailers out there that sell the same stuff that ThinkGeek does who are willing to accept our business without being offensive about it.

     

    Edit to add:  Think Geek responded to our issues, their latest response can be found here.

  • Larry Osterman's WebLog

    When you're analyzing the strength of a password, make sure you know what's done with it.

    • 20 Comments

    Every once in a while, I hear someone making comments about the strength of things like long passwords.

    For example, suppose you have a 255 character password that uses only the 26 Roman letters in upper and lower case, plus the numeric digits (62 possible characters).  That means your password has 62^255 possible values; even if you could try a million million passwords per second, the time required to try them all would exceed the heat death of the universe.

     

    Wow, that's cool - it means that you can never break my password if I use a long enough password.

     

    Except...

    The odds are very good that something in the system's going to take your password and apply a one-way hash to that password - after all, it wouldn't do to keep that password lying around in clear text where an attacker could see it.  But the instant you take a hash of a secret, the strength of the secret degrades to the strength of the hash.

    It's another example of the pigeonhole principle in practice - if you put more than N items into N slots, some slots must end up with more than one item.  Hashing a long password down to a short digest works exactly the same way.

     

    In other words, if the password database that holds your password uses a hash algorithm like SHA-1, your 62^255 possible character password just got reduced in strength to a 256^20 possible value hash[1]. That means that any analysis that you've done on your password doesn't matter, because all an attacker needs to do is to find a different password that hashes to the same value as your password and they've broken your password.  Since your password strength exceeds the strength of the hash code, you know that there MUST be a collision with a weaker password.
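
    A quick back-of-the-envelope calculation (purely illustrative) makes the gap concrete:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double password_bits = 255 * log2(62.0);   /* 62^255 possible passwords */
        double hash_bits     = 20 * 8.0;           /* SHA-1 digest: 20 bytes = 256^20 values */

        printf("password space: about %.0f bits\n", password_bits);   /* ~1518 bits */
        printf("hash space:     about %.0f bits\n", hash_bits);       /* 160 bits  */

        /* Once the password is stored only as its hash, its effective strength
           is capped at the smaller of the two numbers. */
        return 0;
    }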

     

    The bottom line is that when you're calculating the strength of a  password, it's important that you understand what your password looks like to an attacker.  If your password is saved as an SHA-1 or MD5 hash, that's the true maximum strength of your password.

     

    [1]To be fair, 256^20 is something like 1.4E48, so even if you could still try a million million passwords per second, you're still looking at something like a million million years to brute force that database, but 256^20 is still far less than 62^255.

  • Larry Osterman's WebLog

    Chris Pirillo's annoyed by the Windows Firewall prompt

    • 63 Comments

    Yesterday, Chris Pirillo made a comment in one of his posts:

    And if you think you’re already completely protected in Windows with its default tools, think again. This morning, after months of regular Firefox use, I get this security warning from the Windows Vista Firewall. Again, this was far from the first time I had used Firefox on this installation of Windows. Not only is the dialog ambiguous, it’s here too late.

    I replied in a comment on his blog:

    The reason that the Windows firewall hasn’t warned you about FF’s accessing the net is that up until this morning, all of its attempts have been outbound. But for some reason, this morning, it decided that it wanted to receive data from the internet.

    The firewall is doing exactly what it’s supposed to do - it’s stopping FF from listening for an inbound connection (which a web browser probably shouldn’t do) and it’s asking you if it’s ok.

    Why has your copy of firefox suddenly decided to start receiving data over the net when you didn’t ask it to?

    Chris responded in email:

    Because I started to play XM Radio?  *shrug*

    My response to him (which I realized could be a post in itself - for some reason, whenever I respond to Chris in email, I end up writing many hundred word essays):

    Could be - so in this case, the firewall is telling you (correctly) exactly what happened.

    That's what firewalls do.

    Firefox HAS the ability to open the ports it needs when it installs (as does whatever plugin you're using to play XM radio (I documented the APIs for doing that on my blog about 3 years ago, the current versions of the APIs are easier to use than the ones I used)), but for whatever reason it CHOSE not to do so and instead decided that the correct user experience was to prompt the user when downloading.

    This was a choice made by the developers of Firefox and/or the developer of XM radio plugin - either by design, ignorance, schedule pressure or just plain laziness, I honestly don't know (btw, if you're using the WMP FF plugin to play from XM, my comment still stands - I don't know if this was a conscious decision or not).

    Blaming the firewall (or Vista) for this is pointless (with a caveat below). 

     

    The point of the firewall is to alert you that an application is using the internet in a way that's unexpected and ask you if it makes sense. You, the user, know that you've started playing audio from XM, so you, the user expect that it's reasonable that Firefox start receiving traffic from the internet. But the firewall can't know what you did (and if it was able to figure it out, the system would be so hideously slow that you'd be ranting on and on about how performance sucks).

    Every time someone opens an inbound port in the firewall, they add another opportunity for malware to attack their system. The firewall is just letting the user know about it. And maybe, just maybe, the behavior that's being described might get the user to realize that malware has infected their machine and they'll repair it.

    In your case, the system was doing you a favor. It was a false positive, yes, but that's because you're a reasonably intelligent person. My wife does ad-hoc tech support for a friend who isn't, and the anti-malware stuff in Windows (particularly Windows Defender) has saved the friend's bacon at least three times this year alone.

     

    On the other hand, you DO have a valid point: The dialog that was displayed by the firewall didn't give you enough information about what was happening.  I believe that this is because you were operating under the belief that the Windows firewall was both an inbound and outbound firewall.  The Windows Vista firewall  IS both, but by default it's set to allow all outbound connections (you need to configure it to block outbound connections).  If you were operating under the impression that it was an outbound firewall, you'd expect it to prompt for outbound connections.

    People HATE outbound firewalls because of the exact same reason you're complaining - they constantly ask people "Are you sure you want to do that?" (Yes, dagnabbit, I WANT to let Firefox access the internet, are you stupid or something?).

    IMHO outbound firewalls are 100% security theater[1][2]. They provide absolutely no value to customers. This has been shown time and time again (remember my comment above about applications being able to punch holes in the firewall? Malware can do the exact same thing). The only thing an outbound firewall does is piss off customers. If the Windows firewall was enabled to block outbound connections by default, I guarantee you that within minutes of that release, the malware authors would simply add code to their tools to disable it.  Even if you were to somehow figure out how to block the malware from opening up outbound ports[3], the malware will simply hijack a process running in the context of the user that's allowed to access the web. Say... Firefox. This isn't a windows specific issue, btw - every other OS available has exactly the same issues (malware being able to inject itself into processes running in the same security context as the user running the malware).

    Inbound firewalls have very real security value, as do external dedicated firewalls. I honestly believe that the main reason you've NOT seen any internet worms since 2002 is simply because XP SP2 enabled the firewall by default. There certainly have been vulnerabilities found in Windows and other products that had the ability to be turned into a worm - the fact that nobody has managed to successfully weaponize them is a testament to the excellent work done in XP SP2.

     

    [1] I'm slightly overexaggerating here - there is one way in which outbound firewalls provide some level of value, and that's as a defense-in-depth measure (like ASLR or heap randomization). For instance, in Vista, every built-in service (and 3rd party services if they want to take the time to opt-in) defines a set of rules which describes the networking behaviors of the service (I accept inbound connections on UDP from port <foo>, and make outbound connections to port <bar>). The firewall is pre-configured with those rules and will prevent any other access to the network from those services. The outbound firewall rules make it much harder for a piece of malware to make outbound connections (especially if the service is running in a restricted account like NetworkService or LocalService). It is important to realize this is JUST a defense-in-depth measure and CAN be worked around (like all other defense-in-depth measures). 

    [2] Others disagree with me on this point - for example, Thomas Ptacek over at Matasano wrote just yesterday: "Outbound filtering is more valuable than inbound filtering; it catches “phone-home” malware. It’s not that hard to implement, and I’m surprised Leopard doesn’t do it."  And he's right, until the "phone-home" malware decides to turn off the firewall. Not surprisingly, I also disagree with him on the value of inbound filtering.

    [3] I'm not sure how you do that while still allowing the user to open up ports - functionality being undocumented has never stopped malware authors.

  • Larry Osterman's WebLog

    Every threat model diagram should tell a story.

    • 1 Comment

    Adam Shostack has another threat modeling post up on the SDL blog entitled "Threat Modeling Self Checks and Rules of Thumb".  In it, he talks about threat models and diagrams (and he corrects a mistake in my "rules of thumb" post (thanks Adam)).

    There's one thing he mentions that is really important (and has come up a couple of times as I've been doing threat model reviews of various components): a threat model diagram should tell a story.

    Not surprisingly, I love stories.  I really love stories :).  I love telling stories, I love listening to people telling stories.

     

    I look for stories in the most unlikely of places, including places that you wouldn't necessarily think of.

     

    So when I'm reviewing a threat model, I want to hear the story of your feature.  If you've done a good job of telling your story, I should be able to see what you've drawn and understand  what you're building - it might take a paragraph or two of text to provide surrounding context, but it should be clear from your diagram.

     

    What are the things that help to tell a good story?  Well, your story should be coherent.  Stories have beginnings, middles and ends.  Your diagram should also have a beginning (entrypoint), a middle (where the work is done) and an end (the output of the diagram).  For example, in my PlaySound threat model, the beginning is the application calling PlaySound, the end is the audio engine.  Obviously other diagrams have other inputs and outputs, but your code is always invoked by something, and it always does something.  Your diagram should reflect this.

    In addition, it's always a good idea to look for pieces that are missing.  For instance, if you're doing a threat model for a web browser, it's highly likely that the Internet will show up somewhere in the model.  If it doesn't, it just seems "wrong".  Similarly, if your feature name is something like "Mumblefrotz user experience", then I'd hope to find something that looks like a "user" and something that looks like a "Mumblefrotz". 

    Adam's post calls out other inconsistencies that interfere with the storytelling, as does my "rules of thumb" post.

     

    I really like the storytelling metaphor for threat model diagrams because if I can understand the story, it really helps me find the omissions - there's almost always something missed in the diagram, and a coherent story really helps to understand that.  In many ways, pictures do a far better job of telling stories than words do.

  • Larry Osterman's WebLog

    Some final thoughts on Threat Modeling...

    • 16 Comments

    I want to wrap up the threat modeling posts with a summary and some comments on the entire process.  Yeah, I know I should have done this last week, but I got distracted :). 

    First, a summary of the threat modeling posts:

    Part 1: Threat Modeling, Once again.  In which our narrator introduces the idea of a threat model diagram

    Part 2: Threat Modeling Again. Drawing the Diagram.  In which our narrator introduces the diagram for the PlaySound API

    Part 3: Threat Modeling Again, Stride.  Introducing the various STRIDE categories.

    Part 4: Threat Modeling Again, Stride Mitigations.  Discussing various mitigations for the STRIDE categories.

    Part 5: Threat Modeling Again, What does STRIDE have to do with threat modeling?  The relationship between STRIDE and diagram elements.

    Part 6: Threat Modeling Again, STRIDE per Element.  In which the concept of STRIDE/Element is discussed.

    Part 7: Threat Modeling Again, Threat Modeling PlaySound.  Which enumerates the threats against the PlaySound API.

    Part 8: Threat Modeling Again, Analyzing the threats to PlaySound.  In which the threat modeling analysis work against the threats to PlaySound is performed.

    Part 9: Threat Modeling Again, Pulling the threat model together.  Which describes the narrative structure of a threat model.

    Part 10: Threat Modeling Again, Presenting the PlaySound threat model.  Which doesn't need a pithy summary, because the title describes what it is.

    Part 11: Threat Modeling Again, Threat Modeling in Practice.  Presenting the threat model diagrams for a real-world security problem.[1]

    Part 12: Threat Modeling Again, Threat Modeling and the firefoxurl issue. Analyzing the real-world problem from the standpoint of threat modeling.

    Part 13: Threat Modeling Again, Threat Modeling Rules of Thumb.  A document with some useful rules of thumb to consider when threat modeling.

     

    Remember that threat modeling is an analysis tool. You threat model to identify threats to your component, which then lets you know where you need to concentrate your resources.  Maybe you need to encrypt a particular data channel to protect it from snooping.  Maybe you need to change the ACLs on a data store to ensure that an attacker can't modify the contents of the store.  Maybe you just need to carefully validate the contents of the store before you read it.  The threat modeling process tells you where to look and gives you suggestions about what to look for, but it doesn't solve the problem.  It might be that the only thing that comes out from your threat modeling process is a document that says "We don't care about any of the threats to this component".  That's ok, at a minimum, it means that you considered the threats and decided that they were acceptable.

    The threat modeling process is also a living process. I'm 100% certain that 2 years from now, we're going to be doing threat modeling differently from the way that we do it today.  Experience has shown that every time we apply threat modeling to a product, we realize new things about the process of performing threat modeling, and find new, more efficient ways of going about the process.   Even now, the various teams involved with threat modeling in my division have proposed new changes to the process based on the experiences of our current round of threat modeling.  Some of them will be adopted as best practices across Microsoft, some of them will be dropped on the floor. 

     

    What I've described over these posts is the process of threat modeling as it's done today in the Windows division at Microsoft.  Other divisions use threat modeling differently - the threat landscape for Windows is different from the threat landscape for SQL Server and Exchange, which is different from the threat landscape for the various Live products, and it's entirely different for our internal IT processes.  All of these groups use threat modeling, and they use the core mechanisms in similar ways, but because each group that does threat modeling has different threats and different risks, the process plays out differently for each team.

    If your team decides to adopt threat modeling, you need to consider how it applies to your components and adapt the process accordingly.  Threat Modeling is absolutely not a one-size-fits-all process, but it IS an invaluable tool.

     

    EDIT TO ADD: Adam Shostack on the Threat Modeling Team at Microsoft pointed out that the threat modeling team has a developer position open.  You can find more information about the position by going here:  http://members.microsoft.com/careers/search/default.aspx and searching for job #207443.

    [1] Someone posting a comment on Bruce Schneier's blog took me to task for using a browser vulnerability as my example.  I chose that particular vulnerability because it was the first that came to mind.  I could have just as easily picked the DMG loading logic in OSX or the .ANI file code in Windows for examples (actually the DMG file issues are in several ways far more interesting than the firefoxurl issue - the .ANI file issue is actually relatively boring from a threat modeling standpoint).

  • Larry Osterman's WebLog

    Threat Modeling Again, Threat Modeling Rules of Thumb

    • 12 Comments

    I wrote this piece up for our group as we entered the most recent round of threat models.  I've cleaned it up a bit (removing some Microsoft-specific stuff), and there's stuff that's been talked about before, but the rest of the document is pretty relevant. 

     

    ---------------------------------------

    As you go about filling in the threat model threat list, it’s important to consider the consequences of entering threats and mitigations.  While it can be easy to find threats, remember that every threat you enter has real-world consequences for the development team.

    At the end of the day, this process is about ensuring that our customers’ machines aren’t compromised. When we’re deciding which threats need mitigation, we concentrate our efforts on those where the attacker can cause real damage.

     

    When we’re threat modeling, we should ensure that we’ve identified as many of the potential threats as possible (even if we think they’re trivial). At a minimum, the threats we list but choose to ignore will remain in the document to provide guidance for the future. 

     

    Remember that the feature team can always decide that we’re ok with accepting the risk of a particular threat (subject to the SDL security review process). But we want to make sure that we mitigate the right issues.

    To help you guide your thinking about what kinds of threats deserve mitigation, here are some rules of thumb that you can use while performing your threat modeling.

    1. If the data hasn’t crossed a trust boundary, you don’t really care about it.

    2. If the threat requires that the attacker is ALREADY running code on the client at your privilege level, you don’t really care about it.

    3. If your code runs with any elevated privileges (even if your code runs in a restricted svchost instance), you need to be concerned.

    4. If your code invalidates assumptions made by other entities, you need to be concerned.

    5. If your code listens on the network, you need to be concerned.

    6. If your code retrieves information from the internet, you need to be concerned.

    7. If your code deals with data that came from a file, you need to be concerned (these last two are the inverses of rule #1).

    8. If your code is marked as safe for scripting or safe for initialization, you need to be REALLY concerned.

     

    Let’s take each of these in turn, because there are some subtle distinctions that need to be called out.

    If the data hasn’t crossed a trust boundary, you don’t really care about it.

    For example, consider the case where a hostile application passes bogus parameters into our API. In that case, the hostile application lives within the same trust boundary as your code, so you can simply accept the threat. The same thing applies to window messages that you receive. In general, it’s not useful to enumerate threats within a trust boundary. [Editor’s Note: Yesterday, David LeBlanc wrote an article about this very issue - I 100% agree with what he says there.] 

    But there’s a caveat (of course there’s a caveat, there’s ALWAYS a caveat). Just because your threat model diagram doesn't have a trust boundary on it, it doesn't mean that the data being validated hasn't crossed a trust boundary on the way to your code.

    Consider the case of an application that takes a file name from the network and passes that filename into your API. And further consider the case where your API has an input validation bug that causes a buffer overflow. In that case, it’s YOUR responsibility to fix the buffer overflow – an attacker can use the innocent application to exploit your code. Before you dismiss this issue as being unlikely, consider CVE-2007-3670. The Firefox web browser allows the user to execute scripts passed in on the command line, and registered a URI handler named “firefoxurl” with the OS with the start action being “firefox.exe %1” (this is a simplification). The attacker simply included a “firefoxurl:<javascript>” in a URL and was able to successfully take ownership of the client machine. In this case, the firefox browser assumed that there was no trust boundary between firefox.exe and the invoker, but it didn’t realize that it introduced such a trust boundary when it created the “firefoxurl” URI handler.

    If the threat requires that the attacker is ALREADY running code on the client at your privilege level, you don’t really care about it.

    For example, consider the case where a hostile application writes values into a registry key that’s read by your component. Writing those keys requires that there be some application currently running code on the client, which requires that the bad guy first be able to get code to run on the client box.

    While the threats associated with this are real, it’s not that big a problem and you can probably state that you aren’t concerned by those threats because they require that the bad guy run code on the box (see Immutable Law #1: “If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore”).

    Please note that this item has a HUGE caveat: it ONLY applies if the attacker’s code is running at the same privilege level as your code. If that’s not the case, you have the next rule of thumb:

    If your code runs with any elevated privileges, you need to be concerned.

    We DO care about threats that cross privilege boundaries. That means that any data communication between an application and a service (which could be an RPC, it could be a registry value, it could be a shared memory region) must be included in the threat model.

    Even if you’re running in a low privilege service account, you still may be attacked – one of the privileges that all services get is the SE_IMPERSONATE_NAME privilege. This is actually one of the more dangerous privileges on the system because it can allow a patient attacker to take over the entire box. Ken “Skywing” Johnson wrote about this in a couple of posts (1 and 2) on his excellent blog Nynaeve. David LeBlanc has a subtly different take on this issue (see here), but the reality is that both David and Ken agree more than they disagree on this issue. If your code runs as a service, you MUST assume that you’re running with elevated privileges. This applies to all data read – rule #2 (requiring an attacker to run code) does not apply when you cross privilege levels, because the attacker could be running code under a low privilege account to enable an elevation of privilege attack.
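    If your service genuinely never needs to impersonate its callers, one defense-in-depth option is to strip the impersonation privilege out of your own token at startup. The sketch below (C++, Win32, link with advapi32.lib) shows the idea - and it IS only a sketch: many services can't make that assumption, and removing the privilege doesn't remove the need to threat model the cross-privilege data flows.

        // Sketch: permanently remove SeImpersonatePrivilege from the current
        // process token.  Assumes the service never needs to impersonate its
        // callers; if it does, this isn't an option and the cross-privilege
        // interfaces need to be hardened instead.
        #include <windows.h>

        bool RemoveImpersonatePrivilege()
        {
            HANDLE token = nullptr;
            if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
            {
                return false;
            }

            TOKEN_PRIVILEGES tp = {};
            tp.PrivilegeCount = 1;
            tp.Privileges[0].Attributes = SE_PRIVILEGE_REMOVED;   // remove, not merely disable
            if (!LookupPrivilegeValue(nullptr, SE_IMPERSONATE_NAME, &tp.Privileges[0].Luid))
            {
                CloseHandle(token);
                return false;
            }

            bool removed = AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr) &&
                           GetLastError() == ERROR_SUCCESS;
            CloseHandle(token);
            return removed;
        }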

    In addition, if your component has a use scenario that involves running the component elevated, you also need to consider that in your threat modeling.

    If your code invalidates assumptions made by other entities, you need to be concerned

    The reason that the firefoxurl problem listed above was such a big deal was that the firefoxurl handler invalidated some of the assumptions made by the other components of Firefox. When the Firefox team threat modeled firefox, they made the assumption that Firefox would only be invoked in the context of the user.  As such it was totally reasonable to add support for executing scripts passed in on the command line (see rule of thumb #1).  However, when they threat modeled the firefoxurl: URI handler implementation, they didn’t consider that they had now introduced a trust boundary between the invoker of Firefox and the Firefox executable.  

    So you need to be aware of the assumptions of all of your related components and ensure that you’re not changing those assumptions. If you are, you need to ensure that your change doesn’t introduce issues.

    If your code retrieves information from the internet, you need to be concerned

    The internet is a totally untrusted resource (no duh). But this has profound consequences when threat modeling. All data received from the Internet MUST be treated as totally untrusted and must be subject to strict validation.

    If your code deals with data that came from a file, then you need to be concerned.

    In the previous section, I talked about data received over the internet. Microsoft has issued several bulletins this year for vulnerabilities that required an attacker to trick a user into downloading a specially crafted file over the internet; as a consequence, ANY file data must be treated as potentially malicious. For example, MS07-047 (a vulnerability in WMP) required that the attacker force the user to view a specially crafted WMP skin. The consequence of this is that ANY file parsed by our code MUST be treated as coming from a lower level of trust.

    Every single file parser MUST treat its input as totally untrusted – MS07-047 is only one example of an MSRC vulnerability; there have been others. Any code that reads data from a file MUST validate the contents. It also means that we need to work to ensure that we have fuzzing in place to validate our mitigations.

    And the problem goes beyond file parsers directly. Any data that can possibly be read from a file cannot be trusted. <A senior developer in our division> brings up a codec as a perfect example. The file parser parses the container and determines that the container isn't corrupted. It then extracts the format information and finds the appropriate codec for that format. The parser then loads the codec and hands the format information and file data to the codec.

    The only thing that the codec knows is that the format information that’s been passed in is valid. That’s it. Beyond the fact that the format information is of an appropriate size and has a verifiable type, the codec can make no assumptions about the contents of the format information, and it can make no assumptions about the file data. Even though the codec doesn’t explicitly parse the file, it’s still dealing with untrusted data read from the file.
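    The cheapest way to find the validation checks you forgot (in the parser or in the codec behind it) is to feed the code deliberately malformed input. Here's a toy sketch of the idea in C++ - ParseWavFile is a hypothetical stand-in for whatever code actually consumes the untrusted bytes, and a real fuzzing effort would use purpose-built tools and coverage feedback rather than a loop like this:

        // Toy mutation fuzzer: flip random bytes in a seed file and hand the
        // result to the parser under test.  A crash or assertion indicates a
        // missing validation check.
        #include <cstdio>
        #include <cstdlib>
        #include <ctime>
        #include <vector>

        // Replace this stub with the real parser being tested.
        bool ParseWavFile(const unsigned char* data, size_t length)
        {
            (void)data; (void)length;
            return true;
        }

        int main(int argc, char** argv)
        {
            if (argc < 2) { std::fprintf(stderr, "usage: fuzz <seed.wav>\n"); return 1; }

            std::FILE* f = std::fopen(argv[1], "rb");
            if (!f) { std::perror("fopen"); return 1; }
            std::fseek(f, 0, SEEK_END);
            long size = std::ftell(f);
            std::fseek(f, 0, SEEK_SET);

            std::vector<unsigned char> seed;
            if (size > 0)
            {
                seed.resize(static_cast<size_t>(size));
                seed.resize(std::fread(seed.data(), 1, seed.size(), f));
            }
            std::fclose(f);
            if (seed.empty()) { std::fprintf(stderr, "empty seed file\n"); return 1; }

            std::srand(static_cast<unsigned>(std::time(nullptr)));
            for (int iteration = 0; iteration < 100000; ++iteration)
            {
                std::vector<unsigned char> mutated(seed);
                size_t offset = static_cast<size_t>(std::rand()) % mutated.size();
                mutated[offset] ^= static_cast<unsigned char>(std::rand() & 0xFF);
                ParseWavFile(mutated.data(), mutated.size());
            }
            return 0;
        }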

    If your code is marked as “Safe For Scripting” or “Safe for Initialization”, you need to be REALLY concerned.

    If your code is marked as “Safe For Scripting” (or if your code can be invoked from a control that is marked as Safe For Scripting), it means that your code can be executed in the context of a web browser, and that in turn means that the bad guys are going to go after your code. There have been way too many MSRC bulletins about issues with ActiveX controls.

    Please note that some of the issues with ActiveX controls can be quite subtle. For instance, in MS02-032 we had to issue an MSRC fix because one of the APIs exposed by the WMP OCX returned a different error code depending on whether a path passed into the API was a file or a directory – that constituted an Information Disclosure vulnerability and an attacker could use it to map out the contents of the user’s hard disk.

    In conclusion

    Vista raised the security bar for attackers significantly. As Vista adoption spreads, attackers will be forced to find new ways to exploit our code. That means that it’s more and more important to ensure that we do a good job of giving them as few opportunities as possible to make life difficult for our customers.  The threat modeling process helps us understand the risks associated with our features and understand where we need to look for potential issues.

  • Larry Osterman's WebLog

    Threat Modeling Again, Threat Modeling and the firefoxurl issue.

    • 26 Comments

    Yesterday I presented my version of the diagrams for Firefox's command line handler and the IE/URLMON's URL handler.  To refresh, here they are again:

     Here's my version of Firefox's diagram:

     And my version of IE/URLMON's URL handler diagram:

     

    As  I mentioned yesterday, even though there's a trust boundary between the user and Firefox, my interpretation of the original design for the Firefox command line parsing says that this is an acceptable risk[1], since there is nothing that the user can specify via the chrome engine that they can't do from the command line.  In the threat model for the Firefox command line parsing, this assumption should be called out, since it's important.

     

    Now let's think about what happens when you add the firefoxurl URL handler to the mix.

     

    For that, you need to go to the IE/URLMON diagram.  There's a clear trust boundary between the web page and IE/URLMON.  That trust boundary applies to all of the data passed in via the URL, and all of the data should be considered "tainted".  If your URL handler is registered using the "shell" key, then IE passes the URL to the shell, which launches the program listed in the "command" verb, replacing the %1 value in the command verb with the URL specified (see this for more info)[2].  If, on the other hand, you've registered an asynchronous protocol handler, then IE/URLMON will instantiate your COM object and will give you the ability to validate the incoming URL and to change how IE/URLMON treats the URL.  Jesper discusses this in his post "Blocking the Firefox".
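    To make the "shell" registration mechanism concrete, here's a rough sketch (in C++) of what the registration looks like and what the substitution does.  The registry layout and the firefox.exe path are illustrative, and the function below is my approximation of the mechanism, not the shell's or URLMON's actual code:

        // Sketch: how a "shell"-registered URL protocol turns a URL into a
        // command line.  The registration (illustrative values) looks like:
        //   HKEY_CLASSES_ROOT\firefoxurl
        //       (Default)    = "Firefox URL"
        //       URL Protocol = ""
        //       shell\open\command
        //           (Default) = "C:\...\firefox.exe" -url "%1" -requestPending
        // The full URL supplied by the web page is substituted for %1 more or
        // less verbatim, which is why the handler ends up with tainted data on
        // its command line.
        #include <windows.h>
        #include <iostream>
        #include <string>

        std::wstring BuildCommandLine(const std::wstring& url)
        {
            wchar_t commandTemplate[1024] = L"";
            DWORD cb = sizeof(commandTemplate);
            // Mirrors what the shell does; this is not the shell's actual code.
            if (RegGetValueW(HKEY_CLASSES_ROOT, L"firefoxurl\\shell\\open\\command",
                             nullptr, RRF_RT_REG_SZ, nullptr,
                             commandTemplate, &cb) != ERROR_SUCCESS)
            {
                return L"";
            }

            std::wstring commandLine(commandTemplate);
            size_t placeholder = commandLine.find(L"%1");
            if (placeholder != std::wstring::npos)
            {
                commandLine.replace(placeholder, 2, url);   // untrusted data goes straight in
            }
            return commandLine;
        }

        int main()
        {
            // A hostile web page controls everything after the scheme name.
            std::wcout << BuildCommandLine(L"firefoxurl:attacker-controlled-data") << std::endl;
            return 0;
        }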

    The key thing to consider is that if you use the "shell" registration mechanism (which is significantly easier than using the asynchronous protocol handler mechanism), IE/URLMON is going to pass that tainted data to your application on the command line.

     

    Since the firefoxurl URL handler used the "shell" registration mechanism, it means that the URL from the internet is going to be passed to Firefox's command line handler.  But this violates the assumption that the Firefox command line handler made - they assume that their command line was authored with the same level of trust as the user invoking firefox.  And that's a problem, because now you have a mechanism for any internet site to execute code on the browser client with the privileges of the user.

     

    How would a complete threat model have shown that there was an issue?  The Firefox command line threat model showed that there was a potential issue, and the threat analysis of that potential issue showed that the threat was an accepted risk.

    When the firefoxurl feature was added, the threat model analysis of that feature should have looked similar to the IE/URLMON threat model I called out above - IE/URLMON took the URL from the internet, passed it through the shell and handed it to Firefox (URL Handler above).  

     

    So how would threat modeling have helped to find the bug?

    There are two possible things that could have happened next.  When the firefoxurl handler team[3] analyzed their threat model, they would have realized that they were passing high risk data (all data from the internet should be treated as untrusted) to the command line of the Firefox application.  That should have immediately raised red flags because of the risk associated with the data.

    At this point in their analysis, the firefoxurl handler team needed to confirm that their behavior was safe, which they could do either by asking someone on the Firefox command line handling team or by consulting the Firefox command line handling threat model (or both).  At that point, they would have discovered the important assumption I mentioned above, and they would have realized that they had a problem that needed to be mitigated (the actual form of the mitigation doesn't matter - I believe that the Firefox command line handling team removed their assumption, but I honestly don't know (and it doesn't matter for the purposes of this discussion)).

     

    As I mentioned in my previous post, I love this example because it dramatically shows how threat modeling can help solve real world security issues.

    I don't believe that anything in my analysis above is contrived - the issues I called out above directly follow from the threat modeling process I've outlined in the earlier posts. 

    I've been involved in the threat modeling process here at Microsoft for quite some time now, and I've seen the threat model analysis process find this kind of issue again and again.  The threat model either exposes areas where a team needs to be concerned about their inputs or it forces teams to ask questions about their assumptions, which in turn exposes potential issues like this one (or confirms that in fact there is no issue that needs to be mitigated).

     

    Next: Threat Modeling Rules of thumb.

     

    [1] Obviously, I'm not a contributor to Firefox and as such any and all of my comments about Firefox's design and architecture are at best informed guesses.  I'd love it if someone who works on Firefox or has contributed to the security analysis of Firefox would correct any mistakes I'm making here.

    [2] Apparently IE/URLMON doesn't URLEncode the string that it hands to the URL handler - I don't know why (probably for compatibility reasons), but that isn't actually relevant to this discussion (especially since all versions of Firefox before 2.0.0.6 seem to have had the same behavior as IE).  Even if IE had URL encoded the URL before handing it to the handler, Firefox is still being handed untrusted input which violates a critical assumption made by the Firefox command line handler developers.

    [3] Btw, I'm using the term "team" loosely.  It's entirely possible that the same individual did both the Firefox command line handling work AND the firefoxurl protocol handler - it doesn't actually matter.

  • Larry Osterman's WebLog

    Threat Modeling Again, Threat Modeling in Practice

    • 11 Comments

    I've been writing a LOT about threat modeling recently but one of the things I haven't talked about is the practical value of the threat modeling process.

    Here at Microsoft, we've totally drunk the threat modeling Kool-Aid.  One of Adam Shostack's papers on threat modeling has the following quote from Michael Howard:

    "If we had our hands tied behind our backs (we don't) and could do only one thing to improve software security... we would do threat modeling every day of the week."

    I want to talk about a real-world example of a security problem where threat modeling would have hopefully avoided a potential problem.

    I happen to love this problem, because it does a really good job of showing how the evolution of complicated systems can introduce unexpected security problems.  The particular issue I'm talking about is known as CVE-2007-3670.  I seriously recommend people go to the CVE site and read the references to the problem; they provide an excellent background on the problem.

    CVE-2007-3670 describes a vulnerability in the Mozilla Firefox browser that uses Internet Explorer as an exploit vector. There's been a TON written about this particular issue (see the references on the CVE page for most of the discussion), so I don't want to go into the pros and cons of whether or not this is an IE or a Firefox bug.  I only want to discuss this particular issue from a threat modeling standpoint.

    There are four components involved in this vulnerability, each with their own threat model:

    • The Firefox browser.
    • Internet Explorer.
    • The "firefoxurl:" URI registration.
    • The Windows Shell (explorer).

    Each of the components in question plays a part in the vulnerability.  Let's take them in turn.

    • The Firefox browser provides a command line argument "-chrome" which allows you to load the chrome specified at a particular location.
    • Internet Explorer provides an extensibility mechanism which allows 3rd parties to register specific URI handlers.
    • The "firefoxurl:" URL registration, which uses the simplest form of URL handler registration which simply instructs the shell to execute "<firefoxpath>\firefox.exe -url "%1" -requestPending".  Apparently this was added to Firefox to allow web site authors to force the user to use Firefox when viewing a link.  I believe the "-url" switch (which isn't included in the list of firefox command line arguments above) instructs firefox to treat the contents of %1 as a URL.
    • The Windows Shell passes the command line on to the Firefox application.

    I'm going to attempt to draw the relevant part of the diagrams for IE and Firefox.  These are just my interpretations of what is happening; it's entirely possible that the dataflow is different in real life.

    Firefox:

    [diagram]

    This diagram shows the flow of control from the user into Firefox (remember: I'm JUST diagramming a small part of the actual component diagram).  One of the things that makes Firefox's chrome engine so attractive is that it's easy to modify the chrome because the Firefox chrome is simply javascript.  Since that javascript runs with the same privileges as the current user, this isn't a big deal - there's no opportunity for elevation of privilege there.  But there is one important thing to remember here: Firefox has a security assumption that the -chrome command line switch is only provided by the user - because it executes javascript with full trust, it effectively accepts executable code from the command line.

     

    Internet Explorer:

    [diagram]

    This diagram describes my interpretation of how IE (actually urlmon.dll in this case) handles incoming URLs.  It's just my interpretation, based on the information contained here (at a minimum, I suspect it's missing some trust boundaries).  The web page hands IE a URL; IE looks the URL up in the registry and retrieves a URL handler.  Depending on how the URL handler was registered, IE either invokes the shell on the path portion of the URL, or, if the URL handler was registered as an async protocol handler, it hands the URL to the async protocol handler.

    I'm not going to do a diagram for the firefoxurl handler or the shell, since they're either not interesting or are covered in the diagram above - the firefoxurl handler is registered as being handled by the shell.  In that case, Internet Explorer will pass the URL into the shell, which will happily pass it on to the URL handler (which, in this case, is Firefox).

     

    That's a lot of text and pictures, tomorrow I'll discuss what I think went wrong and how using threat modeling could have avoided the issue.  I also want to look at BOTH of the threat models and see what they indicate.

     

    Obviously, the contents of this post are my own opinion and in no way reflect the opinions of Microsoft.

  • Larry Osterman's WebLog

    Threat Modeling Again, Presenting the PlaySound Threat Model

    • 7 Comments

    It's been a long path, but we're finally at the point where I can finally present the threat model for PlaySound.  None of the information in this post is new, all the information is pulled from previous posts.

     ----------------

    PlaySound Threat Model

    The PlaySound API is a high level multimedia API intended to render system sounds ("dings").  It has three major modes of operation:

    • It can play the contents of a .WAV file passed in as a parameter to the API.
    • It can play the contents of a Win32 resource or other memory location passed in as a parameter to the API.
    • It can play the contents of a .WAV file referenced by an alias.  If this mode is chosen, it reads the filename from the registry under HKCU\AppEvents.

    For more information on the PlaySound API and its options, see the MSDN documentation for PlaySound.
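    For concreteness, here's roughly what each mode looks like from the caller's side (the file name, resource ID and alias below are examples only):

        // Sketch: the three PlaySound modes described above.  The file name,
        // resource ID and alias are illustrative.
        #include <windows.h>
        #include <mmsystem.h>
        #pragma comment(lib, "winmm.lib")

        #define IDR_WAVE_EXAMPLE 101   // illustrative: a WAVE resource in the .rc file

        void PlaySoundModes(HINSTANCE hInstance)
        {
            // 1. Play a .WAV file by file name.
            PlaySoundW(L"C:\\Windows\\Media\\ding.wav", nullptr, SND_FILENAME | SND_SYNC);

            // 2. Play a WAVE resource from the current module (SND_MEMORY works
            //    similarly for an arbitrary in-memory image of a .WAV file).
            PlaySoundW(MAKEINTRESOURCEW(IDR_WAVE_EXAMPLE), hInstance, SND_RESOURCE | SND_SYNC);

            // 3. Play a registered alias; the file name is looked up under HKCU\AppEvents.
            PlaySoundW(L"SystemAsterisk", nullptr, SND_ALIAS | SND_ASYNC);
        }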

    PlaySound Diagram

    The PlaySound API's data flow can be represented as follows.

    PlaySound Elements

     

    1. Application: External Interactor - The application which calls the PlaySound API.
    2. PlaySound: Process - The code that represents the PlaySound API
    3. WAV file: Data Store - The WAV file to be played, on disk or in memory
    4. HKCU Sound Aliases: Data Store - The Windows Registry under HKCU\AppEvents which maps from aliases to WAV filenames
    5. Audio Playback APIs: External Interactor - The audio playback APIs used for PlaySound.  This could be MediaFoundation, waveOutXxxx, DirectShow, or any other audio rendering system.
    6. PlaySound Command (Application->PlaySound): DataFlow (Crosses Threat Boundary) - The data transmitted in this data flow represents the filename to play, the alias to look up or the resource ID in the current executable to play.
    7. WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - The data transmitted in this data flow represents the WAVEFORMATEX structure contained in the WAV file being played.
    8. WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - The data transmitted in this data flow represents the actual audio samples contained in the WAV file being played.
    9. WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - The data transmitted in this data flow represents the contents of the HKCU\AppEvents\Schemes\.Default\<sound>\.Current[1]
    10. WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - The data transmitted in this data flow represents both the WAVEFORMATEX structure read from the WAV file and the audio samples read from the file.

    PlaySound Threat Analysis

    Data Flows

    WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Tampering
    WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Information Disclosure
    WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Denial of Service

    Because the data flows are all within a single process boundary, there are no meaningful threats to those dataflows - the Win32 process model protects against those threats.

    External Interactors

    Application: External Interactor - Spoofing

    It doesn't matter which application called the PlaySound API, so we don't care about spoofing threats to the application calling PlaySound.

    Application: External Interactor - Repudiation

    There are no requirements that the PlaySound API protect against Repudiation attacks.

    Audio Playback APIs: External Interactor - Spoofing

    The system default APIs are protected by Windows Resource Protection so they cannot be replaced.  If an attacker does successfully inject his logic (by overriding the COM registration for the audio APIs or by some other means), the attacker is running at the same privilege level as the user, so he can do nothing that the user can't do.

    Audio Playback APIs: External Interactor - Repudiation

    There are no requirements that the PlaySound API protect against Repudiation attacks.

    Data Stores

    Since the data stores involved in the PlaySound API are under the control of the user, we must protect against threats to those data stores.

    WAV file: Data Store - Tampering

    An attacker can modify the contents of the WAV file data store.  To mitigate this attack, we will validate the WAVE header information; we're not going to check the actual WAV data, since it's just raw audio samples.  Bug #XXXX filed to validate this mitigation.
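    As a sketch, the header validation might look something like the following - the specific limits are illustrative, and this is not a description of the checks the actual implementation performs:

        // Sketch: illustrative sanity checks on a WAVEFORMATEX read from an
        // untrusted .WAV file.  The limits are examples only, and real code
        // would also handle the 16-byte PCMWAVEFORMAT variant.
        #include <windows.h>
        #include <mmsystem.h>

        bool IsPlausibleWaveFormat(const WAVEFORMATEX* wfx, size_t availableBytes)
        {
            // The structure itself must fit inside the bytes we actually read.
            if (wfx == nullptr || availableBytes < sizeof(WAVEFORMATEX))
            {
                return false;
            }

            // cbSize counts extra bytes that follow the structure; don't let
            // anyone walk past the end of the buffer looking for them.
            if (availableBytes - sizeof(WAVEFORMATEX) < wfx->cbSize)
            {
                return false;
            }

            // Reject obviously nonsensical formats (illustrative bounds).
            if (wfx->nChannels == 0 || wfx->nChannels > 8)                  return false;
            if (wfx->nSamplesPerSec == 0 || wfx->nSamplesPerSec > 384000)   return false;
            if (wfx->wBitsPerSample == 0 || wfx->wBitsPerSample > 64)       return false;

            // For PCM data, the derived fields must be internally consistent.
            if (wfx->wFormatTag == WAVE_FORMAT_PCM)
            {
                const DWORD align = wfx->nChannels * (wfx->wBitsPerSample / 8);
                if (wfx->nBlockAlign != align)                              return false;
                if (wfx->nAvgBytesPerSec != wfx->nSamplesPerSec * align)    return false;
            }
            return true;
        }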

    WAV file: Data Store - Information Disclosure

    The WAV file is protected by NT's filesystem ACLs which prevent unauthorized users from reading the contents of the file.

    WAV file: Data Store - Repudiation

    Repudiation threats don't apply to this store.

    WAV file: Data Store - Denial of Service

    The PlaySound API will check for errors when reading from the store and will return an error indication to its caller (if possible). When PlaySound is running in the "resource or memory location" mode and the SND_ASYNC flag is specified, the caller may unmap the virtual memory associated with the WAV file.  In that case, the PlaySound API may access violate while rendering the contents of the file[2].  Bug #XXXX filed to validate this mitigation.

    HKCU Sound Aliases: Data Store - Tampering

    An attacker can modify the contents of the sound aliases registry key.  To mitigate this attack, we will validate the contents of the key. Bug #XXXX filed to validate this mitigation.
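    A sketch of what reading the alias defensively might look like (the key path matches the shape described in footnote [1] below, and both the event name and the checks are illustrative):

        // Sketch: defensively read a sound alias file name from HKCU\AppEvents.
        // The checks here are illustrative, not the real validation.
        #include <windows.h>
        #include <string>

        bool ReadAliasFileName(const wchar_t* aliasKeyPath, std::wstring* fileName)
        {
            wchar_t buffer[MAX_PATH] = L"";
            DWORD cb = sizeof(buffer);

            // RRF_RT_REG_SZ rejects other value types (REG_EXPAND_SZ, REG_BINARY, ...)
            // and RegGetValue guarantees the result is null terminated and fits.
            if (RegGetValueW(HKEY_CURRENT_USER, aliasKeyPath,
                             nullptr /* default value */, RRF_RT_REG_SZ,
                             nullptr, buffer, &cb) != ERROR_SUCCESS ||
                buffer[0] == L'\0')
            {
                return false;   // missing, wrong type, or too long: treat as "no sound"
            }

            *fileName = buffer;
            return true;
        }

        // Usage - the alias name is a placeholder:
        //   std::wstring file;
        //   ReadAliasFileName(L"AppEvents\\Schemes\\.Default\\SystemAsterisk\\.Current", &file);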

    HKCU Sound Aliases: Data Store - Information Disclosure

    The aliases key is protected by the registry ACLs which prevent unauthorized users from reading the contents  of the key.

    HKCU Sound Aliases: Data Store - Repudiation

    Repudiation threats don't apply to this store.

    HKCU Sound Aliases: Data Store - Denial of service

    The PlaySound API will check for errors when reading from the store and will return an error indication to its caller (if possible). Bug #XXXX filed to validate this mitigation.

    Processes

    PlaySound: Process - Spoofing

    Since PlaySound is the component we're threat modeling, spoofing threats don't apply.

    PlaySound: Process - Tampering

    The only tampering that can happen to the PlaySound process directly involves modifying the PlaySound binary on disk; if the user has the rights to do that, we can't stop them.  For PlaySound, the file is protected by Windows Resource Protection, which should protect it from tampering.

    PlaySound: Process - Repudiation

    We don't care about repudiation threats to the PlaySound API.

    PlaySound: Process - Information Disclosure

    The NT process model prevents any unauthorized entity from reading the process memory associated with the Win32 process playing Audio, so this threat is out of scope for this component.

    PlaySound: Process - Denial of Service

    Again, the NT process model prevents unauthorized entities from crashing or interfering with the process, so this threat is out of scope for this component.

    PlaySound: Process - Elevation of Privilege

    The PlaySound API runs at the same privilege level as the application calling PlaySound, so it is not subject to EoP threats.

    PlaySound: Process - "PlaySound Command" crosses trust boundary: Elevation of Privilege/Denial of Service / Tampering

    The data transmitted by the incoming "PlaySound Command" data flow comes from an untrusted source.  Thus the PlaySound API will validate the data contained in that dataflow for "reasonableness" (mostly checking to ensure that the string passed in doesn't cause a buffer overflow).  Bug #XXXX filed to validate this mitigation.
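    A sketch of what that kind of "reasonableness" check might look like - illustrative only, and it deliberately ignores the less common flag combinations (SND_ALIAS_ID, SND_APPLICATION, and so on):

        // Sketch: illustrative "reasonableness" check on the PlaySound
        // parameters.  This is not the real implementation.
        #include <windows.h>
        #include <mmsystem.h>
        #include <strsafe.h>

        bool IsReasonablePlaySoundParameter(LPCWSTR pszSound, DWORD fdwSound)
        {
            if (pszSound == nullptr)
            {
                return true;   // NULL is documented to mean "stop any currently playing sound"
            }

            // Only the filename and alias modes interpret pszSound as a string;
            // the memory/resource modes are covered by the WAVE header validation.
            if ((fdwSound & (SND_FILENAME | SND_ALIAS)) == 0)
            {
                return true;
            }

            // Confirm the string is terminated within a sane bound before anything
            // downstream copies it into a fixed-size buffer.
            size_t length = 0;
            return SUCCEEDED(StringCchLengthW(pszSound, MAX_PATH, &length)) && length > 0;
        }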

    PlaySound: Process - "WAV file Data" data flow crosses trust boundary: Information Disclosure

    It's possible that the contents of the WAV file might be private, so if some attacker can somehow "snoop" the contents of the data they might be able to learn information they shouldn't.  Another way that this "I" attack shows up is described in CVE-2007-0675 and here.  So how do we mitigate that threat (and the corresponding threat associated with someone spoofing the audio APIs)?

    The risk associated with CVE-2007-0675 is out-of-scope for this component (if the threat is to be mitigated, it's more appropriate to handle that either in the speech recognition engine or the audio stack), so the only risk is that we might be handing the audio stack data that can be misused. 

    Since the API's entire purpose is to play the contents of the WAV file, this particular threat is considered to be an acceptable risk.

    ---

    [1] The actual path is slightly more complicated because of the SND_APPLICATION flag, but that doesn't materially change the threat model.

    [2] The DOS issues associated with this behavior are accepted risks.

    --------------

    Next: Let's look at a slightly more interesting case where threat modeling exposes an issue.

  • Larry Osterman's WebLog

    Threat Modeling Again, Pulling the threat model together

    • 9 Comments

    So I've been writing a LOT of posts about the threat modeling process and how one goes about doing the threat model analysis for a component.

    The one thing I've not talked about is what a threat model actually is.

    A threat model is a specification, just like your functional specification (a Program Management spec that defines the functional requirements of your component), your design specification (a development spec that defines the architecture that is required to implement the functional specification), and your test plan (a test spec that defines how you plan on ensuring that the design as implemented meets the requirements of the functional specification).

    Just like the functional, design and test specs, a threat model is a living document - as you change the design, you need to go back and update your threat model to see if any new threats have arisen since you started.

    So what goes into the threat model document?

    • Obviously you need the diagram and an enumeration and description of the elements in your diagram. 
    • You also need to include your threat analysis, since that's the core of the threat model.
    • For each mitigated threat that you call out in the threat analysis, you should include the bug # associated with the mitigation
    • You should probably have a one or two paragraph description of your component and what it does (it helps an outsider to understand your diagram); similarly, a list of contacts for questions, etc. is also quite useful.

    The third item I called out there reflects an important point about threat modeling that's often lost.

    Every time your threat model indicates that you have a need to mitigate a particular threat, you need to file at least one bug and potentially two.  The first bug goes to the developer to ensure that the developer implements the mitigation called out in the threat model, and the second bug goes to a tester to ensure that the tester either (a) writes tests to verify the mitigation or (b) runs existing tests to ensure that the mitigation is in place.

    This last bit is really important.  If you're not going to follow through on the process and ensure that the threats that you identified are mitigated, then you're just wasting your time doing the threat model - except as an intellectual exercise, it won't actually help you improve the security of your product.

     

    Next: Presenting the PlaySound threat model!

  • Larry Osterman's WebLog

    Threat Modeling Again, Threat Modeling PlaySound

    • 7 Comments

    Finally it's time to think about threat modeling the PlaySound API.

    Let's go back to the DFD that I included in my earlier post, since everything flows from the DFD.

     

    This dataflow diagram contains a number of elements; they are:

    1. Application: External Interactor
    2. PlaySound: Process
    3. WAV file: Data Store
    4. HKCU Sound Aliases: Data Store
    5. Audio Playback APIs: External Interactor
    6. PlaySound Command (Application->PlaySound): DataFlow (Crosses Threat Boundary)
    7. WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary)
    8. WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary)
    9. WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary)
    10. WAV file Data (PlaySound -> Audio Playback APIs): DataFlow

    Now that we've enumerated the elements, we apply the STRIDE/Element methodology, which allows us to enumerate the threats that this component faces:

    1.  Application: External Interactor - Spoofing
    2.  Application: External Interactor - Repudiation
    3. PlaySound: Process - Spoofing
    4. PlaySound: Process - Tampering
    5. PlaySound: Process - Repudiation
    6. PlaySound: Process - Information Disclosure
    7. PlaySound: Process - Denial of Service
    8. PlaySound: Process - Elevation of Privilege
    9. WAV file: Data Store - Tampering
    10. WAV file: Data Store - Information Disclosure
    11. WAV file: Data Store - Repudiation
    12. WAV file: Data Store - Denial of Service
    13. HKCU Sound Aliases: Data Store - Tampering
    14. HKCU Sound Aliases: Data Store - Information Disclosure
    15. HKCU Sound Aliases: Data Store - Repudiation
    16. HKCU Sound Aliases: Data Store - Denial of service
    17. Audio Playback APIs: External Interactor - Spoofing
    18. Audio Playback APIs: External Interactor - Repudiation
    19. PlaySound Command (Application->PlaySound): DataFlow (Crosses Threat Boundary) - Tampering
    20. PlaySound Command (Application->PlaySound): DataFlow (Crosses Threat Boundary) - Information Disclosure
    21. PlaySound Command (Application->PlaySound): DataFlow (Crosses Threat Boundary) - Denial of Service
    22. WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    23. WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    24. WAVE Header (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    25. WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    26. WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    27. WAV file Data (WAV file-> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    28. WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Tampering
    29. WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Information Disclosure
    30. WAV filename (HKCU Sound Aliases -> PlaySound) : DataFlow (Crosses Threat Boundary) - Denial of Service
    31. WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Tampering
    32. WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Information Disclosure
    33. WAV file Data (PlaySound -> Audio Playback APIs): DataFlow - Denial of Service

     Phew.  You mean that the PlaySound API can be attacked in 33 different ways?  That's unbelievable.

    It's true.  There ARE 33 ways that you can attack the PlaySound API; however, many of them are identical, and some of them are irrelevant.  That's the challenge of the next part of the process, which is the analysis phase.
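    In fact, the enumeration is mechanical enough that you could generate it with a trivial program.  Here's a sketch that applies the STRIDE-per-element mapping from the earlier posts (external interactors: S and R; processes: all six; data stores: T, R, I and D; data flows: T, I and D) to the ten elements above - and, sure enough, it prints 33 threats:

        // Sketch: mechanically generate the threat list from the element types
        // using the STRIDE-per-element mapping.  The element list is the
        // PlaySound diagram above.
        #include <cstdio>
        #include <string>
        #include <vector>

        enum Stride { S, T, R, I, D, E };
        static const char* kStrideNames[] = {
            "Spoofing", "Tampering", "Repudiation",
            "Information Disclosure", "Denial of Service", "Elevation of Privilege"
        };

        struct Element { std::string name; std::vector<Stride> categories; };

        int main()
        {
            const std::vector<Stride> interactor = { S, R };
            const std::vector<Stride> process    = { S, T, R, I, D, E };
            const std::vector<Stride> dataStore  = { T, R, I, D };
            const std::vector<Stride> dataFlow   = { T, I, D };

            const std::vector<Element> elements = {
                { "Application (external interactor)",          interactor },
                { "PlaySound (process)",                        process    },
                { "WAV file (data store)",                      dataStore  },
                { "HKCU Sound Aliases (data store)",            dataStore  },
                { "Audio Playback APIs (external interactor)",  interactor },
                { "PlaySound Command (data flow)",              dataFlow   },
                { "WAVE Header (data flow)",                    dataFlow   },
                { "WAV file Data, file->PlaySound (data flow)", dataFlow   },
                { "WAV filename (data flow)",                   dataFlow   },
                { "WAV file Data, PlaySound->APIs (data flow)", dataFlow   },
            };

            int count = 0;
            for (const Element& e : elements)
                for (Stride s : e.categories)
                    std::printf("%2d. %s - %s\n", ++count, e.name.c_str(), kStrideNames[s]);
            // Prints 33 threats for the ten elements above.
            return 0;
        }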

    As I mentioned in the first STRIDE-per-element post, STRIDE-per-element is a framework for analysis.  That's where common sense and your understanding of the system come into play.

    And that's the next part in the series: Analyzing the threats enumerated by STRIDE-per-element.  This is the point at which all the previous articles come together.
