Larry Osterman's WebLog

Confessions of an Old Fogey

Why Threat Model?



We're currently in the middle of the most recent round of reviews of the threat models we did for all the new features in Vista (these happen periodically as a part of the SDL).

As usually happens in these kinds of things, I sometimes get reflective, and I've spent some time thinking about the reasons WHY we generate threat models for all the components in the system (and all the new features that are added to the system).

Way back in college, in my software engineering class, I was taught that there were two major parts to the design of a program.

You started with the functional specification.  This was the what of the program: it described the reasons for the program, what it had to do, and what it didn't have to do :).

Next came the design specification.  This was the how of the program: it described the data structures and algorithms that were going to be used to create the program, the file formats it would write, etc.

We didn't have to worry about testing our code, because we all wrote perfect code :).  More seriously, none of the programs we worked on were complicated enough to justify a separate testing organization - the developers sufficed as the testers.

After coming to Microsoft, and (for the first time) having to deal with a professional testing organization (and components that were REALLY complicated), I learned about the 3rd major part, the test specification.

The test specification described how to test the program: what aspects were going to be tested, what were not, and it defined the release criteria for the program.

It turns out that a threat model is the fourth major specification: it's the one that tells you how the bad guys are going to BREAK your program.  The threat model is a key part of what we call SD3 - Secure by Design, Secure by Default, and Secure in Deployment.  The threat model is a large part of how you ensure the "Design" part: it forces you to analyze the components of your program to see how they will react to an attacker.

Threat modeling is an invaluable tool because it forces you to consider what the bad guys are going to do to use your program to break into the system.  And don't be fooled, the bad guys ARE going to use your program to break into the system.

By forcing you to consider your program's design from the point of view of the attacker, it forces you to consider a larger set of failure cases than you'd normally consider.  How do you protect from a bad guy replacing one of your DLLs?  How do you protect against the bad guy snooping on the communications between your components?  How do you handle the bad guy corrupting a registry key or reading the contents of your files?

Maybe you don't care about those threats (they might not be relevant, it's entirely possible).  But for every irrelevant threat, there's another one that's going to cause you major amounts of grief down the line.  And it's way better to figure that out BEFORE the bad guys do :).

 

Now threat modeling doesn't help you with your code; it doesn't prevent buffer overflows or integer overflows, or heap underruns, or any of the other myriad ways that code can go wrong.  But it does help you know the areas you need to worry about.  It may help you realize that you need to encrypt that communication, or set the ACLs on a file to prevent the bad guys from getting at its contents, etc.
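To make this concrete, here's a minimal sketch of what one entry in a threat model might capture: a threat against a component, its STRIDE classification, and the chosen mitigation. The component names and the record layout are made up for illustration; this is not the actual SDL threat-model format.

```python
# Illustrative sketch of threat-model entries: each threat against a
# component gets a STRIDE category letter and a mitigation.
from dataclasses import dataclass

@dataclass
class Threat:
    component: str
    description: str
    stride: str        # one of S, T, R, I, D, E
    mitigation: str    # "We don't care" is a valid mitigation too

threats = [
    Threat("IPC channel", "Bad guy snoops cross-component communication",
           "I", "Encrypt the channel"),
    Threat("Settings key", "Bad guy corrupts a registry key",
           "T", "ACL the key; validate contents on read"),
    Threat("Plugin loader", "Bad guy replaces one of our DLLs",
           "E", "Load only from ACL'd directories"),
]

# A quick review pass: every threat must name some mitigation,
# even if that mitigation is "We don't care".
unmitigated = [t for t in threats if not t.mitigation]
print(f"{len(threats)} threats modeled, {len(unmitigated)} unmitigated")
# → 3 threats modeled, 0 unmitigated
```

The point of writing the entries down (in whatever format) is exactly the review step at the end: an irrelevant threat gets an explicit "we don't care", and the relevant ones get caught before the bad guys find them.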

 

Btw, before people comment on it, yes, I know I wrote a similar post last year :).  I had another aha moment related to it and figured I'd post again.  Tomorrow, I want to go back and reflect on those early threat model posts :)

 

  • Things have moved on somewhat in CS courses and now they teach:

    - Specification
    - Design
    - Testing
    - Implementation
    - Evaluation
    (in that order)

As the 'steps' of a software project. Testing definitely isn't left in the dust; it is a well-graded area.

    They still don't teach much(anything?) about security / secure coding / best practises / testing for security purposes. At least not on my degree (and I'm a 2nd year).

    But then again academia always has run a little behind the industry...
  • The original version of this post included Implementation and Evaluation, but I removed them because Implementation and Evaluation aren't documentation.

    I know there's a HUGE issue about teaching security in college, Michael Howard and Microsoft have been working HARD to get schools to adopt a computer security course, but for whatever reason, they don't.

    Personally I think it's a huge shame.
  • At Leiden University we do have a "Seminar Security", but it deals more with cryptography and cipher theory than with secure coding practices. The one thing I did get from it is that bad guys will go far, very far, even to the extent of spending millions of Euros on it, to break your security if they think there's a profit.
  • [quote]
    They still don't teach much(anything?) about security / secure coding / best practises / testing for security purposes. At least not on my degree (and I'm a 2nd year).
    [/quote]
    I believe it could partly be because the tutors are not experienced enough to teach it.

    One can really only teach by following the notes exactly, unless he/she has had experience being the "bad guys" himself/herself.

    Actually, I think a "complete" security course should begin with teaching "how to break things" first, so the students would get a better idea on "how to protect things from being broken".
  • I have to say that we were taught about test specs when I was at Uni, and that was over ten years ago.

    Shouldn't it actually be other people, specialists, who make the threat models?
  • Moi,
     No, the specialists should NOT make the threat models.  Who do you think understands the code better, the person writing the code or some external consultant brought in as a Tiger Team?

     The members of the team need to be taught how to write a threat model, they don't need to be specialists.
  • Do you ever send out components (or small test programs with a feature) to someone who might represent your future user base? I don't mean full-blown betas, either.  I've known some average users who can break virtually anything they touch without a lot of effort, and they actually turned out to be great people to "test" new ideas on (whether they knew I was doing it or not). :-)  Granted, they are not out to break your program/feature at all costs, but they may spot something that completely slipped your mind.
  • I'm not talking about knowing the code so much as having a better architectural view of things rather than just the component under review, knowing what holes might be open due to different/deeper knowledge of the spec., and/or knowing what threats are possible. By your argument there is no point in having testers because the coders know the code better, and we know that isn't true.
  • Moi,
     It's critical that the developer be involved in the development of the test plan, otherwise the tester won't know what to test.  

    The reality is that all the threats we've ever found fall into the 6 STRIDE categories (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege).  Each of those has a series of standard mitigations (Information disclosure is mitigated with ACLs and/or encryption, Tampering is mitigated with ACLs and/or data validation, etc).  And of course "We don't care" is a valid mitigation.  The expert can't necessarily know which of these applies.

    It's critical that someone REVIEW the threat model that really understands security.  But it's not critical that they write the original threat model.

    Also, the expert should probably attend the "Big Threat Model Brainstorming Meeting".
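The category-to-mitigation pairing described above can be sketched as a simple lookup table. The two mitigations spelled out in the comment (Information disclosure, Tampering) are kept as stated; the other four entries are common textbook mitigations filled in here as assumptions, not as the SDL's official list.

```python
# The six STRIDE categories with standard mitigations.  The entries for
# Information disclosure and Tampering come from the comment above; the
# rest are commonly cited mitigations, included here as assumptions.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "ACLs and/or data validation",
    "Repudiation": "Secure logging / auditing",
    "Information disclosure": "ACLs and/or encryption",
    "Denial of service": "Quotas, throttling, graceful degradation",
    "Elevation of privilege": "Run with least privilege; validate input",
}

for category, mitigation in STRIDE.items():
    print(f"{category}: {mitigation}")
```

A team writing its first threat model can start from a table like this and, for each threat it finds, either pick a standard mitigation or explicitly record "We don't care".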
  • Bill, not that I've ever seen.
  • Perhaps you could write an entry defining each of the STRIKE categories and even a small example? That'd would be very interesting, if you had the time. :-)
  • *STRIDE (Not 'STRIKE')

  • Agree with Manip, I'd love to see some examples of each of the categories (made up, if necessary), and mitigations. I think that would be not only interesting from a theoretical PoV, but also helpful for when we poor mortals have to think about these things.

    Not sure I agree with you regarding having testers know the code. In my opinion their job is to make sure you have implemented X correctly according to the spec. So clearly, it is critical they know the specification, but if they actually have knowledge of how you've implemented something then their test implementation can be affected by that.
  • By the way, are you aware that your "Suggestion Box" although far from full (compare with Raymond's :-) is closed, so we can't make suggestions?
