As I mentioned the other day, we had three big realizations as we've been doing more and more threat models. The first (which we've known all along) was that threats are permanent - the threats that apply to the elements of your component don't change over time (assuming that your diagram is accurate and that the design doesn't change). The second (again, something we've known all along) is that threats can be categorized according to the six STRIDE categories, and that there are standard mitigations that can be applied for each category. The final piece of the puzzle was that for each type of element, there is a limited set of STRIDE categories that apply to that element.
When you put those pieces together, it turns out that you can build a modeling process based on your diagram (remember diagrams? I started this series talking about them - there was a reason for it :)).
Since each element in your diagram has a set of threats that apply to it, you can build your threat model simply by thinking about how each threat applies. That means that every dataflow in your diagram has Tampering, Information Disclosure, and Denial of Service threats. It's entirely possible that you don't actually care about those threats, but the dataflow IS subject to them.
We call this process of considering the threats to each element in the DFD "STRIDE-per-Element" (it's not the most original name, but it IS concise and highly descriptive).
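The per-element chart that drives STRIDE-per-Element can be sketched as a small lookup table. This is a minimal sketch in Python; the element-type and function names are my own, but the mapping itself is the standard one the post describes (and it reproduces the rule above that every dataflow gets T, I, and D):

```python
# STRIDE-per-element chart: each type of DFD element is subject to a
# fixed subset of the six STRIDE threat categories.
STRIDE_PER_ELEMENT = {
    "external entity": {"Spoofing", "Repudiation"},
    "process": {
        "Spoofing", "Tampering", "Repudiation",
        "Information Disclosure", "Denial of Service",
        "Elevation of Privilege",
    },
    "data store": {
        "Tampering", "Repudiation",
        "Information Disclosure", "Denial of Service",
    },
    "data flow": {
        "Tampering", "Information Disclosure", "Denial of Service",
    },
}

def threats_for(element_type: str) -> set[str]:
    """Return the STRIDE categories that apply to one DFD element type."""
    return STRIDE_PER_ELEMENT[element_type]

# Every dataflow gets exactly the T, I, and D threats:
print(sorted(threats_for("data flow")))
# ['Denial of Service', 'Information Disclosure', 'Tampering']
```

The point of the table form is that building the initial threat list becomes purely mechanical - walk the diagram, look up each element's type, and emit one candidate threat per applicable category.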
The way that STRIDE-per-element shines (and why it's been so effective here at Microsoft) is that it's a flexible paradigm.
A team can use STRIDE/element as a framework that leads them to consider threats they hadn't considered before, as a way of guiding a brainstorming session, or as a way of thinking about which mitigations might be appropriate for the various threats their elements face. The STRIDE/element methodology allows for all of these.
And the best part about it (as far as I'm concerned) is that STRIDE/element allows people to produce competent threat models with relatively little training. Which, as I mentioned at the beginning of the series, is important. At Microsoft, every person on the development team participates in the threat modeling process, because they understand their feature better than anyone else does.
The STRIDE/element methodology ends up creating a fair number of threats. Even a component with a relatively tiny diagram like the one below:

[diagram from the original post: an "Application" calling a "Component" over a "Do Something" dataflow]

has at least 18 different threats according to the STRIDE/element methodology. The good news is that many (if not most) of those threats aren't meaningful threats - for example, if "Component" in the example above is an API, the "spoofing" threat against "Application" isn't particularly meaningful - you don't care WHO calls your API; one caller is as good as another. On the other hand, if "Do Something" involves communicating data over the network, you probably DO care about the T, I, and D threats associated with that dataflow. It's the analysis of each threat that tells you how to handle it.
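The 18-threat count can be reproduced mechanically from the per-element chart. A caveat: the element names below come from the post, but the exact shape of the diagram (two dataflows plus a data store) is my assumption, chosen because it happens to make the arithmetic come out to 18:

```python
# Threat counts per DFD element type, from the standard
# STRIDE-per-element chart.
THREAT_COUNT = {
    "external entity": 2,   # S, R
    "process": 6,           # S, T, R, I, D, E
    "data store": 4,        # T, R, I, D
    "data flow": 3,         # T, I, D
}

# Hypothetical reconstruction of the post's tiny diagram; the data
# store and second dataflow are assumptions for illustration.
diagram = [
    ("Application", "external entity"),
    ("Component", "process"),
    ("Do Something (request)", "data flow"),
    ("Do Something (response)", "data flow"),
    ("Component's data", "data store"),
]

total = sum(THREAT_COUNT[etype] for _name, etype in diagram)
print(total)  # 2 + 6 + 3 + 3 + 4 = 18 candidate threats to consider
```

Each of those 18 candidates then goes through the analysis described below; most will be rejected as not meaningful, but each one has to pass through your head first.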
This analysis is the core of the threat model, and where the real work associated with the process takes place (and where the value of the process shows up). The analysis of the threats associated with your component lets you know (a) how you need to change the component to protect against various classes of attacks and (b) what your test team needs to do to ensure that your mitigations are in place and are correct.
Next: Let's start building the PlaySound threat model!
I'll come back to this particular point a couple of posts from now; it's important.
I just noticed that STRIDE-per-element is remarkably similar to how physicians solve a (surprisingly) similar problem: reading EKGs.
The trick to successfully reading EKGs is to have a set pattern of things to look for, and then follow the pattern religiously, regardless of whether you are leisurely discussing an EKG in cardiology rounds or madly reviewing one in the middle of an emergency. I saw my interns repeatedly try to shortcut the process and "jump to the obvious answer," and miss significant, but non-obvious, abnormalities.
I have a few thoughts based on the analogy between the two processes, and I am curious how much they apply in software engineering.
1. Don't try to streamline the process too much. The lesson I think we have both learned is that the process is the value -- each of the ideas needs to pass through your head sequentially and be accepted or rejected.
2. We are a lot less onerous in our documentation (in this respect) than you describe. I am permitted to simply write "EKG normal," and my colleagues will accept that I checked the rate, rhythm, axis, etc., because this is how any professional reads an EKG. If any of those were abnormal, I would have said so.
That allowance gives us speed at the expense of transparency. I can read an entire EKG, which includes checking about 20 different things in 12 different leads, in about a minute. Discussing (or documenting) those same criteria takes 5-10 times as long. I wonder if some of the pushback you describe against STRIDE-per-element is against the documentation and not the process.
In medicine we have a very personal model of responsibility and liability, whereas software engineering is more team oriented. I am not suggesting our documentation requirements would work in software engineering; I mention it merely because I think the similarity and contrast is interesting.
Interesting questions and analogies--do you have a good intro site where I can read up on EKGs? Searching, there's a lot of expert information out there.
Regarding "EKG normal," you have a different precursor than we do. Not everyone threat modeling has been through years of training in software engineering, and so what's "normal" to one person may not be normal to another. Maybe in a few years we'll be a lot closer to what you describe.
-- Adam Shostack
The classic text that _everybody_ uses to learn EKGs is Dubin's "Rapid Interpretation of EKG's." I found some of the reference sheets from Dubin at http://www.emergencyekg.com/Reference_Sheets.pdf. The first page has an abbreviated algorithm; like most internists I have taken a special interest in EKGs, so my algorithm is a little bit longer. This is the one most of the medical students and interns use.
I don't think that software engineering is that much different from medicine with regard to having practitioners at many levels of skill. I often get asked to "look over an EKG" by docs who read fewer than I do. I frequently forward EKGs to the cardiologist I refer to when I have questions. So just as there are run-of-the-mill developers, team security champions, and full-time security specialists, we have just as much variation in skill with EKG interpretation as programmers do with software security.
As you mention, we have the advantage that it is very easy for us to agree about what "normal" means. We just do EKGs on 100 consecutive healthy 20-year-olds found on the street; anything you see in more than 5 of them is normal. (This has actually been done, many times.) I am not sure I know what a "normal" threat model means. I am not sure I even know what a "mitigated" threat model or a "secured" threat model means. It is possible that in 5 years you will be able to tell me.