Testing in the SDL


James Whittaker here 

“You can’t test quality in.” It’s a truism coined long ago and an accepted fact of software development. Yet, for security, testing is arguably the most talked about aspect of the Security Development Lifecycle (SDL). When we get security wrong, the first criticism we almost always hear is, “Didn’t you guys test this thing?” It is no great stretch to say that many of the most famous industry security folks made their reputation by finding vulnerabilities (through, no doubt, testing). You simply can’t avoid the subject of testing when you talk about security, and you can’t be sure you’re secure without testing.

We often get questions about SDL-required security testing, and too often these questions deal exclusively with fuzz testing. But equating fuzz testing with security testing could not be further from how security testing is actually treated inside Microsoft. With this post, I want to shed some light on what Microsoft actually does for security testing. In a follow-up post on this blog, Rob Roberts will talk about our privacy testing practices.

To begin, it is difficult to confine testing activity within a single SDL phase. At Microsoft, we don’t try. Testers are involved in architecture review, security design reviews, threat modeling, code reviews and many other things that happen both before and after the actual testing phase. In each of these instances, testers bring a valuable how-I-would-break-this slant to these endeavors. This contribution has been valuable enough to spawn a big push around the company to move testing activity to earlier phases of the lifecycle and, though some might not agree, I think the practice of threat modeling can be ascribed to this movement. The idea of thinking through threats and understanding attack vectors has been our focus in security testing for years, and threat modeling represents the extraction of this process as its own standalone entity.

Our overall goal is clear: whenever an engineer designs or writes code, we want that person to think about how the code might be exploited. When attack scenarios, threats and test cases are swirling around in a developer’s mind while architecting, designing or writing code, chances are he or she will write more secure code and plan better defenses. Clearly there is an overwhelming amount of stuff to think about, which requires a healthy amount of due caution and discourse with teammates and outside experts. Being careful and consulting colleagues is rarely a bad thing!

But, no matter how successful we are in spreading testing wisdom throughout the SDL, at some point we need to check that such wisdom actually made its way into the shipping product. I trust developers to do the right thing, but as a tester myself, you better believe I’m going to check that they actually did it.

Microsoft uses a three-pronged approach to security testing. During these tests we may refer to a threat model or security design review document, but we may also choose to ignore these documents for an independent assessment of an application or service. This decision is at the discretion of the security test lead, and depends on how independent he or she wants the test team to be.

1. Attacks against the application’s environment.

The environment, the sum total of all OS components, runtime libraries, environment variables, network activity, file system configurations, registry keys and so forth, is probably the biggest unknown when fielding an application. In some environments the application will work securely; in others it may fail miserably. We train our testers to map out the environment, identify components subject to modification or variation, and test as many configurations of these as possible. These attack scenarios recognize that our applications work in unpredictable environments, where we have to work out the trust relationships very carefully. It takes only one insecure component to put an entire machine or network at risk. We need to ensure that our own applications work securely despite the presence of these environmental insecurities.
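As a rough illustration of this kind of environment testing, the sketch below perturbs environment variables before launching a target process and checks that it still exits cleanly. The target here is a stand-in one-liner, and the hostile values are my own illustrative assumptions, not a list from the SDL; a real harness would launch the actual application under test.

```python
import os
import subprocess
import sys

# Stand-in target for the application under test: a one-line script that
# reads an environment variable the way a real program might. In practice
# this would be the actual binary being shipped.
TARGET = [sys.executable, "-c",
          "import os; print(len(os.environ.get('LANG', '')))"]

# Illustrative hostile configurations: an overlong value, an attacker-
# controlled search path, format-string bait, and a traversal sequence.
HOSTILE_ENV = [
    {"TEMP": "A" * 4096},
    {"PATH": "."},
    {"LANG": "%n%n%n%n"},
    {"HOME": "../../../../etc"},
]

def survives(overrides):
    """Run the target with a perturbed environment; True if it exits cleanly."""
    env = dict(os.environ)
    env.update(overrides)
    proc = subprocess.run(TARGET, env=env, capture_output=True, timeout=30)
    return proc.returncode == 0

failing = [case for case in HOSTILE_ENV if not survives(case)]
print("cases the target did not handle:", failing)
```

A secure application should exit cleanly, or with a handled error, under every one of these perturbations; a crash on any of them is a finding worth filing.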

2. Direct attacks against the application itself.

Inputs are dangerous, and inputs that cross trust boundaries are crucial targets of this class of security testing. Our testers must build and maintain lists of illegal, ill-formed and improper inputs that are consumed by their application’s interfaces. Code, scripts, SQL queries, special characters, long strings and the like must be gathered in large numbers and used to pummel the application under test mercilessly. Large-scale automated testing comes into play here in a big way. Our goal is for our applications to be able to withstand targeted and sustained attacks, whether from a regression suite of past and potential exploits or from fuzz testing using both random and format-aware logic. These tests are crucial to prevent repeat exploits and to cover targeted attack scenarios.
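A mutation fuzzer of the random variety mentioned above can be sketched in a few lines. The parser here is a toy with a deliberately planted trust-the-length-header bug, an assumption for illustration rather than anything from Microsoft's tooling; real SDL fuzzing uses far larger corpora and format-aware generators.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy parser with a planted bug: it trusts the 'LEN:payload' header
    blindly, so payload[n] over-reads when n >= len(payload)."""
    header, _, payload = data.partition(b":")
    n = int(header)                       # ValueError on a bad header
    return payload[:n] + bytes([payload[n]])

SEED = b"5:hello"                         # one known-good sample to mutate

def mutate(sample: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a few random bytes in the sample."""
    out = bytearray(sample)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(["flip", "insert", "delete"])
        if op == "flip" and out:
            i = rng.randrange(len(out))
            out[i] ^= 1 << rng.randrange(8)
        elif op == "insert":
            out.insert(rng.randrange(len(out) + 1), rng.randrange(256))
        elif op == "delete" and out:
            del out[rng.randrange(len(out))]
    return bytes(out)

def fuzz(iterations=1000, seed=0):
    """Return the mutated inputs that made the parser fail unexpectedly."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(SEED, rng)
        try:
            parse_record(case)
        except ValueError:
            pass                          # cleanly rejected input: expected
        except Exception:
            crashes.append(case)          # anything else is a finding
    return crashes
```

Run against the toy parser, `fuzz()` quickly surfaces inputs that trigger the over-read, which is exactly the kind of case that then gets added to the regression suite of past and potential exploits.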

3. Indirect attacks against the application’s functionality.

Application features need to be cataloged for potential bad effect. All features clearly have intended functionality for good effect or they wouldn’t be features; our concentration as security testers is to understand the ways those features can be misused to the misery or inconvenience of our users. We must look at our application’s functionality and ask whether any of it can be ‘turned against itself.’ Are there ways that the software can be easily misconfigured? Can security features be circumvented? Is there some function whose purpose is benign and even useful but that under certain circumstances has undesirable consequences? A feature-by-feature assessment is necessary to ensure we’ve covered all the bases.
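As one concrete shape such a feature-by-feature assessment can take, the toy check below probes whether a security feature, here a hypothetical account-lockout policy of my own invention rather than an example from the post, can be circumvented with trivially equivalent usernames.

```python
class LockoutPolicy:
    """Toy lockout feature: blocks a username after MAX_FAILURES bad attempts."""
    MAX_FAILURES = 3

    def __init__(self):
        self.failures = {}

    def _key(self, username):
        # Normalizing here closes the case-variant bypass; a version that
        # returned `username` unchanged would be the vulnerable variant
        # this check exists to catch.
        return username.strip().lower()

    def record_failure(self, username):
        k = self._key(username)
        self.failures[k] = self.failures.get(k, 0) + 1

    def is_locked(self, username):
        return self.failures.get(self._key(username), 0) >= self.MAX_FAILURES

# Abuse case: try to dodge the lockout with equivalent spellings of one name.
policy = LockoutPolicy()
for name in ("alice", "ALICE", "Alice "):
    policy.record_failure(name)

assert policy.is_locked("alice")  # all variants must count as one account
```

The point of the check is not the lockout logic itself but the habit: for each feature, write down how an attacker might use it, misconfigure it, or slip around it, then turn each of those abuse cases into a test.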

Security testing has been – and will always be – about assurance; that is, assuring that the product as built and shipped has been thoroughly tested for potential vulnerabilities. Bug detection and removal will continue to be important until some future time when we can deploy provably secure systems. However, we are unlikely ever to reach that future without learning the lessons that testing is teaching us right now. Anyone can write a system and call it secure; it is only by watching real systems fail in real ways that we learn to get better. Moral of the story: testing is by far the best way to show us what we’re doing wrong in software development.


Comments
  • James - you raise some interesting points.  I'm a grail seeker - things that don't exist and so I wrote a little piece about getting hard software assurance and how difficult it is.  Your thoughts on the matter?

    http://securityretentive.blogspot.com/2007/06/questions-about-software-testing-and.html

  • Hey Andy,

    Nice piece. I hope your blog gets read by the SDL community...you make some good points and never called me a wanker once.

    I started out in Cleanroom software engineering. We weren't grail seekers then, we *knew* we had the answers. We called Cleanroom the silver bullet. And we were actually serious.

    We were wrong of course and I've grown up a lot since then. Let's just say that reality shoved its way into some uncomfortable orifices. But whether it's a holy grail or a silver bullet, I have come to the conclusion that it will be neither holy nor silver. I think the real answers will be more like brown bullets. They won't be shiny and they won't offer you paradise. They'll be ugly and will require hard work and lots of double checking.

    And we'll still miss some things despite all that. So I am looking for a grail that has a hidden compartment with instructions about what to do when we screw up...and I hope those recovery procedures don't involve bullets.

  • I just hope the definition of a good post isn't simply not calling you a wanker. :)

