A recent comment on the IE Blog made it pretty apparent that not everybody is aware of Microsoft's efforts around security. Michael Howard has mentioned the Security Development Lifecycle before, but in case you don't want to read the entire document on MSDN, here's a quick introduction on the basics of what happens:
People need to be trained about security issues before they can effectively design, write, test, or document a software product. I am sure that some readers might chuckle to themselves here and make a snide remark about how "Everybody should know about buffer over-runs (BOs) by now!" but there are two things I want to say about that:
1) It's better to incorrectly assume your employees do not know about BOs and waste a few hours training them on it than it is to incorrectly assume they do know about BOs and have them write insecure code.
2) If you think BOs are the only kinds of security issues to be concerned about, you have a lot of learning ahead of you, young reader.
Training covers such things as poor use of cryptographic functions (or use of known-weak cryptographic functions), SQL injection, cross-site scripting (XSS), integer overflows, and so on, and that's just the "basics" that everyone attends. Then we have advanced courses on topics such as fuzzing, threat modelling, and mitigation techniques that people can choose to "specialise" in.
Once you know about security issues and the kinds of threats that are "out there", you need to build secure architectures and use existing technology in a secure fashion to avoid introducing design-level flaws in your software. We do this by having high-level design reviews with subject matter experts (SMEs) for larger, riskier features; and by performing an in-depth threat modelling process (it's not just a threat model document!) for pretty much every component. (Clearly there is very little value to threat modelling the "Bold" toolbar button in Microsoft Word, so some things get by without a complete TM :-) ).
This is where I personally spend most of my time -- being an SME for technologies such as Internet Explorer hosting, ActiveX controls, managed code security, and so forth. I participate in a lot of design reviews for products all over Microsoft (pretty much everyone does something with a web browser these days :-) ) and in particular I spend a significant amount of time with the IE team doing threat analysis on features and making sure we've got all our bases covered with respect to security.
We have well-trained architects, well-trained program managers, well-trained developers, well-trained testers, and some great analysis tools to help us avoid, detect, and remove common code-level security flaws such as BOs or the use of "bad" (hard-to-use-correctly) APIs. We also do code reviews, threat-model-driven testing, and perform other activities to prevent (or remove) security defects from the code before it is released to customers (or in some cases, before it is ever checked in). Run-time analysis tools such as AppVerifier also help to catch potentially "bad" API usage or point out least-privilege violations (for example, asking for KEY_ALL_ACCESS when you only need KEY_READ).
One little-known fact is that Microsoft has done a lot of work to help deprecate the "bad" APIs commonly found in C and C++ runtimes and to replace them with less "dangerous" versions that can help prevent certain classes of bugs. For example, there is the "Safe String" library in Windows that is used both internally and externally to replace unbounded string copies (e.g., strcpy) with bounded copies (StringCchCopy). What's more, the Visual C++ team has new "security enhanced" versions of many common CRT functions, and Microsoft is working to make these new functions a part of the ANSI C standard.
Microsoft innovating in security and working with standards bodies and sharing technology with the world? Who would have thunk it? :-)
You can read more about the new libraries in MSDN Magazine.
After we have designed, built, tested, documented, etc. the product, it gets released to customers. We try very hard to strike a balance between usability and security with our products -- the "Secure by Default" mantra -- and it's often a very hard thing to do. How do you make your product usable enough so that most customers can do what they want without getting frustrated or calling Product Support Services, yet still keep the less-used or higher-risk features turned off if they are not needed? IIS 6 in Windows 2003 was clearly a big step in one direction, and it's paid off big-time. But such an approach wouldn't work with, say, MSN Messenger where it's pretty clear that if you install the product you need it to talk on the network and receive messages from your buddies.
Despite all our hard work, security issues are still found in released products and this is where the Microsoft Security Response Center (MSRC) comes in. The MSRC team is dedicated to receiving and investigating reports of security issues in Microsoft products, and then working with product teams to find, fix, and release patches for them on our now-famous "Patch Tuesday". Patching on a predictable basis is something that Microsoft did because customers requested it, and whilst initially Microsoft got a lot of heat for "trying to make it look like we had fewer issues", many in the industry have come to realise that it's "the right thing to do" and at least one analyst expects others to follow suit (eventually).
So that's it, in a nutshell. We've said for a long time that "Security is an Industry Problem" and I would love it if every vendor took a long, hard look at their development process and adopted something similar to the SDL. There will always be customers who choose not to run Microsoft software, but we still want them to be as secure as possible on their platform of choice.
I did a brief search on Apache's official site, and they ask that patches include performance optimisations, but there is no mention of having a security review or any minimum standards for security. That's not to say people aren't performing security reviews of Apache, but the "many eyes" don't help if they (i) aren't looking at the important bits or (ii) don't know what to look for.
Microsoft has a publicly-documented process for designing, building, releasing, and supporting software "that needs to withstand malicious attack," and although software will always have bugs we're definitely showing improvements.
Does your software vendor of choice have such a process? Or only rhetoric?