Software Engineering, Project Management, and Effectiveness
I like competitive studies. I'm usually more interested in the methodology than the outcome. The methodology acts as a blueprint for what's important in a particular problem space.
One of my favorite studies was the original @stake study comparing .NET 1.1 vs. IBM's WebSphere security, not just because our body of guidance made a direct and substantial difference in the outcome, but because @stake used a comprehensive set of categories and an evaluation criteria matrix that demonstrated a lot of depth.
Because the information from the original report can be difficult to find and distill, I'm summarizing it below:
Overview of Report
In June 2003, @stake, Inc., an independent security consulting firm, released the results of a Microsoft-commissioned study that found Microsoft's .NET platform to be superior to IBM's WebSphere for secure application development and deployment. @stake performed an extensive analysis comparing security in the .NET Framework 1.1, running on Windows Server 2003, to IBM WebSphere 5.0, running on both Red Hat Linux Advanced Server 2.1 and a leading commercial distribution of Unix.
Findings
Overall, @stake found that:
Approach
@stake evaluated the level of effort required for developers and system administrators to create and deploy solutions that implement security best practices, and to reduce or eliminate the most common attack surfaces.
Ratings for the Evaluation Criteria
Scorecard Categories
The scorecard was organized into application, Web server, and platform categories. Each category was divided into subcategories to test the evaluation criteria (best practice compliance, implementation complexity, quality of documentation, developer competence, and time to implement).
Application Server Categories
Host and Operating System Categories
Web Server Categories
More Information
For more information on the original @stake report, see the eWeek.com article, ".Net, WebSphere Security Tested."
Whenever I bring up the OpenHack 4 competition, most people aren't aware of it. It was an interesting study because it was effectively an open "hack me with your best shot" competition.
I happened to know the folks on the MS side, like Erik Olson and Girish Chander, who helped secure the application, so it had some of the best available security engineering. In fact, customers commented that it's great that Microsoft can secure its applications ... but what about its customers? That comment was the inspiration for our Improving Web Application Security: Threats and Countermeasures guide.
I've summarized OpenHack 4 here so it's easier for me to reference.
Overview of OpenHack 4
In October 2002, eWeek Labs launched its fourth annual OpenHack online security contest. It was designed to test enterprise security by exposing systems to the real-world rigors of the Web. Microsoft and Oracle were given a sample Web application by eWeek and were asked to redevelop the application using their respective technologies. Individuals were then invited to attempt to compromise the security of the resulting sites. Acceptable breaches included cross-site scripting attacks, dynamic Web page source code disclosure, Web page defacement, posting malicious SQL commands to the databases, and theft of credit card data from the databases used.
Outcome of the Competition
The Web site built by Microsoft engineers using the Microsoft .NET Framework, Microsoft Windows 2000 Advanced Server, Internet Information Services 5.0, and Microsoft SQL Server 2000 successfully withstood over 82,500 attempted attacks to emerge from the eWeek OpenHack 4 competition unscathed.
For more information on the implementation details of the Microsoft Web application and the configuration used for the OpenHack competition, see "Building and Configuring More Secure Web Sites: Security Best Practices for Windows 2000 Advanced Server, Internet Information Services 5.0, SQL Server 2000, and the .NET Framework."
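One of the acceptable breaches above was "posting malicious SQL commands to the databases," and the standard defense is parameterized queries. The OpenHack site itself used SQL Server 2000 and the .NET Framework; the sketch below illustrates the same idea with Python's built-in sqlite3 module, so the table and names here are purely illustrative.

```python
# Illustrative only: parameterized queries pass user input as data,
# never as SQL text, which defeats classic SQL injection attempts.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111111111111111')")

def lookup(name):
    # The ? placeholder binds the value as a parameter, so input like
    # "alice' OR '1'='1" cannot change the structure of the query.
    return conn.execute(
        "SELECT card FROM users WHERE name = ?", (name,)
    ).fetchall()

print(len(lookup("alice")))             # 1
print(len(lookup("alice' OR '1'='1")))  # 0 -- injection attempt finds nothing
```

Had the query been built by string concatenation instead, the second call would have matched every row and leaked the card data.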
The Security Innovation Security Engineering study, Comparing Security in the Application Lifecycle - Microsoft and IBM Development Platforms Compared, is timely, given the emerging industry emphasis on integrating security in the life cycle.
My favorite quote in the study is "The patterns & practices security guidance covers the key security engineering activities better than any other resource we’ve found." I think this reflects the fact that we have more than 2,500 pages of security guidance (see Security Guidance, Security Engineering, Threat Modeling, and Improving Web Application Security), and we've integrated our guidance into MSF/VS 2005 (see MSF/VS 2005 and p&p Integration).
The study was available from the MSDN Security DevCenter for a while but seems to have fallen off. I've summarized the study here for quick reference:
Overview
Security Innovation evaluated the guidance and tools of Microsoft's and IBM's development platforms. The study compared the support available to a development team via security guidance, documentation, and security-focused features in the life-cycle tool suites. Gartner reviewed the approach.
Results of the Study
First, here are a couple of key points; the summaries follow below:
Quotes from the Study
More Information
For more information, see Comparing Security in the Application Lifecycle - Microsoft and IBM Development Platforms Compared at Security Innovation's site. They created four documents that take you through the details and results: Executive Summary, Research Overview, Full Detailed Reports and Results, and Methodology.
In the book Flawless Execution, James D. Murphy shares techniques used by fighter pilots to achieve peak performance, accelerate the learning curve, and make performance more predictable and repeatable.
The essence of the execution engine is a set of iterative steps:
Murphy connects the execution framework to the strategy. If they aren't aligned, you can win the battle but lose the war. He distinguishes strategy from tactics by saying strategy is about four things:
Murphy is very prescriptive. For every technique, there's a set of steps and checkpoints. I've successfully scaled down some of the techniques, such as Future Picture, to meet my needs.
What I like about the overall execution framework is that its practices are drawn from life-and-death scenarios. Fighter pilots need to learn what works from their missions and share it as quickly as possible. What I also like is that Murphy illustrates how ordinary people are capable of execution excellence.
I met with Gabriel Torok and Sebastian Holst of PreEmptive Solutions the other day. PreEmptive makes obfuscator products, including the Dotfuscator that comes with Visual Studio. Gabriel founded PreEmptive more than 10 years ago, and it was originally a code optimization company. (The dual focus on performance and security resonates with me.)
I was familiar with obfuscation and its limitations. I wasn't as familiar with some of the internals of specific obfuscation techniques, such as identifier renaming, control flow obfuscation, metadata removal, and string encryption, or how you can tweak or tune these. One surprise for me was that obfuscation for some scenarios could yield a 30-40% reduction in size (the result of shortening identifier names and "pruning" libraries that are never called).
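Identifier renaming, the first technique mentioned above, is easy to illustrate. Dotfuscator and its peers operate on .NET assemblies, not source code, so the following is only a toy sketch of the idea in Python: it rewrites a function's parameters and locals into short opaque names, which is also where part of the size reduction comes from.

```python
# Toy sketch of identifier renaming (assumption: Python 3.9+ for
# ast.unparse). Real obfuscators do this on compiled IL, not source.
import ast

class RenameLocals(ast.NodeTransformer):
    """Rename a function's parameters and local variables to opaque names."""
    def visit_FunctionDef(self, node):
        mapping = {}
        # Rename each parameter to _0, _1, ...
        for arg in node.args.args:
            mapping[arg.arg] = f"_{len(mapping)}"
            arg.arg = mapping[arg.arg]
        # Rewrite every reference to a renamed identifier; assignments
        # to new local names get fresh opaque names as they appear.
        for child in ast.walk(node):
            if isinstance(child, ast.Name) and child.id in mapping:
                child.id = mapping[child.id]
            elif isinstance(child, ast.Name) and isinstance(child.ctx, ast.Store):
                mapping.setdefault(child.id, f"_{len(mapping)}")
                child.id = mapping[child.id]
        return node

source = """
def interest(principal, rate, years):
    total = principal * (1 + rate) ** years
    return total
"""
tree = ast.parse(source)
obfuscated = ast.unparse(RenameLocals().visit(tree))
print(obfuscated)
```

The renamed function behaves identically, but the meaningful names (`principal`, `rate`, `total`) that a decompiler would otherwise recover are gone, which is precisely why renaming both shrinks and obscures code.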
Gabriel's interested in creating obfuscation guidance for the community. I gave him my wish list:
Sebastian and I exchanged some metaphors. In reference to the limits of obfuscation, I said that just because door locks don't prevent car thieves, that doesn't mean cars should come without locks. Sebastian related it to smoke alarms. In the grand scheme of things, smoke alarms play a key role in saving lives and limiting damage, but to the individual, there's not a lot of value until a fire occurs. The fact that smoke alarms are low cost and simple helps justify their common use. He added that risk varies by context, so the value to hotels or restaurants may be more obvious.
It was an interesting and insightful meeting and I look forward to Gabriel's whitepaper.
I'm realizing more and more how stories help you drive a point home. It's one thing to make a point; it's another for your story to make the point for you. If your ideas aren't sticking, or you're not getting buy-in, maybe a compelling story is the answer.
Crafting useful stories is an art, and, now, apparently a science. Srinath pointed me to Stories at Work on 50Lessons.com. The video shares a story about using stories as a catalyst for change and a recipe for good strategic stories:
The value of stories is that they help you engage people, and they have more powerful recall than slides, facts, and figures.
When a patterns & practices deliverable would be ready to ship, our General Manager (GM) would ask me to sign off on the performance and security. I was usually spread thin, so I needed a way to scale. To do so, I created a small checkpoint for performance and scalability. The checkpoint was simply a set of questions that act as a forcing function to make sure you've addressed a lot of the basics (and avoid a lot of "do-overs"). Here's what we used internally:
Checkpoint: Performance and Scalability
The checkpoint helped the engineering team shape their approach, and it simplified my job when I had to review. You can imagine how some of these questions can shape your strategies. This is by no means exhaustive, but it was effective enough to tease out big project risks. For example: Do you know when your software is going to hit capacity? Did you completely ignore the customer's practical environment and use up all their resources? Can you instrument for when things go bad, and is that instrumentation configurable? When your software is in trouble, what sort of support for troubleshooting and diagnostics have you enabled?
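The instrumentation questions above can be made concrete with a small sketch. The original checkpoint items aren't reproduced here, so all names and thresholds in this Python example are my own: a timing hook that operations can switch on, or tune, via environment variables without a code change, and that logs a warning when a call runs slow.

```python
# Illustrative sketch of configurable instrumentation; the variable
# names and thresholds are assumptions, not from the checkpoint.
import functools
import logging
import os
import time

logging.basicConfig(level=logging.INFO)

# Controlled from the environment so operations can turn timing on/off
# or adjust the slow threshold in production without redeploying.
INSTRUMENT = os.environ.get("APP_INSTRUMENT", "1") == "1"
SLOW_MS = float(os.environ.get("APP_SLOW_MS", "200"))

def timed(func):
    """Log a warning when the wrapped call exceeds the slow threshold."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not INSTRUMENT:
            return func(*args, **kwargs)
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > SLOW_MS:
                logging.warning("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def handle_request(n):
    return sum(range(n))

print(handle_request(1000))  # 499500
```

Baking in hooks like this early is exactly what the checkpoint questions probe for: you can answer "yes" to "can you tell when things go bad, and is it configurable?" instead of retrofitting diagnostics after a production incident.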
While I think the original checkpoint was helpful, I think a pluggable set of checkpoints based on application types would be even more helpful and more precise. For example, if I'm building a Web application, what are the specific metrics or key instrumentation features I should have? If I'm building a smart client, what sort of instrumentation and metrics should I bake in? And so on. If and when I get to a point where I can do more checkpoints, I'll use a strategy of modular, type-specific, scenario-based checkpoints to supplement the baseline checkpoint above.