Software Engineering, Project Management, and Effectiveness
I'm working with the infamous Frank Heidt, George Gal, and Jonathan Bailey to create a suite of modular, task-based security code examples. They happen to be experts at finding mistakes in code. Part of making good code is knowing what bad code looks like and, more importantly, what makes it bad, or what the trade-offs are. I've also pulled in Prashant Bansode from my core security team to help push the envelope on making the examples consumable. Prashant doesn't hold back when it comes to critical analysis, and that's what we like about him.
For this exercise, I'm time-boxing the effort to see what value we produce within the time-box. We carved out a set of candidate code examples by identifying common mistakes in key buckets, including input/data validation, authentication, authorization, auditing and logging, exception management, and a few others. We then prioritized the list and now do daily drops of code. The outcome should be some useful examples and an approach for others to contribute examples.
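To give a feel for the kind of problem/solution pairs we're after, here's a minimal sketch from the input/data validation bucket. It's illustrative only (the function names and rules are hypothetical, and the real examples target .NET): a "blacklist" check that rejects a few known-bad characters versus a "whitelist" check that accepts only known-good input.

```python
import re

# Problem example (common mistake): "blacklist" validation rejects a couple
# of known-bad characters and accepts everything else, so unexpected input
# (quotes, control characters, new attack payloads) slips through.
def is_valid_username_blacklist(name):
    return "<" not in name and ">" not in name

# Solution example: "whitelist" validation accepts only a known-good pattern
# and constrains length, rejecting everything else by default.
VALID_USERNAME = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def is_valid_username_whitelist(name):
    return bool(VALID_USERNAME.fullmatch(name))

if __name__ == "__main__":
    for name in ["alice_01", "x", "rob'; DROP TABLE users--"]:
        print(name,
              is_valid_username_blacklist(name),
              is_valid_username_whitelist(name))
```

The interesting part isn't the code itself, it's the trade-off it exposes: the blacklist version happily accepts the SQL-injection-style string because it only knows about two characters, while the whitelist version rejects anything it wasn't explicitly told to accept.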
Sharing a chunk of code is easy. We quickly learned that sharing insights with the code is not. Exposing the thinking behind the code is the real value. We want to make that repeatable. I think the key is a schema with test cases.
Here's our emerging schema and test cases ...
Code Example Schema (Short Form)
For more information on the schema and test cases, see Code Example Schema for Sharing Code Insights.
Today we had a deeply insightful review with Tom Hollander, Jason Taylor, and Paul Saitta. Jason and Paul are on site while we're solving another class of problems for customers. They each brought a lot to the table and collectively I think we have a much better understanding of what makes a good, reusable piece of code.
We made an important decision to optimize around "show me the code" first and then explain it, versus a lot of build-up and then the code. Our emerging schema has its limits and does not take the place of a How To, guidelines, or a larger reusable block of code, but it will definitely help as we try to share more modular code examples that demonstrate proven practices.
In the book "How to Run Successful Projects III, The Silver Bullet", Fergus O'Connell identifies ten steps to structured project management:
These ten steps help make project management consistent, predictable, and repeatable. The first five steps are about planning your project. The last five are about implementing the plan and achieving the goal. These steps are based on 25 years of research into why some projects fail and others succeed.
I like to checkpoint any project I do against these steps. I find that when a project is off track, I can quickly pinpoint it to one of the steps above and correct course.
In the book, "How To Run Successful Projects III, The Silver Bullet", Fergus O'Connell uses a scoring system to predict project success.
What this means is that having clarity on what you want to accomplish and being able to identify the work to be done (steps 1 and 2) are the more significant indicators of project success.
After managing several projects over the years, I tend to agree. Step 2 is particularly interesting because it not only helps you calculate schedule and budget, but it helps you identify the right people for the jobs.
Today I had some interesting conversations with Loren Kohnfelder. Every now and then Loren and I play catch up. Loren is former Microsoft. If you don't know Loren, he designed the CLR security model and IE security zones. He created a model for more fine-grained control over security decisions and he's a constant advocate for simplifying security.
You might think two guys that do security stuff would talk about security. We didn't. We ended up talking about project management, blogging, social software, and where I think next generation guidance is headed. I'll share the project management piece.
I told Loren I changed my approach to projects. I use time boxes. Simply put, I set key dates and work backwards. I chunk up the work to deliver incremental value within the time boxes. This is a sharp contrast from the past, where I'd design with the end in mind and then do calculated guesstimates on when I'd be done, how much it would cost, and the effort it would take.
I use rhythm for the time boxes. I use a set of questions to drive the rhythm … When do I need to see results? What would keep the team motivated with tangible results? When do customers need to see something? I realize that when some windows close, they are closed forever. The reality is, as a project stretches over time, risk goes up. People move, priorities change, you name it. When you deal with venture capitalists, a bird in hand today gets you funding for two more in the bush.
Loren asked me how I know the chunks add up to a bigger value. I told him I start with the end in mind and I use a combination of scenarios and axiomatic design. Simply put, I create a matrix of scenarios and features, and I check dependencies across features among the scenarios. What's the minimum set of scenarios my customers need to have something useful? Can I incrementally add a scenario? Can I take away scenarios at later points and get back time or money without breaking my design? Sounds simple, but you'd be surprised how revealing that last test is. With a coupled design, if you cut the wrong scenario you have a cascading impact on your design that costs you time and money.
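The "can I cut this scenario?" test can be sketched in a few lines of code. This is an illustrative sketch, not a tool I use; the scenario and feature names are made up. The idea: map each scenario to the features it needs, then see whether dropping a scenario frees up any features, or whether everything is still pinned down by the remaining scenarios (in which case cutting it buys back nothing).

```python
# Map each scenario to the features it depends on. Names are hypothetical.
scenario_features = {
    "browse_catalog": {"catalog_store", "search_index"},
    "place_order":    {"catalog_store", "cart", "payment"},
    "track_order":    {"order_history"},
}

def features_freed_by_cutting(scenario):
    """Features used only by `scenario`, i.e. work reclaimed if it's cut."""
    still_needed = set().union(
        *(feats for s, feats in scenario_features.items() if s != scenario)
    )
    return scenario_features[scenario] - still_needed

for s in scenario_features:
    print(s, "->", sorted(features_freed_by_cutting(s)))
```

In this made-up matrix, cutting "place_order" frees the cart and payment features but not the catalog store, because browsing still needs it. A coupled design shows up as scenarios whose cut frees nothing, so dropping them saves no time or money.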
We both agreed time-boxed projects have a lot of benefits, some of which are not obvious. Results breed motivation. By using a time box and rhythms, you change the game from estimating and promising very imprecise variables to a game of how much value you can deliver in a time box. Unfortunately, sometimes contracts or cultures work against this, but I find that if I walk folks through it and share the success stories, they buy in.
Around mid-2004, Randy Miller approached me with "I want to review MSF Agile with you with the idea of incorporating your work." I didn't know Randy or what to make of MSF Agile, but it sounded interesting.
Randy wanted a way to expose our security and performance guidance in MSF. Specifically he wanted to expose "Improving Web Application Security" and "Improving .NET Application Performance" through MSF. I was an immediate fan of the idea, because customers have always asked me to find more ways to integrate in the tools. I was also a fan because my two favorite mantras to use in the hallways are "bake security in the life cycle" and "bake performance in the life cycle". I saw this as a vehicle to bake security and performance in the life cycle and the tools.
We had several discussions over a period of time, which was a great forcing function. Ultimately, we had to figure out a pluggable channel for the guidance, the tools support and how to evolve over time. My key questions were:
These questions led to a ton of insights around meta-models for software development life cycles, context-precision, organizing engineering practices, and several other concepts worth exploring in other posts.
My key philosophies were:
Randy agreed with the questions and the philosophies. We came up with some working models for pluggable guidance and integration. His job was to make the tools side happen. My job was to make the guidance side happen. I now had the challenge and opportunity of making guidance online and in the tools. This is how I ended up doing guidance modules for .NET Framework 2.0. This also drove exposing p&p Security Engineering, which is baked into MSF Agile by default.
Randy summarized our complementary relationship best ... "The Patterns and Practices group produces an important, complementary component to what we are building into MSF. In Visual Studio, our MSF processes can only go so deep on a topic. Our activities can provide the overview of the steps that a role should do but cannot provide all of the educational background necessary to accomplish the task. As many of the practices that we espouse in MSF (such as threat modeling) require this detailed understanding, we are building links from MSF to Patterns and Practices online material. Thus the activities in MSF and the Patterns and Practices material enjoy a very complementary relationship. The Patterns and Practices group continues to be very helpful and our relationship is one of very open communication."
Mike Kropp, GM of our patterns & practices team, liked the results ... "it was great to see the progress you've made over the past couple of months. here's my takeaway on what you've accomplished:
I remember asking Randy at one point, why did you bet on our security and performance work? He told me it was because he knew we vetted and proved our work with customers and industry experts. He also knew we vetted internally across our field, support and the product teams. I told him if anybody wondered who we worked with, have them scroll down to the contributors and reviewers list for the security work as an example.
We have more work ahead of us, but I think we've accomplished a lot of what we set out to do, and for that I'm grateful to Randy Miller, David Anderson, and their respective teams.
This is the emerging schema and test cases we're using for code examples:
Code Example Schema (Template)
Code Example Schema (Template explained and test cases)
Title: Insert a title that resonates from a findability perspective.
Summary: Insert a 1-2 line description of the intent.
Applies To: Insert the key technologies/platforms the code applies to.
Objectives: Insert a bulleted list of task-based outcomes for the code.
Solution Example: Insert the code example as a blob within a function. The blob allows quick reading of the code. It also allows quickly testing from a function, inlining within other code, or refactoring for a given context. The alternative is to factor up front, but this increases complexity and can negatively impact consumption. This leaves refactoring to the developer for their given scenario.
Problem Example: List examples of common mistakes along with the issues.
Test Case: Insert relevant setup information. Write the code to call the functional blob from the Solution Example.
Expected Results: Insert what you expect to see when running the test case.
Scenarios: Insert a bulleted list of key usage scenarios.
More Information: Optional. Insert more information as necessary. This could be background information or interesting additional details.
Additional Resources: Optional. Insert a bulleted list of descriptive links to resources that have direct value or relevancy.
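To make the template concrete, here's a minimal hypothetical instance, sketched in Python for brevity (the project's real examples target .NET; the function name and scenario are illustrative). It shows the Solution Example as a single readable blob, with the Test Case calling the blob and the Expected Results stated up front.

```python
import html

# Title: Encode Untrusted Output for HTML
# Summary: Encode user-supplied text before writing it into an HTML page.
# Applies To: any app that renders user input as HTML.

# Solution Example: the code lives in a single function "blob" so it can be
# read top-to-bottom, called from a test, inlined, or refactored for context.
def render_comment(comment):
    # Encode the untrusted value so markup characters display as text
    # instead of being interpreted as HTML or script.
    return "<p>" + html.escape(comment) + "</p>"

# Test Case: call the functional blob with hostile input.
result = render_comment("<script>alert('xss')</script>")

# Expected Results: the markup characters come back encoded,
# e.g. "<script>" renders as "&lt;script&gt;".
print(result)
```

The point of the structure is that a reader gets the working code first, then the reasoning: the Problem Example counterpart would show the same function concatenating raw input, along with why that's a mistake.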
We're using this schema for our Security Code Examples Project.
A few folks have asked me about the Axiomatic Design I mentioned in my post on Time-boxes, Rhythm, and Incremental Value. I figure an example is a good entry point.
An associate first walked me through axiomatic design like this. You're designing a faucet where you have one knob for hot and one knob for cold. Why's it a bad design? He said because each knob controls both temperature and flow. He said a better design is one knob for temperature and one knob for flow. This allows for incremental changes in design because the two requirements (temperature and flow) have their independence. He then showed me a nifty matrix on the board that mathematically *proved* good design.
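That whiteboard matrix can be sketched in code. This is an illustrative sketch of the standard Axiomatic Design matrix, not his exact notation: rows are functional requirements (FRs), columns are design parameters (DPs), and a non-zero cell means that parameter affects that requirement. A diagonal matrix means each requirement is controlled independently, i.e. an uncoupled design.

```python
def is_uncoupled(matrix):
    """True if each FR depends on exactly its own DP (diagonal design matrix)."""
    return all(
        (cell != 0) == (i == j)
        for i, row in enumerate(matrix)
        for j, cell in enumerate(row)
    )

# FRs: temperature, flow. DPs: hot knob, cold knob.
two_knob_faucet = [
    [1, 1],   # temperature depends on both knobs
    [1, 1],   # flow depends on both knobs
]

# FRs: temperature, flow. DPs: temperature lever, flow lever.
single_lever_faucet = [
    [1, 0],   # temperature depends only on the temperature lever
    [0, 1],   # flow depends only on the flow lever
]

print(is_uncoupled(two_knob_faucet))      # coupled: every knob moves both FRs
print(is_uncoupled(single_lever_faucet))  # uncoupled: one control per FR
```

The two-knob design fails the check because every design parameter moves both requirements; the single-lever design passes, which is why you can change its temperature behavior without touching flow.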
At the heart of Axiomatic Design are these two axioms (self-evident truths): the Independence Axiom (maintain the independence of the functional requirements) and the Information Axiom (minimize the information content of the design).
For an interesting walkthrough of Axiomatic Design, see "A Comparison of TRIZ and Axiomatic Design".