J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

March, 2006

  • Code Example Schema for Sharing Code Insights


    This is the emerging schema and test cases we're using for code examples:

    Code Example Schema (Template)

    • Title
    • Summary
    • Applies To
    • Objectives
    • Solution Example
    • Problem Example
    • Test Case
    • Expected Results
    • Scenarios 
    • More Information
    • Additional Resources

    Code Example Schema (Template explained and test cases)

    Title
    Insert a title that resonates from a findability perspective.

    Test Cases:

    • Does the title distinguish it from related examples?
    • If technology is in the title, is the version included?

    Summary
    Insert a description of the intent, 1-2 lines max.

    Test Cases:

    • Does the description crisply summarize the solution?
    • Is the intent of the code clear?

    Applies To
    Insert the key technologies/platform the code applies to.

    Test Cases:

    • Are the versions clear?

    Objectives
    Insert a bulleted list of task-based outcomes for the code.

    Test Cases:

    • Is it clear why the solution example is preferred over the problem example?

    Solution Example
    Insert the code example as a blob within a function.  The blob allows quick reading of the code.  It also allows quick testing from a function, inlining within other code, or refactoring for a given context.  The alternative is to factor up front, but that increases complexity and can negatively impact consumption.  Keeping the code as a blob leaves refactoring to the developer for their given scenario.

    Test Cases:

    • Is the code example organized as a blob within a function that can easily be tested or refactored?
    • Do the comments add insights around decisions?
    • Are the comments concise enough so they don't break the code flow?

    Problem Example
    List examples of common mistakes along with the issues they cause.

    Test Cases:

    • Are the mistakes clear?
    • Are the patterns and variations of the problems clear?
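
    To make the Problem Example section concrete, here's a minimal sketch (Python and SQLite stand in for whatever stack you're on; the table and names are hypothetical):

```python
import sqlite3

# Problem Example (hypothetical): building a query by string concatenation.
def find_user_unsafe(cursor, username):
    # Mistake: attacker-controlled input is pasted straight into the SQL,
    # so crafted input can change the query's meaning (SQL injection).
    query = "SELECT name FROM users WHERE name = '" + username + "'"
    cursor.execute(query)
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])
cur = conn.cursor()

print(find_user_unsafe(cur, "alice"))         # [('alice',)] -- as intended
print(find_user_unsafe(cur, "x' OR '1'='1"))  # every row leaks out
```

    The mistake pattern (and its fix, parameterized queries) is exactly the kind of insight the Problem Example section is meant to capture.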

    Test Case
    Insert relevant setup information.  Write the code to call the functional blob from the Solution Example.

    Test Cases:

    • Is setup information included?
    • Does the example call the functional blob in the Solution Example?

    Expected Results
    Insert what you expect to see when running the test case.

    Test Cases:

    • If you run the Test Case, do the Expected Results match?
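
    Putting the Solution Example, Test Case, and Expected Results sections together, a minimal sketch might look like this (Python, with a hypothetical username policy standing in for a real validation requirement):

```python
import re

# Solution Example: the whole check is a single "blob" within one function,
# so it reads top to bottom and is easy to test, inline, or refactor later.
def is_valid_username(candidate):
    if candidate is None:
        return False
    # Whitelist validation: accept only known-good characters.
    # The 1-15 alphanumeric/underscore policy is assumed for illustration.
    return re.fullmatch(r"[A-Za-z0-9_]{1,15}", candidate) is not None

# Test Case: call the functional blob directly (no extra setup needed).
print(is_valid_username("jd_meier"))      # Expected Results: True
print(is_valid_username("x' OR '1'='1"))  # Expected Results: False
```

    Note how little ceremony there is: the test case is just a call to the blob, and the expected results are checkable at a glance.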

    Scenarios
    Insert a bulleted list of key usage scenarios.

    Test Cases:

    • Are the usage scenarios a flat list?
    • Are the usage scenarios based on real-world usage?
    • Do the usage scenarios convey when to use the code?

    More Information
    Optional.  Insert more information as necessary.  This could be background information or interesting additional details.

    Additional Resources
    Optional.  Insert a bulleted list of descriptive links to resources that have direct value or relevance.

    Test Cases:

    • Do the links start with the pattern "For more information on X, see ..."?
    • Are the links directly relevant versus simply nice to have?

    We're using this schema for our Security Code Examples Project.

  • Project Success Indicators (PSI)


    In the book, "How To Run Successful Projects III, The Silver Bullet", Fergus O'Connell uses a scoring system to predict project success.

    1. (20) Visualize the goal
    2. (20) Make a list of jobs
    3. (10) There must be one leader
    4. (10) Assign people to jobs
    5. (10) Manage expectations, allow a margin of error, have a fallback position
    6. (10) Use an appropriate leadership style
    7. (10) Know what's going on
    8. (10) Tell people what's going on
    9. (00) Repeat steps 1-8 until step 10
    10. (00) The Prize
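
    As a quick sketch of how the weights stack up (the linear tally is my illustration, not O'Connell's formula):

```python
# Weights from the ten steps above: steps 1-2 carry 20 points each,
# steps 3-8 carry 10 each, for a 100-point total (steps 9-10 score 0).
PSI_WEIGHTS = [20, 20, 10, 10, 10, 10, 10, 10]

def psi_score(ratings):
    # ratings: for each of steps 1-8, how completely it's done (0.0 to 1.0).
    return sum(w * r for w, r in zip(PSI_WEIGHTS, ratings))

# A project that nails everything except the goal and the list of jobs
# still loses 32 of its 100 points -- steps 1 and 2 dominate the score.
print(psi_score([1.0] * 8))                                  # 100.0
print(psi_score([0.2, 0.2, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))  # 68.0
```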

    What this means is that having clarity on what you want to accomplish and being able to identify the work to be done (steps 1 and 2) are the most significant indicators of project success.

    After managing several projects over the years, I tend to agree.  Step 2 is particularly interesting because it not only helps you calculate schedule and budget, but it helps you identify the right people for the jobs.

  • Ten Steps for Structured Project Management


    In the book "How to Run Successful Projects III, The Silver Bullet", Fergus O'Connell identifies ten steps to structured project management:

    1. Visualize the goal
    2. Make a list of jobs
    3. There must be one leader
    4. Assign people to jobs
    5. Manage expectations, allow a margin of error, have a fallback position
    6. Use an appropriate leadership style
    7. Know what's going on
    8. Tell people what's going on
    9. Repeat steps 1-8 until step 10
    10. The Prize

    These ten steps help make project management consistent, predictable, and repeatable.  The first five steps are about planning your project.  The last five are about implementing the plan and achieving the goal.  These steps are based on 25 years of research into why some projects fail and others succeed.

    I like to checkpoint any project I do against these steps.  I find that when a project is off track, I can quickly pinpoint it to one of the steps above and correct course.

  • Security Code Examples Project


    I'm working with the infamous Frank Heidt, George Gal and Jonathan Bailey to create a suite of modular, task-based security code examples.  They happen to be experts at finding mistakes in code.  Part of making good code is knowing what bad code looks like and more importantly what makes it bad, or what the trade-offs are.  I've also pulled in Prashant Bansode from my core security team to help push the envelope on making the examples consumable.  Prashant doesn't hold back when it comes to critical analysis and that's what we like about him.

    For this exercise, I'm time-boxing the effort to see what value we produce within the time-box.  We carved out a set of candidate code examples by identifying common mistakes in key buckets, including input/data validation, authentication, authorization, auditing and logging, exception management, and a few others.  We then prioritized the list and now do daily drops of code.  The outcome should be some useful examples and an approach for others to contribute examples.

    Sharing a chunk of code is easy.  We quickly learned that sharing insights with the code is not.  Exposing the thinking behind the code is the real value.  We want to make that repeatable.  I think the key is a schema with test cases.

    Here's our emerging schema and test cases ....

    Code Example Schema (Short Form)

    • Title
    • Summary
    • Applies To
    • Objectives
    • Solution Example
    • Problem Example
    • Test Case
    • Expected Results
    • More Information
    • Additional Resources

    For more information on the schema and test cases, see Code Example Schema for Sharing Code Insights.

    Today we had a deeply insightful review with Tom Hollander, Jason Taylor, and Paul Saitta.  Jason and Paul are on site while we're solving another class of problems for customers.  They each brought a lot to the table and collectively I think we have a much better understanding of what makes a good, reusable piece of code. 

    We made an important decision to optimize around "show me the code" and then explain it, versus a lot of build up and then the code.  Our emerging schema has its limits and does not take the place of a How To or guidelines or a larger reusable block of code, but it will definitely help as we try to share more modular code examples that demonstrate proven practices.

  • Axiomatic Design


    A few folks have asked me about the Axiomatic Design I mentioned in my post on Time-boxes, Rhythm, and Incremental Value.  I figure an example is a good entry point. 

    An associate first walked me through axiomatic design like this.  You're designing a faucet where you have one knob for hot and one knob for cold.  Why's it a bad design?  He said because each knob controls both temperature and flow.  He said a better design is one knob for temperature and one knob for flow.  This allows for incremental changes in design because the two requirements (temperature and flow) have their independence.  He then showed me a nifty matrix on the board that mathematically *proved* good design.

    At the heart of Axiomatic Design are these two axioms (self-evident truths):

    • Axiom 1 (independence axiom): maintain the independence of the Functional Requirements (FRs).  
    • Axiom 2 (information axiom): minimize the information content of the design.
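
    The faucet example maps onto Axiom 1 like this (a rough sketch; the matrices are the standard FR-by-DP design matrices, and the diagonality check is my own toy version):

```python
# Rows are functional requirements (FRs), columns are design parameters
# (DPs); a nonzero entry means that DP affects that FR.

# Two-knob (hot/cold) faucet: each knob changes both temperature and flow.
coupled = [
    [1, 1],  # FR1 temperature <- hot knob, cold knob
    [1, 1],  # FR2 flow        <- hot knob, cold knob
]

# One knob for temperature, one for flow: each FR has its own DP.
uncoupled = [
    [1, 0],  # FR1 temperature <- temperature knob only
    [0, 1],  # FR2 flow        <- flow knob only
]

def satisfies_independence_axiom(matrix):
    # Axiom 1 holds when the design matrix is diagonal: changing one DP
    # moves exactly one FR, so requirements can change independently.
    return all(matrix[i][j] == 0
               for i in range(len(matrix))
               for j in range(len(matrix[i]))
               if i != j)

print(satisfies_independence_axiom(coupled))    # False
print(satisfies_independence_axiom(uncoupled))  # True
```

    The two-knob faucet fails the check because its matrix is full; the temperature/flow design passes because it's diagonal.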

    For an interesting walkthrough of Axiomatic Design, see "A Comparison of TRIZ and Axiomatic Design".

  • MSF/VS 2005 and patterns and practices Integration


    Around mid 2004, Randy Miller approached me with "I want to review MSF Agile with you with the idea of incorporating your work."  I didn't know Randy or what to make of MSF Agile, but it sounded interesting. 

    Randy wanted a way to expose our security and performance guidance in MSF.  Specifically he wanted to expose "Improving Web Application Security" and "Improving .NET Application Performance" through MSF.  I was an immediate fan of the idea, because customers have always asked me to find more ways to integrate in the tools.  I was also a fan because my two favorite mantras to use in the hallways are "bake security in the life cycle" and "bake performance in the life cycle".  I saw this as a vehicle to bake security and performance in the life cycle and the tools.

    We had several discussions over a period of time, which was a great forcing function.  Ultimately, we had to figure out a pluggable channel for the guidance, the tools support and how to evolve over time.  My key questions were:

    • how to create a pluggable channel for the p&p guidance?
    • what does a healthy meta-model for the life cycle look like?
    • how to integrate key engineering techniques such as threat modeling or performance modeling?
    • how to get a great experience outside the tool and a "better together" experience when you have the tools?

    These questions led to a ton of insights around meta-models for software development life cycles, context-precision, organizing engineering practices, and several other concepts worth exploring in other posts.

    My key philosophies were:

    • life cycle options over one size fits all
    • pluggable techniques that fit any life cycle over MSF only
    • proven over good ideas
    • modular over monolithic

    Randy agreed with the questions and the philosophies.  We came up with some working models for pluggable guidance and integration.  His job was to make the tools side happen.  My job was to make the guidance side happen.  I now had the challenge and opportunity of making guidance online and in the tools.  This is how I ended up doing guidance modules for .NET Framework 2.0.  This also drove exposing p&p Security Engineering, which is baked into MSF Agile by default.

    Randy summarized our complementary relationship best ...
    “The Patterns and Practices group produces an important, complimentary component to what we are building into MSF. In Visual Studio, our MSF processes can only go so deep on a topic. Our activities can provide the overview of the steps that a role should do but cannot provide all of the educational background necessary to accomplish the task.
    As many of the practices that we espouse in MSF (such as threat modeling) require this detailed understanding, we are building links into MSF to Patterns and Practice online material. Thus the activities in MSF and the Patterns and Practices enjoy a very complimentary relationship. The Patterns and Practices group continues to be very helpful and our relationship is one of very open communication.“

    Mike Kropp, GM of our patterns & practices team, liked the results ...
    “– it was great to see the progress you’ve made over the past couple of months.  here’s my takeaway on what you’ve accomplished:

    • you’ve agreed on the information model for process and engineering practices – I know this was tough but it’s critical to achieving alignment 
    • the plug-in architecture allows us to evolving the engineering practices and serve them up nicely for the customer – in context 
    • the model you’ve come up with for v1 allows us to adapt & evolve over time (schemas, types, etc..)
      ... this is a great example of how we can work together to help your customers and partners.”

    I remember asking Randy at one point, why did you bet on our security and performance work?  He told me it was because he knew we vetted and proved our work with customers and industry experts.  He also knew we vetted internally across our field, support and the product teams.  I told him if anybody wondered who we worked with, have them scroll down to the contributors and reviewers list for the security work as an example.

    We have more work ahead of us, but I think we've accomplished a lot of what we set out to do and for that I'm grateful to Randy Miller, David Anderson, and their respective teams.

  • Time-boxes, Rhythm, and Incremental Value


    Today I had some interesting conversations with Loren Kohnfelder. Every now and then Loren and I play catch up. Loren is former Microsoft. If you don't know Loren, he designed the CLR security model and IE security zones. He created a model for more fine-grained control over security decisions and he's a constant advocate for simplifying security.

    You might think two guys that do security stuff would talk about security. We didn't. We ended up talking about project management, blogging, social software, and where I think next generation guidance is headed. I'll share the project management piece.

    I told Loren I changed my approach to projects. I use time boxes. Simply put, I set key dates and work backwards. I chunk up the work to deliver incremental value within the time boxes. This is a sharp contrast from the past where I'd design the end in mind and then do calculated guesstimates on when I'd be done, how much it would cost and the effort it would take.

    I use rhythm for the time boxes. I use a set of questions to drive the rhythm … When do I need to see results? What would keep the team motivated with tangible results? When do customers need to see something? I realize that when some windows close, they are closed forever. The reality is, as a project stretches over time, risk goes up. People move, priorities change, you name it. When you deal with venture capitalists, a bird in hand today gets you funding for two more in the bush.

    Loren asked me how I know the chunks add up to a bigger value. I told him I start with the end in mind and I use a combination of scenarios and axiomatic design. Simply put, I create a matrix of scenarios and features, and I check dependencies across features among the scenarios. What's the minimum set of scenarios my customers need to have something useful? Can I incrementally add a scenario? Can I take away scenarios at later points and get back time or money without breaking my design? Sounds simple, but you'd be surprised how revealing that last test is. With a coupled design, if you cut the wrong scenario you have a cascading impact on your design that costs you time and money.
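
    Here's a rough sketch of that last test (the scenarios and features are made up for illustration):

```python
# Hypothetical scenario-to-feature matrix: which features each
# customer scenario depends on.
scenario_features = {
    "view report":   {"auth", "report engine"},
    "export report": {"auth", "report engine", "exporter"},
    "schedule run":  {"auth", "scheduler"},
}

def orphaned_features(scenarios, cut):
    # Features that nothing needs once 'cut' is dropped.  If cutting one
    # scenario frees up whole features, the design is loosely coupled and
    # you get that time back; if nothing frees up, the cut buys you little.
    remaining = set().union(*(feats for name, feats in scenarios.items()
                              if name != cut))
    return scenarios[cut] - remaining

print(sorted(orphaned_features(scenario_features, "schedule run")))  # ['scheduler']
print(sorted(orphaned_features(scenario_features, "view report")))   # []
```

    Cutting "schedule run" gives back the scheduler work; cutting "view report" gives back nothing, because its features are shared everywhere — a sign of coupling.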

    We both agreed time boxed projects have a lot of benefits, where some are not obvious. Results breed motivation. By using a time box and rhythms, you change the game from estimating and promising very imprecise variables to a game of how much value can you deliver in a timebox. Unfortunately sometimes contracts or cultures work against this, but I find if I walk folks through it, and share the success stories, they buy in.
