J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

November 2006

  • J.D. Meier's Blog

    Test-Driven Guidance


    When I last met with Rob Caron to walk him through Guidance Explorer, one of the concepts that piqued his interest was test-cases for content.  He suggested I blog it, since it's not common practice and could benefit others.  I agreed.

    If you're an author or a reviewer, this technique may help you.  You can create explicit test-cases for the content.  Simply put, these are the "tests for success" for a given piece of content.  Here's an example of a few test cases for a guideline:

    Test Cases for Guidelines


    Title

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?

    Applies To

    • Do you list technology and version? (e.g. ASP.NET 2.0)

    What to Do

    • Do you state the action to take?
    • Do you avoid stating more than the action to take?


    Why

    • Do you provide enough information for the user to make a decision?
    • Do you state the negative consequences of not following this guideline?


    When

    • Do you state when the guideline is applicable?
    • Do you state when not to use this guideline?


    How

    • Do you state enough information to take action?
    • Do you provide explicit steps that are repeatable?

    Problem Example

    • Do you show a real world example of the problem from experience?
    • If there are variations of the problem, do you show the most common?
    • If this is an implementation guideline, do you show code?

    Solution Example

    • Does the example show the resulting solution if the problem example is fixed?
    • If this is a design guideline, is the example illustrated with images and text?
    • If this is an implementation guideline, is the example in code?

    Additional Resources

    • Are the links from trusted sites?
    • Are the links correct in context of the guideline?

    Related Items

    • Are the correct items linked in the context of the guideline?

    Additional Tests to Consider When Writing a Guideline

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?
    • If the item is a MUST, meaning it is prevalent and high impact, is Priority = p1?
    • If the item is a SHOULD, meaning it has less impact or is only applicable in narrower circumstances, is Priority = p2?
    • If the item is a COULD, meaning it is nice to know about but isn't highly prevalent or impactful, is Priority = p3?
    • If this item will have cascading impact on application design, is Type = Design?
    • If this item should be followed just before deployment, or is concerned with configuration details or runtime behavior, is Type = Deployment?
    • If this item is still in progress or not fully reviewed, is Status = Beta?
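
    Tests like these lend themselves to light automation.  Here's a minimal sketch in Python that encodes the title and priority checks above; the field names ("title", "level", "priority") and the list of action words are my illustration, not part of any published schema:

```python
# Sketch: a few of the guideline test cases expressed as executable checks.
# Field names ("title", "level", "priority") and ACTION_WORDS are
# illustrative assumptions, not an official schema.

ACTION_WORDS = ("do", "avoid", "use", "prefer", "consider", "validate")

def check_title(item):
    """Does the title start with an action word?"""
    return item["title"].split()[0].lower() in ACTION_WORDS

def check_priority(item):
    """MUST -> p1, SHOULD -> p2, COULD -> p3."""
    expected = {"MUST": "p1", "SHOULD": "p2", "COULD": "p3"}
    return item["priority"] == expected[item["level"]]

def run_tests(item):
    """Return the names of the test cases the item fails."""
    checks = {"title": check_title, "priority": check_priority}
    return [name for name, check in checks.items() if not check(item)]

item = {"title": "Avoid per-request database connections",
        "level": "MUST", "priority": "p1"}
print(run_tests(item))  # -> [] (no failures)
```

    A failing item returns the names of the checks it missed, which gives an author a quick to-do list.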

    Benefits to Authors and Reviewers
    The test-cases serve as checkpoints that help both authors and reviewers produce more effective guidance.  While you probably implicitly ask many of these questions, making them explicit makes them a repeatable practice for yourself or others.  I've found questions to be the best encapsulation of the test because they set the right frame of mind.  If you're an author, you can start writing guidance by addressing the questions.  If you're a reviewer, you can efficiently check for the most critical pieces of information.  How much developer guidance exists that does not answer the why or when?  Too much.  As I sift through the guidance I've produced over the years, I can't believe how many times I've missed making the why or when explicit.

    I'm a fan of the test-driven approach to guidance, and here are my top reasons why:

    • I can tune the guidance across a team.  As I see patterns of problems in the quality, I can weed it out by making an explicit test case.
    • I can tailor test cases based on usage scenarios.  For example, in order to use our checklist items for tooling scenarios, our problem and solution examples need to have certain traits.  I can burn this into the test cases.
    • I can bound the information.  When is it done and what does "good enough" look like?  The test case sets a bar for the information.
    • I can improve the precision and accuracy of the information.  By precision, I mean filter out everything that's not relevant.  When it comes to technical information to do my job, I'm a fan of density (lots of useful information per square inch of text).  Verbosity is for story time.

    Examples of Test Cases for Guidance
    I've posted examples of our test-cases for guidance on Channel 9.


    MyLifeBits vs. Mental Snapshots


    Yesterday's snowfall in Redmond was interesting for me.  During my drive home, it was pretty dark, icy and cold.  As I came up Old Redmond Road, I saw an object coming towards me, moving somewhat erratically, that looked too small to be a car.

    It wasn't a car at all.  It was a cross-country skier making his way down the middle of the road, followed by a trail of cars.  I'm not sure at what point the street looked like good skiing and I don't know if he had an exit strategy, but he did seem to be having fun and going the speed limit.

    I had my digital camera with me, but I forgot to use it.  I was more focused on skating my car down the right side of the road.  By the time I got home, I had a bunch of "mental snapshots" of various scenes along the way home, but nothing in hand to share. 

    That got me thinking of the MyLifeBits project.  MyLifeBits is effectively software for "lifelogging" or archiving your life on disk.  Although it's a bit extreme for me, there are times where I wish I automatically had more than just the mental snapshots.


    238 New Items in Guidance Explorer


    Today we published 238 new guidance items in Guidance Explorer.  If you use the offline client, it should automatically synchronize to our online store.

    We're in the process of performing a guidance sweep.  The approach to the sweep is twofold:
    1.  Make existing guidance available in Guidance Explorer.
    2.  Identify user experience issues with the information models and tool design.

    Benefits in GE
    Making existing guidance available in Guidance Explorer involves refactoring existing security guidance and performance guidance.  The benefits of having the guidance available in Guidance Explorer include:

    • you can view across topics (for example, you can see across the security and the performance guidance)
    • you can filter down to exactly the guidance items you need for a given scenario or task
    • you can build multiple custom views based on how you need to use the guidance
    • you can build guides on the fly (you can save a view as a Word doc or HTML files for example)
    • you can tailor the guidance to your scenario (e.g. save an item into your library in GE and edit the guidance to your liking)
    • you can supplement the guidance for your scenario (because GE is also an authoring environment, you can write your own guidance)

    How We Improve Our Guidance
    An underlying strategy in GE was to help users quickly hunt and gather relevant items rather than trying to guess their context and what they need.  In other words, it's a tool to help smart people rather than a smart tool that might get in your way.  This was an important decision because we had to pick a problem we knew we could directly help solve and add value.

    The feedback from customers on existing guidance was that it was great stuff, but there were 3 key problems:
    1.  it's a copy+paste exercise to grab just the guidance you need
    2.  it's not atomic enough (monoliths over bite-sized chunks)
    3.  many of the items, while they read well, were not actionable enough

    That's why we took the following measures on our guidance:

    • split the guidelines and checklists into individual items (we chunked the guidance into units of action)
    • cleaned up our templates for the various guidance types (we gave the chunked items a common look and feel)
    • made the schema explicitly include answers to "why" and "how," as well as problem examples and solution examples (we made the chunks more actionable and verifiable)

    As we port existing guidance to our updated schemas, we often find guidance items lacking key information such as why or how, or example code.
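
    To make "more actionable and verifiable" concrete, here's a small sketch that models a chunked guidance item and flags missing key information.  The field names loosely mirror the guideline sections described earlier, but they are my own illustration, not the actual GE schema:

```python
from dataclasses import dataclass, fields

@dataclass
class GuidanceItem:
    # Hypothetical fields loosely mirroring the guideline sections above.
    title: str
    what_to_do: str = ""
    why: str = ""
    when: str = ""
    how: str = ""
    problem_example: str = ""
    solution_example: str = ""

def missing_sections(item):
    """Return the names of empty sections (e.g. items lacking why or how)."""
    return [f.name for f in fields(item)
            if f.name != "title" and not getattr(item, f.name)]

item = GuidanceItem(title="Avoid chatty remote calls",
                    what_to_do="Batch remote calls into fewer round trips.",
                    why="Network round trips often dominate latency.")
print(missing_sections(item))
# -> ['when', 'how', 'problem_example', 'solution_example']
```

    Once items are chunked this way, a porting sweep can mechanically list which items still lack a why, a how, or example code.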

    Guidance Explorer in Practice
    What's been great so far is that some folks in the field have let me know how they've been using it for customer engagements.  Apparently the ability to customize guidance has resonated very well.  One consultant in particular has used Guidance Explorer for several engagements to save time and effort.  He uses GE as a general-purpose rules and guidelines store.  He's also tailored guidelines and checklists for different audience levels (executive, development leads, architects, developers, PMs) and for different activities (design reviews, code reviews, and deployment reviews).

    A few customers have let me know they are using the UNC share scenario to create guidance libraries for their team development.  They told me they like the idea that it is like a simple typed-wiki that you can act on.  The fact that they can create views and print out docs from the library has been the main appeal.

    The other benefit that more customers are appreciating is the templates for guidelines and checklists.  They like the fact that it starts to simplify authoring as well as sharing prescriptive guidance.  Anybody who has authored guidelines or checklists knows that it's challenging to write actionable guidance that can be reused.  What we're sharing in Guidance Explorer is the benefit of experience and lessons learned over years of producing reusable guidance for various audiences.

    R&D Project
    As a reminder and to keep things in perspective, Guidance Explorer is an R&D project.  While there are immediately tangible benefits, the real focus is on the learnings around user experience so that patterns & practices can improve its ability to author and share guidance, and make progress on helping debottleneck the creation of prescriptive guidance for the software industry.

    You can send feedback on GE directly to the team at getool@microsoft.com.


    Practices Checker for .NET Framework 1.1


    I've had to hunt down Practices Checker for .NET Framework 1.1 a few times, so now I'm posting it.  It was an R&D project to help automate the search and discovery of potential coding practices and configuration settings that do not adhere to the ASP.NET 1.1 Performance checklist.

    It may seem a bit after the fact, given it is .NET 1.1, but there were a few reasons for this:
    1.  Our focus was more on testing how to codify our library of practices than on a specific version.
    2.  We figured adding rules/versions would be easy once we understood the feasibility and work required.
    3.  Our field was still performing code reviews for customers using .NET 1.1, so we could immediately test the impact.

    Key Links

    What You Need to Know

    • It was an R&D project to explore and test options for tooling support around patterns & practices guidelines.
    • Whereas Code Analysis focuses on the .NET Design Guidelines, Practices Checker focuses on patterns & practices guidelines.
    • The tool was designed to help with manual inspection and audit scenarios.  It was not designed for real-time analysis or for use during builds/check-ins.
    • It supplements manual inspection by helping you identify potential places in the code that require further analysis.
    • The user interface is a bit rough and the reports need work.
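
    To make the "identify potential places that require further analysis" idea concrete, here's a minimal sketch of the kind of check such a tool might run.  The rule itself, flagging ASP.NET pages that leave tracing enabled, is my illustration, not an actual Practices Checker rule:

```python
import re

# Sketch: flag @ Page directives that enable tracing, in the spirit of a
# "disable tracing" performance checklist item.  Hypothetical rule, not an
# actual Practices Checker rule.
TRACE_ON = re.compile(r'<%@\s*Page[^%]*Trace\s*=\s*"true"', re.IGNORECASE)

def find_candidates(name, text):
    """Return (file, line-number) pairs that warrant manual review."""
    return [(name, i) for i, line in enumerate(text.splitlines(), 1)
            if TRACE_ON.search(line)]

page = '<%@ Page Language="c#" Trace="true" %>\n<html></html>'
print(find_candidates("Default.aspx", page))  # -> [('Default.aspx', 1)]
```

    Note that a hit is only a candidate for review, not a verdict; the reviewer still decides whether the setting is justified, which is exactly the manual-inspection scenario the tool targets.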

    Key Takeaways
    The takeaways for me in this project were:

    • It's tough to see a bird's eye view across various "rules" libraries (Managed Code Analysis Warnings, patterns & practices guidelines, ... etc.).
    • It's important to know the types of rules your tool does or does not cover (policies, requirements, vulnerabilities, and best practices).
    • It's important to know your various tool options and usage scenarios (e.g. Managed code analysis plugs into check-in policies or part of a build process, custom validators would check deployment at design time, Microsoft Best Practices Analyzer would check deployment at deployment time, Practices Checker would be a manual inspection scenario ... etc.).
    • It's important to know the ecosystem around your "rules" library (e.g. how do you keep your "rules" library up to date).

    I'm continuing to explore various options to manage a library of building codes/practices/rules and then map out which tools can check these items, and where in the life cycle they should be checked.  I've been informally referring to this problem as "policy verification through the life cycle."


    Security Toolbar for VS.NET 2005


    I missed blogging this at the time of release.  The Security Toolbar for VS.NET 2005 was a short R&D project to connect developers to security guidance on MSDN from within VS.NET.  The toolbar has direct links to our indexes for Security Engineering, threat modeling, how-tos, and checklists.

    While this first version is simply a set of links, it's a stepping stone to adding more functionality.  We wanted a way to deliver incremental functionality with a simple interface.  The toolbar can be notified when there's new content or a new version of the toolbar itself.

    The most interesting learning for me was the trade-offs in user experience when designing a toolbar:

    • Who wants yet another toolbar in VS.NET?
    • Menus are great for dealing with multiple options, but there's something to be said for clicking a button.
    • A button can be more visible than a menu option, but who wants more buttons?
    • Icons or text?

    In the end, we went with a hybrid model to take the best of the best:

    • button + drop-down menu to avoid taking up room and maximize real estate usage
    • icon + text so the button is discoverable and the intent is clear

    Key Links:
