J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    How To Write Prescriptive Guidance Modules

    • 3 Comments

    Here's a quick reference for writing guidance modules.  Guidance modules are simply what I call the prescriptive guidance documents we write when creating guides such as Improving Web Application Security, Improving .NET Application Performance and .NET 2.0 Security Guidance.


    Summary of Steps

    • Step 1. Identify and prioritize the tasks and scenarios.
    • Step 2. Identify appropriate guidance module types.
    • Step 3. Create the guidance modules.

    Step 1. Identify and prioritize the tasks and scenarios.
    Task-based content is "executable": you can take the actions prescribed in the guidance and produce a result.  Scenarios bind the guidance to a context.  This helps with relevancy and with evaluating the appropriateness of the recommendations.

    Good candidates for tasks and scenarios include:

    • Problems and pain points
    • Key engineering decisions
    • Difficult tasks
    • Recurring engineering questions
    • Techniques

    In many ways, the value of your prescriptive guidance is a measure of the value of the problem multiplied by how many people share the problem.

    Step 2. Identify appropriate guidance module types.
    Choose the appropriate guidance module type for the job:

    • Guidelines - Use "guidelines" to convey the "what to do" and "why" with minimal "how" (How Tos cover the deep "how"), in a concise way, bound to scenarios and tasks.
    • Checklist Items - Present a verification to perform ("what to check for", "how to check", and "how to fix").
    • How Tos - Use "how tos" to effectively communicate a set of steps to accomplish a task (appeals to "show me how to get it done").

    For a list of guidance module types, templates, and examples see Templates for Writing Guidance at http://www.guidancelibrary.com

    Step 3. Create the guidance modules.
    Once you've identified the prioritized scenarios, tasks, and guidance types, you create the guidance.   This involves the following:

    • Identify specific objectives for your guidance module.  What will the users of your guidance module be able to do?
    • Collect and verify your data points and facts.
    • Solve the problem your guidance module addresses.  This includes performing the solution and testing your solution.
    • Document your solution.
    • Checkpoint your solution with some key questions: when would somebody use this? Why? How?
    • Review your solution with appropriate reviewers.

    Examples of Guidance Modules

  • J.D. Meier's Blog

    Writing Prescriptive Guidelines

    • 2 Comments


    These are some practices we learned in the guidance business to write more effective guidelines:

    • Follow the What To Do, Why and How Pattern
    • Keep it brief and to the point
    • Start with Principle Based Recommendations
    • Provide Context For Recommendations
    • Make the Guidelines Actionable
    • Consider Cold vs. Warm Handoffs
    • Create Thread Killers

    Follow the What to Do, Why and How Pattern
    Start with the action to take -- the "what to do".  This should be pinned against a context.  Context includes the technologies and situations to which the guidance applies.  Be sure to address "why" as well, which exposes the rationale.  Rationale is key for the developer audience.  It's easy to find guidelines missing context or rationale.  Some of the worst guidelines leave you wondering what to actually do.

    Keep It Brief and to the Point
    Avoid "blah, blah, blah". Say what needs to be said as succinctly as possible. Ask "why, why, why?" to everything you write - every paragraph and every sentence. Does it add value? Does it help the reader? Is it actionable?  Often the answer is no. It's hard to do, but it keeps the content "lean and mean".

    Start with Principle-Based Recommendations
    A good principle-based recommendation addresses the question: "What are you trying to accomplish?". Expose guidance based on engineering versus implementation or technology of the day. This makes the guidance less volatile and arguably more useful.  An example principle-based recommendation would be: Validate Input for Length, Range, Format, and Type.  You can then build more specific guidelines for a technology or scenario from this baseline recommendation.

    Provide Context for Recommendations
    Avoid blanket recommendations. Recommendations should have enough context to be prescriptive.  Sometimes this can be as simple as prefixing your guideline with an "if" condition, for example: "If you store sensitive data, encrypt it before writing it to the database."

    Make the Guidelines Actionable
    Be prescriptive, not descriptive. The guideline should be actionable, not just interesting information. Note that considerations are actions, provided you tell the reader what to consider, when, and why. As a general rule, avoid providing too much background or conceptual information. Point off to primer articles, books, etc. for background.

    Choose Warm vs. Cold Handoffs
    If you are sending the reader to another link for a specific piece of information, be explicit.  It's as simple as adding "For more information on xyz, see ..." before your link.  That's a warm handoff.  A cold handoff is simply a list of links, where you expect the reader to follow them and figure out why you sent them there.  The worst is when the links are irrelevant and you added them simply because you could.

    Create Thread Killers
    A "thread killer" is a great piece of information that when quoted or referred to can stop a technical discussion or alias question (a discussion thread) with no further comments. Look at the alias, understand the questions being asked, and tackle the root causes and underlying problems that cause these questions. (Think of the questions as symptoms). Make guidance that nails the problem. A great endorsement is when your "thread killer" is used to address FAQs on discussion aliases and forums.

    Where to See This in Practice
    The following are examples of prescriptive guidelines based on these practices:

  • J.D. Meier's Blog

    Performance Guideline: Use Generics To Eliminate the Cost of Boxing, Casting, and Virtual Calls

    • 2 Comments

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato.

    Use Generics To Eliminate the Cost of Boxing, Casting, and Virtual Calls

    Applies to

    • .NET 2.0

    What to Do

    • Use generics to eliminate the cost of boxing, casting, and virtual calls

    Why
    Generics improve performance by avoiding run-time boxing, casting, and virtual calls.

    The List<Object> class gives better performance than ArrayList.  In one benchmark, a quick-sort of an array of one million integers was three times faster with the generic method than with the non-generic equivalent, because boxing of the values is avoided completely. In another, a quick-sort of an array of one million string references was 20 percent faster with the generic method, because there is no need to perform type checking at run time.   Your results will depend on your scenario. Other benefits of using generics are compile-time type checking, binary code reuse, and clarity of the code.
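    The boxing effect is easy to observe with a micro-benchmark.  The following sketch (illustrative only, not the benchmark cited above; class and method names are made up) times adding and summing one million ints with an ArrayList versus a List<int>.  Absolute numbers will vary by machine and runtime:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;

public class BoxingBenchmark
{
    const int N = 1000000;

    // ArrayList stores Object references, so every int is boxed on Add
    // and unboxed on retrieval.
    public static long TimeArrayList()
    {
        Stopwatch sw = Stopwatch.StartNew();
        ArrayList list = new ArrayList();
        for (int i = 0; i < N; i++)
            list.Add(i);                 // argument is boxed
        long sum = 0;
        foreach (object o in list)
            sum += (int)o;               // explicit unboxing
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    // List<int> stores ints directly; no boxing, no casting.
    public static long TimeGenericList()
    {
        Stopwatch sw = Stopwatch.StartNew();
        List<int> list = new List<int>();
        for (int i = 0; i < N; i++)
            list.Add(i);                 // stored as int
        long sum = 0;
        foreach (int i in list)
            sum += i;                    // no cast required
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }

    public static void Main()
    {
        Console.WriteLine("ArrayList: {0} ms", TimeArrayList());
        Console.WriteLine("List<int>: {0} ms", TimeGenericList());
    }
}
```

    On a typical run the generic version is several times faster, but as the text says, your results will depend on your scenario.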


    When
    Use generics when defining code (a class, structure, interface, method, or delegate) that has to be used by different consumers with different types.

    Consider replacing existing code (a class, structure, interface, method, or delegate) that implicitly casts any type to System.Object and forces the consuming code to cast between Object references and actual data types.

    Consider writing a specialized class only if there is a considerable number of items (>500) to store; otherwise just use the default List<Object> class.
    List<Object> gives better performance than ArrayList because it has a more efficient internal implementation for enumeration.


    How
    The following steps show how to use generics for various types.

    Define a generic class as follows.
        public class List<T>

    Define methods and variables in generic class as follows.
        public class List<T>
        {
            private T[] elements;
            private int count;

            public void Add(T element)
            {
                if (count == elements.Length)
                    Resize(count * 2);
                elements[count++] = element;
            }

            public T this[int index]
            {
                get { return elements[index]; }
                set { elements[index] = value; }
            }
        }

    Access the generic class with the required type as follows.
        List<int> intList = new List<int>();
        intList.Add(1);
        // ...
        int i = intList[0];

    Note: The .NET Framework 2.0 provides a suite of generic collection classes in the class library. Your applications can further benefit from generics by defining your own generic code.

    Problem Example
    An Order Management application stores domain data (Item, Price, etc.) in a cache using ArrayList. ArrayList accepts any type of data and implicitly casts it to Object. Numeric data such as order number and customer id are wrapped from primitive types to Object (boxing) when stored in the ArrayList. Consumer code has to explicitly cast the data from Object back to the specific data type when retrieving it from the ArrayList. Boxing and unboxing require many operations, such as memory allocation, memory copying, and garbage collection, which reduce the performance of the application.

    Example code snippet to add items to an ArrayList and to get items back from the ArrayList:

        ArrayList intList = new ArrayList();

        // Cache data in the array
        intList.Add(45672);           // Argument is boxed
        intList.Add(45673);           // Argument is boxed

        // Retrieve data from cache
        int orderId = (int)intList[0];  // Explicit un-boxing & casting required

    Solution Example
    An Order Management application stores domain data (Item, Price, etc.) in a cache. Using generics avoids the need for run-time boxing and casting and ensures compile-time type checking.

    Implement the defining code for the generic class. A custom generic class is needed only if required; otherwise the default List<T> class can be used.
        // Use <T> to allow consumer code to specify the required type
        class OrderList<T>
        {
            // Consumer-specified type array to hold the data
            T[] elements;
            int count;

            // No implicit casting while adding to the array
            public void Add(T element)
            {
                // Store the element as type T
                elements[count++] = element;
            }

            // Indexer to set or get data
            public T this[int index]
            {
                // Returns data as type T
                get { return elements[index]; }

                // Sets the data as type T
                set { elements[index] = value; }
            }
        }

       Instantiate the class, specifying int as the data type.
        OrderList<int> intList = new OrderList<int>();


        //Cache data in to array
        intList.Add(45672);           // No boxing required
        intList.Add(45673);           // No boxing required


        //Retrieve data from cache
        int orderId = intList[0];  // Un-boxing & casting not required

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use Token Handle Resolution API to get the Metadata for Reflection

    • 0 Comments

    Here's the next .NET Framework 2.0 performance guideline in the series from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato.

     ...

    Use Token Handle Resolution API to get the Metadata for Reflection

    Applies to

    • .NET 2.0

    What to Do
    Use the new .NET 2.0 token handle resolution API, RuntimeMethodHandle, to get the metadata of members when using Reflection.

    Why
    The RuntimeMethodHandle is a small, lightweight structure that defines the identity of a member. It is a trimmed-down version of MemberInfo that provides the metadata for methods and data without populating the .NET 2.0 back-end cache.

    The .NET Framework also provides GetXxx API methods, for example GetMethod, GetProperty, and GetEvent, to determine the metadata of a given Type at runtime. There are two forms of these APIs: the non-plural form, which returns one MemberInfo (such as GetMethod), and the plural form (such as GetMethods).

    The .NET Framework implements a back-end cache for MemberInfo metadata to improve the performance of the GetXxx APIs.

    The caching policy is the same whether a plural or non-plural GetXxx API is called: all members are cached. Such an eager caching policy degrades the performance of non-plural GetXxx API calls.

    RuntimeMethodHandle is approximately twice as fast as the equivalent GetXxx API call if the MemberInfo is not already present in the back-end .NET cache.

    When
    If you need to get the metadata of a given Type at runtime, use the new .NET 2.0 token handle resolution API, RuntimeMethodHandle, for better performance than the traditional GetXxx API calls.

    How
    The following code snippet shows how to get the RuntimeMethodHandle:

     ...
        // Obtaining a handle from a MemberInfo
        RuntimeMethodHandle handle = typeof(D).GetMethod("MyMethod").MethodHandle;
     ...

    The following code snippet shows how to get the MemberInfo metadata back from the handle:

     ...
        // Resolving the Handle back to the MemberInfo
        MethodBase mb = MethodBase.GetMethodFromHandle(handle);
     ...

    Problem Example
    A Windows Forms based application needs to dynamically load the plug-in Assemblies and available Types. The application also needs to determine the metadata of a given Type (methods, members etc) at runtime to execute Reflection calls.

    The plug-in exposes a Type CustomToolBar, which is derived from Type BaseToolBar. The CustomToolBar Type has 2 methods - PrepareCommand and ExecuteCommand. The BaseToolBar Type has 3 methods - Initialize, ExecuteCommand, and CleanUp. To execute the ExecuteCommand method of type CustomToolBar at runtime, the application gets the metadata of that method using the GetXxx API, as shown in the following code snippet.

    Because the .NET Framework implements an eager caching policy, the call to get the metadata for the single ExecuteCommand method also gets the metadata of all five methods of the CustomToolBar and BaseToolBar Types.

        MethodInfo mi = typeof(CustomToolBar).GetMethod("ExecuteCommand");

    The .NET Framework implements a back-end cache for MemberInfo metadata to improve the performance of the GetXxx APIs. The caching policy caches all members by default, irrespective of whether a plural or non-plural API was called. Such an eager caching policy degrades the performance of non-plural GetXxx API calls.

    Solution Example
    A Windows Forms based application needs to dynamically load the plug-in Assemblies and available Types. The application also needs to determine the metadata of a given Type (methods, members etc) at runtime to execute Reflection calls.

    The plug-in exposes a Type CustomToolBar, which is derived from Type BaseToolBar. The CustomToolBar Type has 2 methods - PrepareCommand and ExecuteCommand. The BaseToolBar Type has 3 methods - Initialize, ExecuteCommand, and CleanUp. To execute the ExecuteCommand method of type CustomToolBar at runtime, the application gets the metadata of that method using RuntimeMethodHandle, as shown in the following code snippet. This can improve the performance of the application:

     ...
        // Obtaining a handle from a MemberInfo
        RuntimeMethodHandle handle = typeof(CustomToolBar).GetMethod("ExecuteCommand").MethodHandle;
        // Resolving the Handle back to the MemberInfo
        MethodBase mb = MethodBase.GetMethodFromHandle(handle);
     ...

    If the appropriate MemberInfo is already in the back-end .NET cache, the cost of going from a handle to a MemberInfo is about the same as using one of the GetXxx API calls. If the MemberInfo is not in the cache, RuntimeMethodHandle is approximately twice as fast as the GetXxx API call.
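    As a rough check of the numbers above, the following sketch (the HandleBenchmark type and iteration count are illustrative, not from the guideline) resolves the same method repeatedly via GetMethod and via a cached RuntimeMethodHandle, and prints the elapsed times:

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

public class HandleBenchmark
{
    public void ExecuteCommand() { }

    public static void Main()
    {
        const int iterations = 100000;

        // Obtain the lightweight handle once, up front.
        RuntimeMethodHandle handle =
            typeof(HandleBenchmark).GetMethod("ExecuteCommand").MethodHandle;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Non-plural GetXxx call; goes through the back-end cache.
            MethodInfo mi = typeof(HandleBenchmark).GetMethod("ExecuteCommand");
        }
        sw.Stop();
        Console.WriteLine("GetMethod:           {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Resolve the handle back to a MemberInfo.
            MethodBase mb = MethodBase.GetMethodFromHandle(handle);
        }
        sw.Stop();
        Console.WriteLine("GetMethodFromHandle: {0} ms", sw.ElapsedMilliseconds);
    }
}
```

    Note that the first GetMethod call populates the cache, so a loop like this mostly measures the cached path; the twofold difference claimed above applies to the uncached case.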

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use TryParse Method to Avoid Unnecessary Exceptions

    • 2 Comments

    Prashant Bansode, Bhavin Raichura, and Girisha Gadikere teamed up with Claudio Caldato (CLR team) to create some new performance guidelines for .NET Framework 2.0.  The guidelines use our new guideline template.

     ...


    Use TryParse Method to Avoid Unnecessary Exceptions

    Applies to

    • .NET 2.0

    What to Do
    Use the TryParse method instead of the Parse method when converting string input to a valid .NET data type. For example, use the TryParse method when converting a string input to an integer.

    Why
    The Parse method throws an exception (ArgumentNullException, FormatException, or OverflowException) if the string representation cannot be converted to the respective data type.

    Unnecessarily throwing and handling exceptions has a negative impact on the performance of the application. The TryParse method does not throw an exception if the conversion fails; instead it returns false, and hence avoids the exception-handling performance hit.

    When
    If you need to convert a string representation of a data type to a valid .NET data type, use the TryParse method instead of the Parse method to avoid unnecessary exceptions.

    How
    The following code snippet illustrates how to use the TryParse method:

        ...
        Int32 intResult;
        if (Int32.TryParse(strData, out intResult))
        {
           // process intResult result
        }
        else
        {
          // error handling
        }
        ...

    Problem Example
    Consider a Windows Forms application for creating an invoice. The application takes user inputs for multiple items, such as product name, quantity, price per unit, and date of purchase. The user provides these inputs in text boxes. The user can enter multiple items in an invoice at a given time and then finally submit the data for automatic billing calculation. The application internally needs to convert the string input data to integer (assume so for simplicity). If the user enters invalid data in a text box, the system throws an exception, which has an adverse impact on the performance of the application.

        ...
        private Int32 ConvertToInt(string strData)
        {
            try
            {
                return Int32.Parse(strData);
            }
            catch (Exception)
            {
                return 0; // some default value
            }
        }
        ...

    Solution Example
    Consider a Windows Forms application for creating an invoice. The application takes user inputs for multiple items, such as product name, quantity, price per unit, and date of purchase. The user provides these inputs in text boxes. The user can enter multiple items in an invoice at a given time and then finally submit the data for automatic billing calculation. The application internally needs to convert the string input data to integer (assume so for simplicity). If the user enters invalid data in a text box, the system does not throw an unnecessary exception, which improves the performance of the application.

        ...
        private Int32 ConvertToInt(string strData)
        {
            Int32 intResult;
            if (Int32.TryParse(strData, out intResult))
            {
                return intResult;
            }
            return 0;  // some default value
        }
        ...

    Additional Resources

     

  • J.D. Meier's Blog

    Guidelines Template

    • 1 Comment

    Sometimes the guidelines in our guidance such as Improving Web Application Security, Improving .NET Application Performance and .NET 2.0 Security Guidance are missing some of the important details such as when, why or how.  To correct that, we've created a template that explicitly captures the details.  We use this template in Guidance Explorer.  I'll also be posting .NET Framework 2.0 performance examples that use this new template.

    Guideline Template

    • Title
    • Applies To
    • What to Do
    • Why
    • When
    • How
    • Problem Example
    • Solution Example
    • Related Items

    Test Cases

    The test cases are simply questions we use to help improve the guidance.  The guidelines author can use the test cases to check that they are putting the right information into the template.  Reviewers use the test cases as a check against the content to make sure it's useful. 

    Title

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?

    Applies To

    • Are the applicable versions clear?
       

    What to Do

    • Do you state the action to take?
    • Do you avoid stating more than the action to take?

    Why

    • Do you provide enough information for the user to make a decision?
    • Do you state the negative consequences of not following this guideline?

    When

    • Do you state when the guideline is applicable?
    • Do you state when not to use this guideline?

    How

    • Do you state enough information to take action?
    • Do you provide explicit steps that are repeatable?

    Problem Example

    • Do you show a real world example of the problem from experience?
    • If there are variations of the problem, do you show the most common?
    • If this is an implementation guideline, do you show code?

    Solution Example

    • Does the example show the resulting solution if the problem example is fixed?
    • If this is a design guideline, is the example illustrated with images and text?
    • If this is an implementation guideline, is the example in code?

    Additional Resources

    • Are the links from trusted sites?
    • Are the links correct in context of the guideline?

    Related Items

    • Are the correct items linked in the context of the guideline?

    Additional Tests to Consider When Writing a Guideline

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?
    • If the item is a MUST, meaning it is prevalent and high impact, is Priority = p1?
    • If the item is a SHOULD, meaning it has less impact or is only applicable in narrower circumstances, is Priority = p2?
    • If the item is a COULD, meaning it is nice to know about but isn't highly prevalent or impactful, is Priority = p3?
    • If this item will have cascading impact on application design, is Type = Design?
    • If this item should be followed just before deployment, or is concerned with configuration details or runtime behavior, is Type = Deployment?
    • If this item is still in progress or not fully reviewed, is Status = Beta?
  • J.D. Meier's Blog

    Guidance Explorer Beta 2 Release

    • 2 Comments

    We released Guidance Explorer Beta 2 on CodePlex.  Guidance Explorer is a patterns & practices R&D project to improve finding, sharing and creating prescriptive guidance.  Guidance Explorer features modular, actionable guidance in the form of checklists, guidelines, how tos, patterns … etc.

    What's New with This Release

    • Guidance Explorer now checks for updated guidance against an online guidance store.
    • Source code is available on CodePlex so you can shape or extend Guidance Explorer for your scenario.
    • Guidance Explorer Web Edition is available for quick browsing of the online guidance store.

    Learn More

    Resources

    Feedback
    Send your feedback to getool@microsoft.com.

  • J.D. Meier's Blog

    ASP.NET 2.0 Internet Security Reference Implementation

    • 8 Comments

    The ASP.NET 2.0 Internet Security Reference Implementation is a sample application complete with code and guidance.  Our purpose was to show patterns & practices security guidance in the context of an application scenario. We used Pet Shop 4 as the baseline application and tailored it for an internet facing scenario.  The application uses forms authentication with users and roles stored in SQL.

    Home Page/Download

    3 Parts
    The reference implementation contains 3 parts:

    1. VS 2005 Solution and Code
    2. Reference Implementation Document
    3. Scenario and Solution Document

    The purpose of each part is as follows:

    1. VS 2005 Solution and Code - includes the Visual Studio 2005 solution, the reference implementation doc, and the scenario and solution doc.
    2. Reference Implementation Document (ASP.NET 2.0 Internet Security Reference Implementation.doc) - is the reference implementation walkthrough document containing implementation details and key decisions we made along the way.  Use this document as a fast entry point into the relevant decisions and code.
    3. Scenario and Solution Document (Scenario and Solution - Forms Auth to SQL, Roles in SQL.doc) - is the more general scenario and solution document containing key decisions that apply to all applications in this scenario.

    Key Engineering Decisions Addressed
    We grouped the key problems into the following buckets:

    • Authentication
    • Authorization
    • Input and Data Validation
    • Data Access
    • Exception Management
    • Sensitive Data
    • Auditing and Logging

    These are actionable, potential high risk categories.  These buckets represent some of the more important security decisions you need to make that can have substantial impact on your design.  Using these buckets made it easier to both review the key security decisions and to present the decisions for fast consumption.

    Getting Started

    1. Download and install the ASP.NET 2.0 Internet Security Reference Implementation.
    2. Use ASP.NET 2.0 Internet Security Reference Implementation.doc to identify the code you want to explore
    3. Open the solution, Internet Security Reference Implementation.sln, and look into the details of the implementation
    4. If you're interested in testing SSL, then follow the instructions in  SSL Instructions.doc.

     

  • J.D. Meier's Blog

    patterns & practices Guiding Principles

    • 1 Comment

    As part of our technical strategy on the patterns & practices team, we created a set of guiding principles for our product development teams:

    • Long-term customer success
    • Help customers balance the tension between new product features and application portfolio stability
    • Collaborative, transparent execution
    • Explicit intent and rigorous prioritization
    • Quality over scope
    • Context precision over one-size-fits-all
    • Framework for evolution and innovation over fixed solutions
    • Skeletal over full-featured
    • Modular over monolithic
    • Easy to adopt, easy to adapt, easy to consume incrementally

    If you're familiar with Stephen Covey, you know he espoused using principles to govern actions without needing an explicit rule for every situation.  An advantage of this is that you leave flexibility in the tactics while still guiding the outcome.

    To create the principles, we used the approach of the values in the Agile Manifesto, which contrasts one value with another (e.g., customer collaboration over contract negotiation). Using this approach, we thought about the outcomes we wanted to move away from and the outcomes we wanted to achieve.

  • J.D. Meier's Blog

    Test Our patterns and practices Guidance Explorer

    • 3 Comments

    I've been relatively quiet these past few weeks, getting ready to release our patterns & practices Guidance Explorer. Guidance Explorer is a new, experimental tool from the patterns & practices team that radically changes the way you consume guidance as well as the way we create it. If you've felt overwhelmed looking across multiple sources for good security or performance guidance, then Guidance Explorer is the tool for you. You can use one tool to access a comprehensive, up-to-date collection of modular guidance that will help you with your tough development tasks and design decisions. Guidance Explorer also lets you create and distribute a set of standard best practices that your team can adhere to for performance and security. The project includes the tool, Guidance Explorer, and a library of guidance for developers, Guidance Library. The Guidance Library will be updated weekly, ensuring you always have the most up-to-date information.

    What's In It For You

    • If you build software with the .NET Framework, use Guidance Explorer to find the "building codes" for the .NET technologies, in terms of security and performance. They are complementary to FxCop rules.
    • If you want to set development standards and best practices for your team, use Guidance Explorer views to build and then distribute your team’s standard rule-set.
    • If you author guidance for development teams, use Guidance Explorer to create guidance for your teams in a more efficient and effective way by leveraging our templates, information models, key concepts, and tooling support.

    What is Guidance Explorer
    Guidance Explorer is a client-side tool that lets you find, filter, and sort guidance. You can organize custom guidance collections into persistent views and share these views with others. You can also save these custom views of the guidance as indexed Word or HTML documents. You can browse guidance by source, such as the patterns & practices team. You can also browse by topic, such as security or performance, or by technology, such as ASP.NET 1.1 or ASP.NET 2.0. Within a given topic or technology, you can then browse guidance within more fine-grained categories. For example, within security, you can browse by input/data validation, authentication, authorization, etc.

    Guidance Explorer was designed to simplify the creation and distribution of custom guidance. To author guidance, Guidance Explorer includes a simple editor that uses templates for guidance. Each template includes a schema and test cases. For example, each guideline item should include what to do, why, how, a problem example, and a solution example, as well as related items and where to go for more information. We created these templates by analyzing what's working and not working across the several thousand pages of security and performance guidance we've written over the past several years.

    What is Guidance Library
    Guidance Library is the collection of knowledge that is viewable by Guidance Explorer. It's organized by types, such as guidelines and checklists. Each type has a specific schema and test cases against that schema to help enforce quality. The library is also organized by topics, such as security and performance. The library is extensible by design so that we can add new types and new topics that prove to be useful.

    Not every type of guidance goes into the guidance library. For example, you don't use it to find monolithic guides or PDFs. The most important criterion for the modules in the library is that they are atomic units of action. They can be directly tested for relevancy. They can also be tested for the results they produce and how repeatable those results are.

    How To Get Started 

    1. Join the Guidance Explorer project
    2. Download Guidance Explorer
    3. Watch the video tutorials

    The key to getting started is getting the tool up and running so you can play with it, and watching the short videos (1-2 minutes long) to learn the main features and usage scenarios.

    Your First Experiment with Guidance Explorer
    For your first test with Guidance Explorer, try creating a Word doc that has just the guidelines you want. 

    To run your first experiment:

    1. Create a custom view of the guidance
    2. Save the view of the guidance as a Word doc

    How To Get Involved

    1. Join the CodeGallery workspace.  To join the workspace, sign up at the Guidance Explorer Home on Codegallery
    2. Subscribe to the RSS feeds.   To subscribe to the feeds, use the RSS buttons in each section of Guidance Explorer Home on Codegallery
    3. Participate in the newsgroups.  To participate in the newsgroups, use the Guidance Explorer Message Boards on Codegallery
    4. Provide feedback on the alias.  To do so, send email to GETOOL at Microsoft.com.

    What's Next
    These are some of the ideas we'd like to implement:

    • VS.NET integration
    • Refactoring additional guidance (e.g. existing patterns & practices guidance such as the data access, exception management, and caching guidance)
    • New guidance types (such as "test cases", "code examples", "project patterns", "whiteboard solutions")
    • New topics (such as reliability, manageability, and so on)
    • Integrating bodies of guidance and ecosystems (such as integration with product documentation)

    I also hope to create a model for "Guidance Feeds", where you can subscribe to relevant guidance, as well as integrate many of the emerging social software concepts, such as allowing the network/community to rate the guidance, rate the raters and contributors, and create community-driven, shareable custom views.

    About Our Team
    Our core team consists of:

    • Prashant Bansode.   Prashant was a core member of my Whidbey Security Guidance Project, so he's very seasoned.   I chose him specifically because of his unmatched ability to execute, and because he is one of the best customer champions I know.  What surprised me about Prashant is his ability to not only manage his own work, but help guide others, and he really gets how to deliver incremental value.
    • Diego Gonzalez.  Diego is a coding machine.  He's also capable of bridging dreams and reality with working models.   Usually, by the time you've finished your sentence on what you'd like to see, Diego's already implementing it. 
    • Ed Jezierski.   Ed's simply brilliant.  I've never seen a more impressive mix of people focus and technical expertise.  If you can dream it up, Ed can make it happen.  If you can't dream it up, Ed can dream it up for you.  Just insert a random wish and Ed can turn it into a working prototype, and incredible slideware to match.  Ed brings to the table a ton of social software concepts and ideas around taking guidance to the next level.  I've worked with Ed for many years, but it's been a while since we've partnered up on the same project.  I look forward to many brainstorms, whiteboard sessions, and off the deep end conversations over lunch.
    • Ariel Neisen.  Ariel is a developer on the team.  Ariel works for Lagash with Diego and has been Diego's coding partner.
    • Mike Reinstein.   Mike works for Security Innovation.  He's a Web application security expert.  He not only brings security development and design experience, but strong technical writing skills that have contributed to exceptional content.
    • Paul Saitta.   Paul is previously a member of IO Active, now working for Security Innovation.  He's an expert in Web applications and white-box security audits.  He's been able to distill thousands of hours of customer audits into prescriptive guidance that illuminates common mistakes in the real world.
    • Jason Taylor.   I first met Jason during my Whidbey Security Guidance Project.  He impressed me with his ability to think on his feet, execute at a rate faster than most people can ever imagine, and his ability to distill and document expertise at a level few individuals can go.  Jason has 7 years Microsoft experience under his belt, and was one of Microsoft's first test-architects.  Now he's a V.P. for Security Innovation's security consulting group.  Aside from bringing a wealth of security experience to the table,  Jason has a lot of ideas around how to improve guidance for customers in very practical ways.


    Security WebCast - Using Security Code Reviews to Quickly and Effectively Improve the Security of Your Applications


    Rudolph Araujo (or Rudy, as we call him) of Foundstone is doing a webcast on performing security code reviews: Using Security Code Reviews to Quickly and Effectively Improve the Security of Your Applications.

    In his Web Cast, Rudy will accomplish the following:

    • Show you key effective strategies for security code reviews
    • Briefly discuss threat modeling and its benefits
    • Discuss how security code review and threat modeling are critical yet just part of an overall software security engineering process

    One of the most important things Rudy will show you is how to use control flow analysis and data flow analysis to analyze application security.  Rudy will also show you how to chunk up your security analysis using security categories such as authentication, authorization, and input/data validation to perform incremental and iterative analysis.
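    The data flow side of this analysis can be approximated with simple taint tracking: mark values that come from untrusted sources, propagate the mark through assignments, and flag any marked value that reaches a dangerous sink without passing a sanitizer. A minimal sketch (the source, sanitizer, and sink names are illustrative, not from the webcast):

```python
# Minimal taint-tracking sketch: propagate "tainted" labels from untrusted
# sources through assignments, and flag any tainted value reaching a sink.
SOURCES = {"request.getParameter"}      # untrusted input (illustrative)
SANITIZERS = {"html_encode"}            # cleansing functions (illustrative)
SINKS = {"response.write"}              # dangerous outputs (illustrative)

def analyze(statements):
    """Each statement is (target, function, argument). Returns findings."""
    tainted = set()
    findings = []
    for target, func, arg in statements:
        if func in SOURCES:
            tainted.add(target)                 # a source introduces taint
        elif func in SANITIZERS:
            tainted.discard(arg)                # a sanitizer clears taint
            if target:
                tainted.discard(target)
        elif func in SINKS and arg in tainted:
            findings.append(f"tainted '{arg}' reaches sink '{func}'")
        elif func is None and arg in tainted:
            tainted.add(target)                 # plain assignment propagates
    return findings

flow = [
    ("name", "request.getParameter", "name"),   # name = untrusted input
    ("copy", None, "name"),                     # copy = name (taint spreads)
    (None, "response.write", "copy"),           # tainted data hits the sink
]
print(analyze(flow))   # one finding: 'copy' reaches 'response.write'
```

    Real reviews do this by reading the code, but the mental model is the same: trace untrusted input forward until it is either sanitized or reaches a sink.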

    Rudy has worked closely with our patterns & practices security team over the years, so he's intimately familiar with our security code review approach and Security Engineering (short-cut: http://msdn.com/SecurityEngineering).  In fact, Rudy played a key role during the development of our How To: Perform a Security Code Review for Managed Code (Baseline Activity), where you can see Rudy listed as a contributing author.

    Event Information

    • Title: Using Security Code Reviews to Quickly and Effectively Improve the Security of Your Applications 
    • When: May 24th 
    • Time: 11:00 AM - 12:00 PM (Pacific)
    • Event Registration page 

    Performance and Scalability Checkpoint to Improve Your Software Engineering


    When a patterns & practices deliverable was ready to ship, our General Manager (GM) would ask me to sign off on the performance and security.  I was usually pulled thin, so I needed a way to scale.  To do so, I created a small checkpoint for performance and scalability.  The checkpoint was simply a set of questions that act as a forcing function to make sure you've addressed a lot of the basics (and avoid a lot of "do-overs").  Here's what we used internally:

    Checkpoint: Performance and Scalability

    Customer Validation

    1. List 3-5 customers that say performance/scalability for the product is a "good deal" (e.g. they pay for play)

    Product Alignment

    1. Do you have alignment w/the future directions of the product team?
    2. Who from the product team agrees?

    Supportability

    1. Has Product Support Services (PSS) reviewed and signed off?
    2. Which newsgroups would a customer go to if performance and scalability problems occur?

    Performance Planning

    1. Performance model created (performance modeling template)?
    2. Budget.  These are performance and scalability constraints.  What are the maximum acceptable values for the following?
      1. Response time?
      2. Maximum operating CPU usage?
      3. Maximum network bandwidth usage?
      4. Maximum memory usage?
    3. Performance Objectives
      1. Workload
      2. % of overhead over relevant baselines (e.g. Within 5% performance degradation from version 1 to version 2)
      3. Response Time
      4. Throughput
      5. Resource Utilization (CPU, Network, Disk, Memory)
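    A budget like this is most useful when it's checked mechanically rather than eyeballed. A minimal sketch of such a check (the threshold values are placeholders for illustration, not recommendations):

```python
# Compare measured values against the performance budget; any breach fails
# the checkpoint. Thresholds are placeholders, not recommendations.
BUDGET = {
    "response_time_ms": 2000,   # maximum acceptable response time
    "cpu_percent": 75,          # maximum operating CPU usage
    "network_mbps": 40,         # maximum network bandwidth usage
    "memory_mb": 512,           # maximum memory usage
}

def check_budget(measured, budget=BUDGET):
    """Return (metric, measured, limit) for every breached limit."""
    return [(k, measured[k], limit)
            for k, limit in budget.items()
            if measured.get(k, 0) > limit]

measured = {"response_time_ms": 1850, "cpu_percent": 82,
            "network_mbps": 12, "memory_mb": 430}
print(check_budget(measured))   # CPU is over budget: [('cpu_percent', 82, 75)]
```

    Running a check like this against every performance test run turns the budget from a document into a gate.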

    Hardware/Software Requirements

    1. List requirements for customer installation.
      1. Hardware?
      2. Minimum hardware requirements?
      3. Ideal hardware requirements?
      4. Minimum software required:

    Performance Testing

    1. Lab Environment.  What is the deployment scenario configuration you used for testing?
      1. Hardware?
      2. CPU?
      3. # of CPUs?
      4. RAM?
      5. Network Speed?
    2. Peak conditions.  What does peak condition look like?
      1. How many users?
      2. Response time?
      3. Resource Utilization?
      4. Memory?
      5. CPU?
      6. Network I/O?
    3. Capacity.  How many users until your response time or resource utilization budget is exceeded?
      1. What is the glass ceiling? (e.g. the breaking point)
      2. Number of users?
      3. Response time?
      4. Resource Utilization?
      5. CPU?
      6. Network?
      7. Memory?
    4. Failure.  What does failure look like in terms of performance and scalability?
      1. Does the application fail gracefully?
      2. What fails and how do you know?
      3. Response time exceeds a threshold?
      4. Resource utilization exceeds thresholds? (CPU too much?)
      5. What diagnostic/monitoring clues do you see:
      6. Exceptions?
      7. Event Log entries?
      8. Performance counters to watch?
    5. Stress Scenarios.  What does stress look like and how do you respond?
      1. Contention?
      2. Memory Leaks?
      3. Deadlocks?
      4. What are the first bad signs under stress?

    Instrumentation

    1. What is the technology/approach used for instrumenting the codebase?
      1. Are the key performance scenarios instrumented?
      2. Is the instrumentation configurable (On/off? Levels of granularity?)?
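    Configurable instrumentation of the kind asked about above can be as simple as routing timing probes through the standard logging machinery, so granularity is controlled by log level rather than code changes. A hedged sketch (the logger name and scenario name are made up):

```python
import logging
import time
from contextlib import contextmanager

log = logging.getLogger("perf")   # instrumentation channel (name is illustrative)

@contextmanager
def timed(scenario):
    """Emit a duration record for a key performance scenario."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("%s took %.1f ms", scenario, elapsed_ms)

# Granularity is a configuration decision: INFO turns the probes on,
# WARNING (or higher) turns them off with no code changes.
logging.basicConfig(level=logging.INFO)
with timed("load_customer_orders"):
    sum(range(100_000))   # stand-in for real work
```

    The same on/off and granularity questions in the checkpoint then reduce to logger configuration.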

    The checkpoint helped the engineering team shape their approach, and it simplified my job when I had to review.  You can imagine how some of these questions can shape your strategies.  This is by no means exhaustive, but it was effective enough to tease out big project risks.  For example, do you know when your software's going to hit capacity?  Did you completely ignore the customer's practical environment and use up all their resources?  Do you have a way to instrument for when things go bad, and is this configurable?  When your software is in trouble, what sort of support did you enable for troubleshooting and diagnostics?

    While I think the original checkpoint was helpful, I think a pluggable set of checkpoints based on application types would be even more helpful and more precise.  For example, if I'm building a Web application, what are the specific metrics or key instrumentation features I should have?  If I'm building a smart client, what sort of instrumentation and metrics should I bake in?  If and when I get to a point where I can do more checkpoints, I'll use a strategy of modular, type-specific, scenario-based checkpoints to supplement the baseline checkpoint above.


    Strategic Stories


    I'm realizing more and more how stories help you drive a point home.  It's one thing to make a point; it's another for your story to make the point for you.  If your ideas aren't sticking, or you're not getting buy-in, maybe a compelling story is the answer.

    Crafting useful stories is an art and, now, apparently a science.  Srinath pointed me to Stories at Work on 50Lessons.com.  The video shares a story about using stories as a catalyst for change, along with a recipe for good strategic stories:

    1. Make stories short (1-2 minutes) so they can be retained
    2. Limit stories to no more than 2 or 3 characters, so they're easy to follow
    3. Build your story around a singular message
    4. Tell your story in the present tense so participants can relate
    5. Use powerful images to tie to a theme
    6. Repeat a phrase or word that is the essence of your message

    The value of stories is that they engage people and have more powerful recall than slides, facts, and figures.


    Security Innovation Security Engineering Study



    The Security Innovation Security Engineering study,  Comparing Security in the Application Lifecycle - Microsoft and IBM Development Platforms Compared, is timely, given the emerging industry emphasis on integrating security in the life cycle. 

    My favorite quote in the study is "The patterns & practices security guidance covers the key security engineering activities better than any other resource we’ve found."  I think this reflects the fact that we have more than 2,500 pages of security guidance (see Security Guidance, Security Engineering, Threat Modeling, and Improving Web Application Security), and we've integrated our guidance into MSF/VS 2005 (see MSF/VS 2005 and p&p Integration).

    The study was available from the MSDN Security DevCenter for a while but seems to have fallen off.  I've summarized the study here for quick reference:

    Overview
    Security Innovation evaluated the guidance and tools of Microsoft's and IBM's development platforms.  The study compared the support available to a development team via security guidance, documentation, and security-focused features in the life-cycle tool suites.  Gartner reviewed the approach.


    Evaluation Criteria

    • Coverage.  How well do the provided tools and guidance cover the key set of security areas?
    • Quality.  How effective and accurate are the tools and guidance?
    • Visibility.  How easy is it to find the tools and guidance and then apply it to your security needs?
    • Usability.  Are the tools and guidance precise, comprehensive, and easy to use?

    Ratings

    • Outstanding: 81-100%
    • Good: 61-80%
    • Average: 41-60%
    • Below Average: 21-40%
    • Poor: 0-20%

    Scorecard Categories

    • Basic Platform Security.  When used in accordance with its documentation, a platform should be inherently secure.
    • Platform Security Services.  A mature platform should include services that make it easier for developers to implement security features in their applications.
    • Platform Security Guidance. A secure platform is much less useful if it lacks proper guidance.
    • Software Security Engineering Guidance.  It is not possible to develop a secure application unless security is a focus during every phase of the development lifecycle.
    • Security Tools.  A secure platform should include tools that make it easier to define, design, implement, test, and deploy a secure application.

    Results of the Study

    First, here are a couple of key points; the summaries are below:

    • Microsoft beat IBM in every category around guidance.
    • Microsoft beat IBM in three out of four categories around tools.


    IBM

    1. Platform Overall
      1. Overall: 36%
      2. Coverage: 62%
      3. Quality: 70%
      4. Visibility: 17%
      5. Usability: 72%
    2. Platform Security Guidance
      1. Overall: 50%
      2. Coverage: 81%
      3. Quality: 85%
      4. Visibility: 17%
      5. Usability: 84%
    3. Security Engineering Guidance
      1. Overall: 25%
      2. Coverage: 50%
      3. Quality: 64%
      4. Visibility: 17%
      5. Usability: 69%
    4. Security Tools
      1. Overall: 32%
      2. Coverage: 55%
      3. Quality: 59%
      4. Visibility: 56%
      5. Usability: 63%

    Microsoft

    1. Platform Overall
      1. Overall: 67%
      2. Coverage: 88%
      3. Quality: 85%
      4. Visibility: 61%
      5. Usability: 80%
    2. Platform Security Guidance
      1. Overall: 76%
      2. Coverage: 93%
      3. Quality: 85%
      4. Visibility: 67%
      5. Usability: 91%
    3. Security Engineering Guidance
      1. Overall: 78%
      2. Coverage: 100%
      3. Quality: 89%
      4. Visibility: 67%
      5. Usability: 79%
    4. Security Tools
      1. Overall: 47%
      2. Coverage: 71%
      3. Quality: 78%
      4. Visibility: 50%
      5. Usability: 68%

    Quotes from the Study

    • Microsoft’s overall rating of 67% reflects the impressive level of focus Microsoft has applied to application security in the past several years.
    • IBM’s overall score of 36% is the result of a more disjointed approach to security.  Security guidance is spread throughout the IBM web site and is difficult to discover.
    • The patterns & practices security guidance covers the key security engineering activities better than any other resource we’ve found.

    More Information
    For more information, see Comparing Security in the Application Lifecycle -
    Microsoft and IBM Development Platforms Compared
    at Security Innovation's site.  They created four documents that take you through the details and results: Executive Summary, Research Overview, Full Detailed Reports and Results, and Methodology.
      


    OpenHack 4 (eWeek Labs): Web Application Security


    Whenever I bring up the OpenHack 4 competition, most people aren't aware of it.  It was an interesting study because it was effectively an open "hack me with your best shot" competition.

    I happened to know the folks on the MS side, like Erik Olson and Girish Chander, who helped secure the application, so it had some of the best available security engineering.  In fact, customers commented that it's great that Microsoft can secure its applications ... but what about its customers?  That comment was the inspiration for our Improving Web Application Security: Threats and Countermeasures guide.

    I've summarized OpenHack 4 here, so it's easier for me to reference.

    Overview of OpenHack 4
    In October 2002, eWeek Labs launched its fourth annual OpenHack online security contest.  It was designed to test enterprise security by exposing systems to the real-world rigors of the Web.  Microsoft and Oracle were given a sample Web application by eWeek and were asked to redevelop the application using their respective technologies.  Individuals were then invited to attempt to compromise the security of the resulting sites.  Acceptable breaches included cross-site scripting attacks, dynamic Web page source code disclosure, Web page defacement, posting malicious SQL commands to the databases, and theft of credit card data from the databases used.
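    Of those acceptable breaches, malicious SQL commands are the easiest to illustrate: parameterized queries keep attacker input as data instead of executable SQL. A small sketch using Python's sqlite3 module (the table and values are made up; the .NET analogue is a parameterized SqlCommand):

```python
import sqlite3

# Made-up schema and data for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '0000')")

# Attacker-supplied value: concatenated into the SQL string, this would
# match every row; as a bound parameter, it is just an unmatched literal.
evil = "x' OR '1'='1"

rows = conn.execute(
    "SELECT card FROM users WHERE name = ?", (evil,)   # bound parameter
).fetchall()
print(rows)   # [] -- the injection payload matched nothing
```

    The OpenHack application applied this principle (among many others) across its data access layer.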

    Outcome of the Competition
    The Web site built by Microsoft engineers using the Microsoft .NET Framework, Microsoft Windows 2000 Advanced Server, Internet Information Services 5.0, and Microsoft SQL Server 2000 successfully withstood over 82,500 attempted attacks to emerge from the eWeek OpenHack 4 competition unscathed.

    More Information

    For more information on implementation details of the Microsoft Web application and configuration used for the OpenHack competition, see "Building and Configuring More Secure Web Sites: Security Best Practices for Windows 2000 Advanced Server, Internet Information Services 5.0, SQL Server 2000, and the .NET Framework"


    @Stake Security Study: .NET 1.1 vs. WebSphere 5.0


    I like competitive studies.  I'm usually more interested in the methodology than the outcome.  The methodology acts as a blueprint for what's important in a particular problem space. 

    One of my favorite studies was the original @stake study comparing .NET 1.1 vs. IBM's WebSphere security, not just because our body of guidance made a direct and substantial difference in the outcome, but because @stake used a comprehensive set of categories and an evaluation criteria matrix that demonstrated a lot of depth.

    Because the information from the original report can be difficult to find and distill, I'm summarizing it below:

    Overview of Report
    In June 2003, @stake, Inc., an independent security consulting firm, released the results of a Microsoft-commissioned study that found Microsoft's .NET platform to be superior to IBM's WebSphere for secure application development and deployment.  @stake performed an extensive analysis comparing security in the .NET Framework 1.1, running on Windows Server 2003, to IBM WebSphere 5.0, running on both Red Hat Linux Advanced Server 2.1 and a leading commercial distribution of Unix.


    Findings
    Overall, @stake found that:

    • Both platforms provide infrastructure and effective tools for creating and deploying secure applications
    • The .NET Framework 1.1 running on Windows Server 2003 scored slightly better with respect to conformance to security best practices
    • The Microsoft solution scored even higher with respect to the ease with which developers and administrators can implement secure solutions

    Approach
    @stake evaluated the level of effort required for developers and system administrators to create and deploy solutions that implement security best practices, and to reduce or eliminate most common attack surfaces.


    Evaluation Criteria

    • Best practice compliance.  For a given analysis topic, to what degree did the platform permit implementation of best practices?
    • Implementation complexity.   How difficult was it for the developer to implement the desired feature?
    • Documentation and examples.  How appropriate was the documentation? 
    • Implementor competence.  How skilled did the developer need to be in order to implement the security feature?
    • Time to implement.  How long did it take to implement the desired security feature or behavior? 


    Ratings for the Evaluation Criteria

    1. Best Practice Compliance Ratings
      1. Not possible
      2. Developer implement
      3. Developer extend
      4. Wizard
      5. Transparent
    2. Implementation Complexity Ratings
      1. Large amount of code
      2. Medium amount of code
      3. Small amount of code
      4. Wizard +
      5. Wizard
    3. Quality of Documentation and Sample Code Ratings
      1. Incorrect or Insecure
      2. Vague or Incomplete
      3. Adequate
      4. Suitable
      5. Best Practice Documentation
    4. Developer/Administrator Competence Ratings
      1. Expert (5+ years of experience)
      2. Expert/intermediate (3-5 years of experience)
      3. Intermediate
      4. Intermediate/novice
      5. Novice (0-1 years of experience)
    5. Time to Implement
      1. High (More than 4 hours)
      2. Medium to High (1 to 4 hours)
      3. Medium (16-60 minutes)
      4. Low to Medium (6-15 minutes)
      5. Low (5 minutes or less)


    Scorecard Categories
    The scorecard was organized by application server, host and operating system, and Web server categories.  Each category was divided into smaller categories to test the evaluation criteria (best practice compliance, implementation complexity, quality of documentation, developer competence, and time to implement).

    Application Server Categories

    1. Application Logging Services
      1. Exception Management
      2. Logging Privileges
      3. Log Management
    2. Authentication and Access Control
      1. Login Management
      2. Role Based Access Control
      3. Web Server Integration
    3. Communications
      1. Communication Security
      2. Network Accessible Services
    4. Cryptography
      1. Cryptographic Hashing
      2. Encryption Algorithms
      3. Key Generation
      4. Random Number Generation
      5. Secrets Storage
      6. XML Cryptography
    5. Database Access
      1. Database Pool Connection Encryption
      2. Data Query Safety
    6. Data Validation
      1. Common Validators
      2. Data Sanitization
      3. Negative Data Validation
      4. Output Filtering
      5. Positive Data Validation
      6. Type Checking
    7. Information Disclosure
      1. Error Handling
      2. Stack Traces and Debugging
    8. Runtime Container Security
      1. Code Security
      2. Runtime Account Privileges
    9. Web Services
      1. Credentials Mapping
      2. SOAP Router Data Validation

    Host and Operating System Categories

    1. IP Stack Hardening
      1. Protocol Settings
    2. Service Minimization
      1. Installed Packages
      2. Network Services

    Web Server Categories

    1. Architecture
      1. Security Partitioning
    2. Authentication
      1. Authentication Input Validation
      2. Authentication Methods
      3. Credential Handling
      4. Digital Certificates
      5. External Authentication
      6. Platform Integrated Authentication
    3. Communication Security
      1. Session Encryption
    4. Information Disclosure
      1. Error Messages and Exception Handling
      2. Logging
      3. URL Content Protection
    5. Session Management
      1. Cookie Handling
      2. Session Identifier
      3. Session Lifetime

    More Information
    For more information on the original @stake report, see the eWeek.com article, .Net, WebSphere Security Tested.


    Flawless Execution


    In the book Flawless Execution, James D. Murphy shares techniques used by fighter pilots to achieve peak performance, accelerate the learning curve, and make performance more predictable and repeatable.

    The essence of the execution engine is a set of iterative steps:

    • Plan.  Generate multiple courses of action, evaluate them, and take the best parts.
    • Brief.  Tell everybody how we're going to carry out the plan and what we're going to do today.
    • Execute.  Act out the script you created in the brief.
    • Debrief.  Evaluate execution errors and successes.

    Murphy connects the execution framework to the strategy.  If they aren't aligned, you can win the battle but lose the war.  He distinguishes strategy from tactics by saying strategy is about four things:

    1. Where are we going to be?
    2. What are we going to apply resources for or against?
    3. How are we going to do this?
    4. When are we going to stop doing this?

    Murphy is very prescriptive.  For every technique, there's a set of steps and checkpoints.  I've successfully scaled down some of the techniques, such as Future Picture, to meet my needs. 

    What I like about the overall execution framework is that its practices are drawn from life-and-death scenarios.  Fighter pilots need to learn what works from their missions and share it as quickly as possible.  What I also like is that Murphy illustrates how ordinary people are capable of execution excellence.


    Meeting with Gabriel and Sebastian from PreEmptive


    I met with Gabriel Torok and Sebastian Holst of PreEmptive Solutions the other day.  PreEmptive makes obfuscator products, including the Dotfuscator that comes with Visual Studio.  Gabriel founded PreEmptive more than 10 years ago, and it was originally a code optimization company.  (The dual focus on performance and security resonates with me.)

    I was familiar with obfuscation and its limitations.  I wasn't as familiar with some of the internals of specific obfuscation techniques, such as identifier renaming, control flow obfuscation, metadata removal, and string encryption, or how you can tweak or tune these.  One surprise for me was that obfuscation in some scenarios could yield a 30-40% reduction in size (the result of shortening identifier names and "pruning" libraries that are never called).

    Gabriel's interested in creating obfuscation guidance for the community.  I gave him my wish list:

    • Scenarios.  Illustrate how obfuscation fits in, including high level, when to use or not to use, as well as lower-level choices among obfuscation techniques.
    • Anatomy of obfuscation.  From a developer mindset, this means knowing how things work.  Walk me through the bits and pieces and the execution flow.
    • Impact and surprises.  If I introduce obfuscation, tell me things I might run into.  For example, is there impact to my build process?  What about servicing my codebase?  Is there a difference between obfuscating with ASP.NET using dynamic compilation vs. pre-compilation?  What if you need to GAC?  What if you sign your code with an X.509 cert?
    • Performance considerations.  Take me through trade-offs and scenarios where performance can be improved or degraded.  Give me insight into how to influence or tweak my obfuscation approach.

    Sebastian and I exchanged some metaphors.  In reference to the limits of obfuscation, I said that just because door locks don't prevent car thieves, that doesn't mean cars should come without locks.  Sebastian related it to smoke alarms.  In the grand scheme of things, smoke alarms play a key role in saving lives and limiting damage, but to the individual, there's not a lot of value until a fire occurs.  The fact that smoke alarms are low cost and simple helps justify their common use.  He added that risk varies by context, so the value to hotels or restaurants may be more obvious.

    It was an interesting and insightful meeting and I look forward to Gabriel's whitepaper.


    Code Example Schema for Sharing Code Insights


    This is the emerging schema and test-cases we're using for code examples:

    Code Example Schema (Template)

    • Title
    • Summary
    • Applies To
    • Objectives
    • Solution Example
    • Problem Example
    • Test Case
    • Expected Results
    • Scenarios 
    • More Information
    • Additional Resources


    Code Example Schema (Template explained and test cases)

    Title
    Insert title that resonates from a findability perspective.

    Test Cases:

    • Does the title distinguish it from related examples?
    • If technology is in the title, is the version included?

    Summary 
    Insert 1-2 lines max description of the intent.

    Test Cases:

    • Does the description crisply summarize the solution?
    • Is the intent of the code clear?

    Applies To
    Insert the key technologies/platform the code applies to.

    Test Cases:

    • Versions are clear?

    Objectives 
    Insert bulleted list of task-based outcomes for the code.

    Test Cases:

    • Is it clear why the solution example is preferred over the problem example?

    Solution Example
    Insert code example as a blob within a function.  The blob allows quick reading of the code.  It also allows quickly testing from a function, inlining within other code, or refactoring for a given context.  The alternative is to factor up front, but this increases complexity and can negatively impact consumption.  This leaves refactoring to the developer for their given scenario.

    Test Cases:

    • Is the code example organized as a blob within a function that can easily be tested or refactored?
    • Do the comments add insights around decisions?
    • Are the comments concise enough so they don't break the code flow?
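    As a concrete illustration of the "blob within a function" style described above (the task, function name, and file name are hypothetical, not part of the schema):

```python
def read_config_safely(path):
    # Keep the whole example in one function so a reader can test it,
    # inline it, or refactor it for their own context.
    import json, os
    if not os.path.exists(path):          # decision: fail soft with defaults
        return {}
    with open(path, encoding="utf-8") as f:
        try:
            return json.load(f)
        except json.JSONDecodeError:      # decision: corrupt file -> defaults
            return {}

print(read_config_safely("missing.json"))   # {}
```

    Everything the reader needs is inside one function, and the comments capture the decisions rather than restating the code.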

    Problem Example
    List examples of common mistakes along with issues.

    Test Cases:

    • Are the mistakes clear?
    • Are the patterns and variations of the problems clear?

    Test Case
    Insert relevant setup information.  Write the code to call the functional blob from Solution Example.

    Test Cases:

    • Is setup information included?
    • Does the example call the functional blob in the Solution example?

    Expected Results
    Insert what you expect to see when running the test case.

    Test Cases:

    • If you run the Test Case, do the Expected Results match?

    Scenarios
    Insert bulleted list of key usage scenarios.

    Test Cases:

    • Usage scenarios are a flat list?
    • Usage scenarios are based on the real world?
    • Usage scenarios convey when to use the code?

    More Information
    Optional.  Insert more information as necessary.  This could be background information or interesting additional details.

    Additional Resources
    Optional.  Insert bulleted list of descriptive links to resources that have direct value or relevancy.

    Test Cases:

    • Each link starts with the pattern "For more information on X, see ..."?
    • Each link is directly relevant versus simply nice to have?

    We're using this schema for our Security Code Examples Project.


    Project Success Indicators (PSI)


    In the book, "How To Run Successful Projects III, The Silver Bullet", Fergus O'Connell uses a scoring system to predict project success.

    1. (20) Visualize the goal
    2. (20) Make a list of jobs
    3. (10) There must be one leader
    4. (10) Assign people to jobs
    5. (10) Manage expectations, allow a margin of error, have a fallback position
    6. (10) Use an appropriate leadership style
    7. (10) Know what's going on
    8. (10) Tell people what's going on
    9. (00) Repeat steps 1-8 until step 10
    10. (00) The Prize

    What this means is that having clarity on what you want to accomplish and being able to identify the work to be done (steps 1 and 2) are the most significant indicators of project success.

    After managing several projects over the years, I tend to agree.  Step 2 is particularly interesting because it not only helps you calculate schedule and budget, but it helps you identify the right people for the jobs.
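
    The scoring system might be sketched as a weighted checklist.  The weights come from the list above; rating each step as a 0.0-1.0 fraction is my own assumption, not something from the book.

```python
# Weights from O'Connell's ten steps (steps 9 and 10 carry no weight).
WEIGHTS = {
    "Visualize the goal": 20,
    "Make a list of jobs": 20,
    "There must be one leader": 10,
    "Assign people to jobs": 10,
    "Manage expectations": 10,
    "Use an appropriate leadership style": 10,
    "Know what's going on": 10,
    "Tell people what's going on": 10,
}

def psi(ratings):
    """Project Success Indicator: sum of weight * rating.

    ratings maps a step name to how well the project does it (0.0-1.0);
    unrated steps count as 0.  A perfect project scores 100.
    """
    return sum(WEIGHTS[step] * ratings.get(step, 0.0) for step in WEIGHTS)
```

    Note how a project that nails only steps 1 and 2 already scores 40 of 100, which is the point about clarity of goal and work.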

  • J.D. Meier's Blog

    Ten Steps for Structured Project Management

    • 1 Comment

    In the book "How to Run Successful Projects III, The Silver Bullet", Fergus O'Connell identifies ten steps to structured project management:

    1. Visualize the goal
    2. Make a list of jobs
    3. There must be one leader
    4. Assign people to jobs
    5. Manage expectations, allow a margin of error, have a fallback position
    6. Use an appropriate leadership style
    7. Know what's going on
    8. Tell people what's going on
    9. Repeat steps 1-8 until step 10
    10. The Prize

    These ten steps help make project management consistent, predictable, and repeatable.  The first five steps are about planning your project.  The last five are about implementing the plan and achieving the goal.  These steps are based on 25 years of research into why some projects fail and others succeed.

    I like to checkpoint any project I do against these steps.  I find that when a project is off track, I can quickly pinpoint it to one of the steps above and correct course.

  • J.D. Meier's Blog

    Security Code Examples Project

    • 2 Comments

    I'm working with the infamous Frank Heidt, George Gal and Jonathan Bailey to create a suite of modular, task-based security code examples.  They happen to be experts at finding mistakes in code.  Part of making good code is knowing what bad code looks like and, more importantly, what makes it bad, or what the trade-offs are.  I've also pulled in Prashant Bansode from my core security team to help push the envelope on making the examples consumable.  Prashant doesn't hold back when it comes to critical analysis and that's what we like about him.

    For this exercise, I'm time-boxing the effort to see what value we produce within the time-box.  We carved out a set of candidate code examples by identifying common mistakes in key buckets, including input/data validation, authentication, authorization, auditing and logging, exception management and a few others.  We then prioritized the list and are doing daily drops of code.  The outcome should be some useful examples and an approach for others to contribute examples.

    Sharing a chunk of code is easy.  We quickly learned that sharing insights with the code is not.  Exposing the thinking behind the code is the real value.  We want to make that repeatable.  I think the key is a schema with test cases.

    Here's our emerging schema and test cases ....

    Code Example Schema (Short Form)

    • Title
    • Summary
    • Applies To
    • Objectives
    • Solution Example
    • Problem Example
    • Test Case
    • Expected Results
    • More Information
    • Additional Resources

    For more information on the schema and test cases, see Code Example Schema for Sharing Code Insights.
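
    As a sketch, the short-form schema might be captured as a simple structure; the field names follow the list above, while the types and comments are my own assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CodeExample:
    """Short-form code example schema; field names follow the list above."""
    title: str
    summary: str
    applies_to: List[str]          # e.g. frameworks or versions
    objectives: List[str]
    solution_example: str          # the functional "blob"
    problem_example: str           # the common mistake being avoided
    test_case: str                 # setup plus a call into the solution blob
    expected_results: str
    more_information: str = ""                                     # optional
    additional_resources: List[str] = field(default_factory=list)  # optional
```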

    Today we had a deeply insightful review with Tom Hollander, Jason Taylor, and Paul Saitta.  Jason and Paul are on site while we're solving another class of problems for customers.  They each brought a lot to the table and collectively I think we have a much better understanding of what makes a good, reusable piece of code. 

    We made an important decision to optimize around "show me the code" and then explain it, versus a lot of build-up and then the code.  Our emerging schema has its limits and does not take the place of a How To, guidelines, or a larger reusable block of code, but it will definitely help as we try to share more modular code examples that demonstrate proven practices.

  • J.D. Meier's Blog

    Axiomatic Design

    • 1 Comment

    A few folks have asked me about Axiomatic Design, which I mentioned in my post on Time-boxes, Rhythm, and Incremental Value.  I figure an example is a good entry point.

    An associate first walked me through axiomatic design like this.  You're designing a faucet where you have one knob for hot and one knob for cold.  Why's it a bad design?  He said because each knob controls both temperature and flow.  He said a better design is one knob for temperature and one knob for flow.  This allows for incremental changes in design because the two requirements (temperature and flow) have their independence.  He then showed me a nifty matrix on the board that mathematically *proved* good design.

    At the heart of Axiomatic Design are these two axioms (self-evident truths):

    • Axiom 1 (independence axiom): maintain the independence of the Functional Requirements (FRs).  
    • Axiom 2 (information axiom): minimize the information content of the design.
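
    The faucet walkthrough maps onto a small design matrix: rows are functional requirements (temperature, flow) and columns are design parameters (the knobs).  An uncoupled design has a diagonal matrix.  The code below is my own illustration of the idea, not from the axiomatic design literature.

```python
def is_uncoupled(matrix):
    """A design is uncoupled when its FR-to-DP matrix is diagonal:
    each functional requirement depends on exactly one design parameter."""
    return all(
        (cell != 0) == (i == j)
        for i, row in enumerate(matrix)
        for j, cell in enumerate(row)
    )

# Hot knob / cold knob: each knob affects both temperature and flow,
# so the matrix is fully coupled.
coupled = [[1, 1],   # rows: temperature, flow
           [1, 1]]   # cols: hot knob, cold knob

# Temperature knob / flow knob: each requirement has its own parameter.
uncoupled = [[1, 0],
             [0, 1]]
```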


    For an interesting walkthrough of Axiomatic Design, see "A Comparison of TRIZ and Axiomatic Design".

  • J.D. Meier's Blog

    MSF/VS 2005 and patterns and practices Integration

    • 3 Comments

    Around mid 2004, Randy Miller approached me with "I want to review MSF Agile with you with the idea of incorporating your work."  I didn't know Randy or what to make of MSF Agile, but it sounded interesting. 

    Randy wanted a way to expose our security and performance guidance in MSF.  Specifically he wanted to expose "Improving Web Application Security" and "Improving .NET Application Performance" through MSF.  I was an immediate fan of the idea, because customers have always asked me to find more ways to integrate in the tools.  I was also a fan because my two favorite mantras to use in the hallways are "bake security in the life cycle" and "bake performance in the life cycle".  I saw this as a vehicle to bake security and performance in the life cycle and the tools.

    We had several discussions over a period of time, which was a great forcing function.  Ultimately, we had to figure out a pluggable channel for the guidance, the tools support and how to evolve over time.  My key questions were:

    • how to create a pluggable channel for the p&p guidance?
    • what does a healthy meta-model for the life cycle look like?
    • how to integrate key engineering techniques such as threat modeling or performance modeling?
    • how to get a great experience outside the tool and a "better together" experience when you have the tools?

    These questions led to a ton of insights around meta-models for software development life cycles, context-precision, organizing engineering practices, and several other concepts worth exploring in other posts.

    My key philosophies were:

    • life cycle options over one size fits all
    • pluggable techniques that fit any life cycle over MSF only
    • proven over good ideas
    • modular over monolithic

    Randy agreed with the questions and the philosophies.  We came up with some working models for pluggable guidance and integration.  His job was to make the tools side happen.  My job was to make the guidance side happen.  I now had the challenge and opportunity of making guidance available online and in the tools.  This is how I ended up doing guidance modules for .NET Framework 2.0.  This also drove exposing p&p Security Engineering, which is baked into MSF Agile by default.

    Randy summarized our complementary relationship best ...
    “The Patterns and Practices group produces an important, complimentary component to what we are building into MSF. In Visual Studio, our MSF processes can only go so deep on a topic. Our activities can provide the overview of the steps that a role should do but cannot provide all of the educational background necessary to accomplish the task.
    As many of the practices that we espouse in MSF (such as threat modeling) require this detailed understanding, we are building links into MSF to Patterns and Practice online material. Thus the activities in MSF and the Patterns and Practices enjoy a very complimentary relationship. The Patterns and Practices group continues to be very helpful and our relationship is one of very open communication.“


    Mike Kropp, GM of our patterns & practices team, liked the results ...
    “– it was great to see the progress you’ve made over the past couple of months.  here’s my takeaway on what you’ve accomplished:

    • you’ve agreed on the information model for process and engineering practices – I know this was tough but it’s critical to achieving alignment 
    • the plug-in architecture allows us to evolving the engineering practices and serve them up nicely for the customer – in context 
    • the model you’ve come up with for v1 allows us to adapt & evolve over time (schemas, types, etc..)
      ... this is a great example of how we can work together to help your customers and partners.”

    I remember asking Randy at one point, why did you bet on our security and performance work?  He told me it was because he knew we vetted and proved our work with customers and industry experts.  He also knew we vetted internally across our field, support and the product teams.  I told him if anybody wondered who we worked with, have them scroll down to the contributors and reviewers list for the security work as an example.

    We have more work ahead of us, but I think we've accomplished a lot of what we set out to do, and for that I'm grateful to Randy Miller, David Anderson, and their respective teams.

  • J.D. Meier's Blog

    Time-boxes, Rhythm, and Incremental Value

    • 5 Comments

    Today I had some interesting conversations with Loren Kohnfelder. Every now and then Loren and I play catch up. Loren is former Microsoft. If you don't know Loren, he designed the CLR security model and IE security zones. He created a model for more fine-grained control over security decisions and he's a constant advocate for simplifying security.

    You might think two guys that do security stuff would talk about security. We didn't. We ended up talking about project management, blogging, social software, and where I think next generation guidance is headed. I'll share the project management piece.

    I told Loren I changed my approach to projects. I use time boxes. Simply put, I set key dates and work backwards. I chunk up the work to deliver incremental value within the time boxes. This is a sharp contrast from the past where I'd design the end in mind and then do calculated guesstimates on when I'd be done, how much it would cost and the effort it would take.

    I use rhythm for the time boxes. I use a set of questions to drive the rhythm … When do I need to see results? What would keep the team motivated with tangible results? When do customers need to see something? I realize that when some windows close, they are closed forever. The reality is, as a project stretches over time, risk goes up. People move, priorities change, you name it. When you deal with venture capitalists, a bird in hand today gets you funding for two more in the bush.

    Loren asked me how I know the chunks add up to a bigger value. I told him I start with the end in mind and use a combination of scenarios and axiomatic design. Simply put, I create a matrix of scenarios and features, and I check dependencies across features among the scenarios. What's the minimum set of scenarios my customers need to have something useful? Can I incrementally add a scenario? Can I take away scenarios at later points and get back time or money without breaking my design? Sounds simple, but you'd be surprised how revealing that last test is. With a coupled design, if you cut the wrong scenario, you have a cascading impact on your design that costs you time and money.
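
    The scenario/feature matrix and the "can I cut a scenario" test might be sketched like this.  The scenario names and the coupling check are my own illustration of the idea, not a tool we used.

```python
# Hypothetical scenario-to-feature matrix: which features each scenario needs.
scenarios = {
    "Validate user input": {"regex_engine", "error_reporting"},
    "Authenticate a user": {"credential_store", "error_reporting"},
    "Audit admin actions": {"event_log"},
}

def features_freed_by_cutting(cut, scenarios):
    """Features used only by the cut scenario: cutting it buys back time.

    If this set is small, the scenario's features are coupled to other
    scenarios, so cutting it has a cascading impact and frees little.
    """
    remaining = set().union(*(f for s, f in scenarios.items() if s != cut))
    return scenarios[cut] - remaining
```

    Cutting "Audit admin actions" frees the event log work, while cutting "Validate user input" frees only the regex engine, because error reporting is shared with authentication.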

    We both agreed time boxed projects have a lot of benefits, where some are not obvious. Results breed motivation. By using a time box and rhythms, you change the game from estimating and promising very imprecise variables to a game of how much value can you deliver in a timebox. Unfortunately sometimes contracts or cultures work against this, but I find if I walk folks through it, and share the success stories, they buy in.
