J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    Manage Energy, Not Time

    • 5 Comments

    Manage energy, not time, to get more things done ...  This concept really resonates with me.  I also like it because it can be counterintuitive, or at least non-obvious.

    One way to try to get more things done is to jam more into your schedule.  Yuck!  Unfortunately, that's a fairly common practice.

    I actually have lots of practices for managing time (outcome-based work breakdown structures, managing outcomes vs. activities, prioritizing outcomes based on usage and value, avoiding over-managing minutia, using outcome-based agendas for meetings, distinguishing getting results vs. building connections in meetings, using time-boxes to deliver incremental results in projects, "zero-mail in the inbox" practice … etc.)   While I'm always open to new time management practices, I think I was getting diminishing returns from yet more time management techniques.

    So stepping back, here's the situation … I was using a full arsenal of time management techniques, I was known for getting results, and yet I wanted to reach the next level.  What happened next was, I noticed a common thread among a few very different trainings and books around leadership and results.  Energy was a recurring theme.

    Of course, then it made total sense (the beauty of 20/20 hindsight!).  We've all had that great hour of brilliance or that unproductive work week.  I did a reality check against several past projects.  It was easy for me to see the connection between energy and results, when all else was equal.  The problem was, I didn't have an arsenal of practices for managing energy.  It turns out, I didn't really need to.  Simply knowing what drains me or catalyzes me helped a lot.

    Now that I've been aware of this underlying concept for a while, I have learned a few practices along the way.  One practice I use is to explicitly ask the team when and how often they want to deliver customer results (i.e. how often do they want to see the fruits of their effort?).   I balance this with capability, customer demand, project constraints, and a bunch of other drivers, but the fact that I explicitly try to leverage energy and rhythm helps crank the energy up a notch (and, as a bonus, results).

  • J.D. Meier's Blog

    User Experience, Tech Feasibility and Business Value

    • 2 Comments

    I found a way to explore more and churn less on incubation (i.e. R&D) projects.   It helps to think of your project experiments and key risks in terms of these three categories and in this order:
    1. user experience
    2. technical feasibility
    3. business value

    Sequence matters.  If you don't get the user experience right first, who cares if it's technically feasible?  Once you get the user experience right, meaning customers get value, the business value will follow.

    Here's how I learned this the hard way ...

    My project was time-boxed and budget constrained.  To keep our stakeholders happy, my strategy was to deliver incremental value.  This translated to short ship cycles to test with customers.   We used a rhythm of shipping every two weeks.  This let us track whether we were trending towards or away from the right solutions.

    While this was a relatively short feedback cycle, it wasn't actually efficient.  Most of our prototyping was around exploring user experiences, although we didn't know this at the time.  We were focused on shipping prioritized customer scenarios and features.  Delivering these scenarios and features mixed exploring user experience, tech feasibility, and business value.  It's not a bad mix -- it just wasn't the most efficient.

    Necessity is the mother of invention.  When we weren't "learning" at the pace we expected, we had to find a better way.  We moved to rapid prototyping of the user experience with slideware and walkthroughs.  This meant faster feedback and fewer do-overs than our software prototypes.  It also meant that, in our software prototypes, we could consciously and explicitly focus on technical feasibility.

    User experience was the real challenge and where the most value was.  Spending a week to build a software prototype to test technical feasibility and identify engineering risks makes sense.  Spending a week to build a software prototype to test user experience, sucks.   In other words, what previously took a week or more to build out and test (the user experience), we could now do in a few hours.

    In hindsight, it's easy to see that incubation was about user experience, tech feasibility and business value, even though I didn't realize it at the time.  It's also easy to see now that the dominant challenge was usually user experience.

    The moral of the story isn't that you can use slideware for all your user experience testing.  Instead, the lesson I would pass along is be aware of whether you are really testing user experience, tech feasibility or business value.  By knowing which category you're exploring, you can then pick the right approach.

  • J.D. Meier's Blog

    Timing Managed Code in .NET 2.0

    • 5 Comments

    In .NET 1.1, we timed managed code by wrapping QueryPerformanceCounter and QueryPerformanceFrequency.  The following How To shows how:
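
    If you don't have that How To handy, the basic shape of the wrapper was roughly like this (a minimal sketch from memory; HiResTimer and its members are illustrative names, and error handling is omitted):

     using System.Runtime.InteropServices;

     public static class HiResTimer
     {
         [DllImport("Kernel32.dll")]
         private static extern bool QueryPerformanceCounter(out long count);

         [DllImport("Kernel32.dll")]
         private static extern bool QueryPerformanceFrequency(out long frequency);

         // Current tick of the high-resolution counter
         public static long Counter
         {
             get { long count; QueryPerformanceCounter(out count); return count; }
         }

         // Convert a start/stop pair of counter ticks into elapsed seconds
         public static double ElapsedSeconds(long start, long stop)
         {
             long frequency;
             QueryPerformanceFrequency(out frequency);
             return (stop - start) / (double)frequency;
         }
     }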

    In .NET 2.0, you can use the Stopwatch Class.  I found the following references useful:
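
    For example, a minimal Stopwatch usage looks like this (DoWork is a placeholder for the code you want to time):

     using System;
     using System.Diagnostics;

     static void TimeWork()
     {
         Stopwatch watch = Stopwatch.StartNew();
         DoWork();                                  // placeholder for the code under measurement
         watch.Stop();
         Console.WriteLine("Elapsed: {0} ms", watch.ElapsedMilliseconds);
     }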

  • J.D. Meier's Blog

    Scenario Evaluations for Product Design and Feedback

    • 1 Comment


    When I need to quickly analyze a product and give actionable feedback, I use scenario evaluations.  Scenario evaluations are basically an organized set of scenarios and criteria I use to test and evaluate against.  It's a pretty generic approach, so you can tailor it for your situation.  Here's an example of the frame I used to evaluate the usage of Code Analysis (FX Cop) in some security usage scenarios:

    Scenario Evaluation Matrix
    Development life cycle  

    • Scenario: Dev lead integrates FX Cop in build process.  
    • Scenario: Dev lead integrates FX Cop in design process.  
    • Scenario: Developer uses FX Cop in their development process.  
    • Scenario: Tester integrates FX Cop in testing process.  
    • Scenario: Developer integrates FX Cop in deployment process.  
    • Scenario: Dev lead creates a new FX Cop rule to support custom policies.

    Application type  

    • Scenario: Developer uses FX Cop to evaluate security of web applications.  
    • Scenario: Developer uses FX Cop to evaluate security of desktop applications.  
    • Scenario: Developer uses FX Cop to evaluate security of components.  
    • Scenario: Developer uses FX Cop to evaluate security of web services.

    Input and Data Validation  

    • Scenario: Identify database input that is not validated.  
    • Scenario: Input data is constrained and validated for type, length, format, and range.  
    • Scenario: Identify output sent to untrusted sources that is not encoded.

    Sensitive Data  

    • Scenario: Check secrets are not hard-coded.  
    • Scenario: Check plain text secrets are not stored in memory for extended periods of time.  
    • Scenario: Check sensitive data is not serialized.

    ... etc.


    In this case, I organized the scenarios by life cycle, app type, and security categories.  This makes a pretty simple table.  Explicitly listing the scenarios helps you see where the solution fits in and where it does not, as well as identify opportunities.  A key aspect of effective scenario evaluation is finding the right matrix of scenarios.  For this exercise, some of the scenarios are focused on the user experience of using the tool, while others are focused on how well the tool addresses recommendations.  What's not shown here is that I also list personas and priorities next to each scenario, which are also extremely helpful for scoping.

    Criteria

    Things get interesting when I apply criteria to the scenarios above.  For example:

    • Recommended practice compliance
    • Implementation complexity
    • Quality of documentation/code
    • Developer competence
    • Time to implement

    I then walked the scenarios, testing and evaluating against the criteria.  This produced a nicely organized set of actionable feedback against how well the solution is working (or not).  I think part of today's product development challenge isn't a lack of feedback, but rather a lack of actionable feedback that's organized and prioritized.
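
    To make that concrete, here's a sketch of what a couple of rows might look like once the criteria are applied (the ratings here are hypothetical):

     Scenario                                             Practice Compliance   Implementation Complexity   Time to Implement
     Dev lead integrates FX Cop in build process          High                  Medium                      Low
     Developer uses FX Cop to evaluate web app security   Medium                High                        Medium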

    The beauty of this approach is that you can use this to evaluate your own solutions as well as others.  If you're evaluating somebody else's solution, this actually helps quite a bit because you can avoid making it personal and argue the data.

    The other beauty is that you can scale this approach along your product line.  Create the frames that organize the tests and "outsource" the execution of the scenario evaluations to people you trust.

    I've seen variations of this approach scale down to customer applications and scale up to full-blown platform evaluations for analysts.  Personally, I've used it mostly for performance and security evaluations of various technologies and it helps me quickly find holes I might otherwise miss and it helps me communicate what I find.

  • J.D. Meier's Blog

    Be the Software

    • 1 Comment

    When you're working on an R&D project, how do you shorten the cycles around testing your user experience models?
    ... Be the Software

    That's the advice John Socha-Leialoha, father of Norton Commander, gave me, and it worked like a champ.


    We faced a lot of user experience design issues early in our R&D project.  For example ....

    • how to filter a large picklist of items
    • how to optimize views based on type of item
    • how to integrate social software features (tagging, rating, ...)

    Initially, we did a bunch of whiteboard modeling, talk-throughs, and prototyping.  The problem was the prototypes weren't efficient.  I had a distributed team so it was tough to paint a good picture of the prototype, even when we all agreed to the scenarios and requirements.  The other problem was customer reviews were tough because it was easy to rat-hole or get distracted by partial implementations.  The worst case was when we would finish a prototype and it would be a do-over.

    We experimented with two techniques:

    1. Build modular slideware for visual walkthroughs of task-based features.
    2. Be the software.

    This radically improved customer verification of the user experience and kept our dev team building out the right experience.

    Mocking up in slides is nothing new.  The trick was making it efficient and effective:

    1. We prioritized the scenarios that carried the most user experience risk.
    2. We created modular slide decks.  Each deck focused on exactly one scenario-based task (and scenarios were outcome based).  Modular slide decks are easier to build, review and update.  Our average deck was around six slides.
    3. Each slide in a deck was a single step in the task from the user's perspective.
    4. Each slide had a visual mock up of what the user would see.
    5. To paint some of the bigger stories, we did larger wrapper decks, but only after getting the more fine-grained scenarios right.  Our house was made of stone instead of straw.  In practice, I see a lot of beautiful end-to-end scenario decks that are too big, too fragile, and too make-believe.

    For example, here's the slide list for one deck:

    1. scenario - User subscribes to a guidance feed
    2. summary of steps (flat list of the steps)
    3. Step 1.  user finds a relevant item
    4. Step 2.  user subscribes to view
    5. Step 3.  user displays view in RSS reader

    What originally took a week to prototype, we could mock up in an hour if not minutes.  Do-overs were no longer a problem.  In fact, mocking up alternate solutions was a breeze.  The beauty was we could keep our release rhythm of every two weeks, while we explored solution paths in the background, with less disruption to the dev team. 

    The other beauty was we could use the same deck to walk through with customers and the dev team.  The customers would bang on the user experience.  The developers would bang on the technical feasibility.    For example, show a catalog to customers and they would evaluate the best way to browse and filter.  Show the same screen to the devs and they would evaluate the performance of the catalog.  We would also brainstorm the "what-ifs", such as how will the catalog perform when there's a billion items in it ... etc.  We got better at teasing out the key risks before we hit them.

    Building the software became more an exercise of instantiating the user experience versus leaving too much to be made up on the fly.

    To "be the software", it's as simple as letting the user walk through the user experience of performing that task (via the slides), and, as John put it, "you be the software ... you simply state how the software would respond."   If slides are too heavy, draw on paper or use a whiteboard.  The outcome is the user gets a good sense of what it's like to use your solution, while you get a sense of the user's more specific needs.  The interactive approach produces way more benefits than a simple spec review or 1/2-baked prototype.

  • J.D. Meier's Blog

    MyLifeBits vs. Mental Snapshots

    • 2 Comments

    Yesterday's snowfall in Redmond was interesting for me.  During my drive home, it was pretty dark, icy and cold.  As I came up Old Redmond Road, I saw an object coming towards me, moving somewhat erratically, that looked too small to be a car.

    It wasn't a car at all.  It was a cross-country skier making his way down the middle of the road, followed by a trail of cars.  I'm not sure at what point the street looked like good skiing and I don't know if he had an exit strategy, but he did seem to be having fun and going the speed limit.

    I had my digital camera with me, but I forgot to use it.  I was more focused on skating my car down the right side of the road.  By the time I got home, I had a bunch of "mental snapshots" of various scenes along the way home, but nothing in hand to share. 

    That got me thinking of the MyLifeBits project.  MyLifeBits is effectively software for "lifelogging" or archiving your life on disk.  Although it's a bit extreme for me, there are times where I wish I automatically had more than just the mental snapshots.

  • J.D. Meier's Blog

    Test-Driven Guidance

    • 8 Comments

    When I last met with Rob Caron to walk him through Guidance Explorer, one of the concepts that piqued his interest was test-cases for content.   He suggested I blog it, since it's not common practice and could benefit others.  I agreed.

    If you're an author or a reviewer, this technique may help you.  You can create explicit test-cases for the content.  Simply put, these are the "tests for success" for a given piece of content.  Here's an example of a few test cases for a guideline:

    Test Cases for Guidelines

    Title

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g. Do something, Avoid something)?

    Applies To

    • Do you list technology and version? (e.g. ASP.NET 2.0)

    What to Do

    • Do you state the action to take?
    • Do you avoid stating more than the action to take?

    Why

    • Do you provide enough information for the user to make a decision?
    • Do you state the negative consequences of not following this guideline?

    When

    • Do you state when the guideline is applicable?
    • Do you state when not to use this guideline?

    How

    • Do you state enough information to take action?
    • Do you provide explicit steps that are repeatable?

    Problem Example

    • Do you show a real world example of the problem from experience?
    • If there are variations of the problem, do you show the most common?
    • If this is an implementation guideline, do you show code?

    Solution Example

    • Does the example show the resulting solution if the problem example is fixed?
    • If this is a design guideline, is the example illustrated with images and text?
    • If this is an implementation guideline, is the example in code?

    Additional Resources

    • Are the links from trusted sites?
    • Are the links correct in context of the guideline?

    Related Items

    • Are the correct items linked in the context of the guideline?

    Additional Tests to Consider When Writing a Guideline

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g. Do something, Avoid something)?
    • If the item is a MUST, meaning it is prevalent and high impact, is Priority = p1?
    • If the item is a SHOULD, meaning it has less impact or is only applicable in narrower circumstances, is Priority = p2?
    • If the item is a COULD, meaning it is nice to know about but isn't highly prevalent or impactful, is Priority = p3?
    • If this item will have cascading impact on application design, is Type = Design?
    • If this item should be followed just before deployment, or is concerned with configuration details or runtime behavior, is Type = Deployment?
    • If this item is still in progress or not fully reviewed, is Status = Beta?

    Benefits to Authors and Reviewers
    The test-cases serve as checkpoints that help both authors and reviewers produce more effective guidance.  While you probably implicitly ask many of these questions, making them explicit makes them a repeatable practice for yourself or others.  I've found questions to be the best encapsulation of the test because they set the right frame of mind.  If you're an author, you can start writing guidance by addressing the questions.  If you're a reviewer, you can efficiently check for the most critical pieces of information.  How much developer guidance exists that does not answer the why or when?  Too much.  As I sift through the guidance I've produced over the years, I can't believe how many times I've missed making the why or when explicit.

    I'm a fan of the test-driven approach to guidance, and here are my top reasons why:

    • I can tune the guidance across a team.  As I see patterns of problems in the quality, I can weed it out by making an explicit test case.
    • I can tailor test cases based on usage scenarios.  For example, in order to use our checklist items for tooling scenarios, our problem and solution examples need to have certain traits.  I can burn this into the test cases.
    • I can bound the information.  When is it done and what does "good enough" look like?  The test case sets a bar for the information.
    • I can improve the precision and accuracy of the information.  By precision, I mean filter out everything that's not relevant.  When it comes to technical information to do my job, I'm a fan of density (lots of useful information per square inch of text).  Verbosity is for story time.

    Examples of Test Cases for Guidance
    I've posted examples of our test-cases for guidance on Channel 9.

  • J.D. Meier's Blog

    238 New Items in Guidance Explorer

    • 2 Comments

    Today we published 238 new guidance items in Guidance Explorer.  If you use the offline client, it should automatically synchronize to our online store.

    We're in the process of performing a guidance sweep.  The approach to the sweep is twofold:
    1.  Make existing guidance available in Guidance Explorer.
    2.  Identify user experience issues with the information models and tool design.

    Benefits in GE
    Making existing guidance available in Guidance Explorer involves re-factoring existing security guidance and performance guidance.  The benefits of having the guidance available in Guidance Explorer include:

    • you can view across topics (for example, you can see across the security and the performance guidance)
    • you can filter down to exactly the guidance items you need for a given scenario or task
    • you can build multiple custom views based on how you need to use the guidance
    • you can build guides on the fly (you can save a view as a Word doc or HTML files for example)
    • you can tailor the guidance to your scenario (e.g. save an item into your library in GE and edit the guidance to your liking)
    • you can supplement the guidance for your scenario (because GE is also an authoring environment, you can write your own guidance)

    How We Improve Our Guidance
    An underlying strategy in GE was to help users quickly hunt and gather relevant items rather than try to guess your context and what you need.  In other words, it's a tool to help smart people versus a smart tool that might get in your way.  This was actually an important decision, because we had to pick a problem we knew we could help directly solve and add value.

    The feedback from customers on existing guidance was that it was great stuff, but there were 3 key problems:
    1.  it's a copy+paste exercise to grab just the guidance you need
    2.  it's not atomic enough (monoliths over bite-sized chunks)
    3.  many of the items, while they read well, were not actionable enough

    That's why we took the following measures on our guidance:

    • we split the guidelines and checklists into individual items (we chunked the guidance into units of action)
    • we cleaned up our templates for the various guidance types (we gave the chunked items a common look and feel)
    • we made the schema explicitly include answers to "why" and "how", as well as problem examples and solution examples (we made the chunks more actionable and verifiable)

    As we port existing guidance to our updated schemas, we often find guidance items lacking key information such as why or how, or example code.

    Guidance Explorer in Practice
    What's been great so far is that some folks in the field have let me know how they've been using it for customer engagements.  Apparently the ability to customize guidance has resonated very well.  One consultant in particular has used Guidance Explorer on several engagements to save time and effort.  He uses GE as a general purpose rules and guidelines store.  He's also tailored guidelines and checklists for different audience levels (executives, development leads, architects, developers, PMs) and for different activities (design reviews, code reviews, and deployment reviews).

    A few customers have let me know they are using the UNC share scenario to create guidance libraries for their team development.  They told me they like the idea that it is like a simple typed-wiki that you can act on.  The fact that they can create views and print out docs from the library has been the main appeal.

    The other benefit that more customers are appreciating is the templates for guidelines and checklists.  They like the fact that it starts to simplify authoring as well as sharing prescriptive guidance.  Anybody who has authored guidelines or checklists knows that it's challenging to write actionable guidance that can be reused.  What we're sharing in Guidance Explorer is the benefit of experience and lessons learned over years of producing reusable guidance for various audiences.

    R&D Project
    As a reminder and to keep things in perspective, Guidance Explorer is an R&D project.  While there are immediately tangible benefits, the real focus is on the learnings around user experience so that patterns & practices can improve its ability to author and share guidance, and make progress on helping debottleneck the creation of prescriptive guidance for the software industry.

    Feedback
    You can send feedback on GE directly to the team at getool@microsoft.com

  • J.D. Meier's Blog

    Practices Checker for .NET Framework 1.1

    • 1 Comment

    I've had to hunt down Practices Checker for .NET Framework 1.1 a few times, so now I'm posting it.  It was an R&D project to help automate the search and discovery of potential coding practices and configuration settings that do not adhere to the ASP.NET 1.1 performance checklist.

    It may seem a bit after the fact, given it is .NET 1.1, but there were a few reasons for this:  1)  our focus was more on testing how to codify our library of practices rather than a specific version;  2) we figured adding rules/versions would be easy once we understood the feasibility and work required; 3) our field was still performing code reviews for customers using .NET 1.1 so we could immediately test the impact.

    Key Links

    What You Need to Know

    • It was an R&D project to explore and test options for tooling support around patterns & practices guidelines.
    • Whereas Code analysis is focused on .NET Design Guidelines, Practices Checker is focused on patterns & practices guidelines.
    • The tool was designed for helping manual inspections and audit scenarios.  It was not designed for real-time analysis or during builds/check-ins.
    • It supplements manual inspection by helping you identify potential places in the code that require further analysis.
    • The user interface is a bit rough and the reports need work.

    Key Takeaways
    The takeaways for me in this project were:

    • It's tough to see a bird's eye view across various "rules" libraries (Managed Code Analysis Warnings, patterns & practices guidelines, ... etc.).
    • It's important to know the types of rules your tool does or does not cover (policies, requirements, vulnerabilities, and best practices).
    • It's important to know your various tool options and usage scenarios (e.g. Managed code analysis plugs into check-in policies or part of a build process, custom validators would check deployment at design time, Microsoft Best Practices Analyzer would check deployment at deployment time, Practices Checker would be a manual inspection scenario ... etc.).
    • It's important to know the ecosystem around your "rules" library (e.g. how do you keep your "rules" library up to date).

    I'm continuing to explore various options to manage a library of building codes/practices/rules and then map out which tools can check these items, and where in the life cycle they should be checked.  I've been informally referring to this problem as "policy verification through the life cycle."

  • J.D. Meier's Blog

    Security Toolbar for VS.NET 2005

    • 0 Comments

    I missed blogging this at the time of release.  The Security Toolbar for VS.NET 2005 was a short R&D project to connect developers to security guidance on MSDN from within VS.NET.  The toolbar has direct links to our indexes for Security Engineering, threat modeling, how tos, and checklists.

    While this first version is simply a set of links, it's a stepping stone to adding more functionality.  We wanted a way to deliver incremental functionality with a simple interface.  The toolbar can be notified when there's new content or a new version of the toolbar itself.

    The most interesting learning for me was the trade-offs in user experience when designing a toolbar:

    • Who wants yet another toolbar in VS.NET?
    • Menus are great for dealing with multiple options, but there's something to be said for clicking a button.
    • A button can be more visible than a menu option, but who wants more buttons?
    • Icons or text?

    In the end, we went with a hybrid model to take the best of the best:

    • button + drop-down menu to avoid taking up room and maximize real estate usage
    • icon + text so the button is discoverable and the intent is clear


    Key Links:

  • J.D. Meier's Blog

    Performance Guideline: Use ExcludeSchema Serialization Mode while Exchanging Typed DataSet Over Network

    • 1 Comment

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato. 

    Use ExcludeSchema Serialization Mode while Exchanging Typed DataSet Over Network

    Applies to

    • .NET 2.0

    What to Do
    Set the typed DataSet property SchemaSerializationMode to ExcludeSchema when transferring the typed DataSet over the network, for better performance.

    Why
    The serialization of a typed DataSet can be optimized by setting the SchemaSerializationMode property value to ExcludeSchema. When ExcludeSchema is used, the serialized payload does not contain the schema information: tables, relations, and constraints. This results in a smaller payload for network transfer, which provides better performance.

    When
    If it is required to send the typed DataSet over the network, use ExcludeSchema serialization mode by setting the typed DataSet property SchemaSerializationMode to ExcludeSchema.

    ExcludeSchema is supported only for a typed DataSet. ExcludeSchema should only be used in cases where the schema of the underlying typed DataTables, DataRelations, and Constraints does not get modified.

    How
    The typed DataSet has a new property in .NET Framework 2.0 called SchemaSerializationMode. Set the SchemaSerializationMode property of the typed DataSet to ExcludeSchema before returning it for network transfer, as follows:

     ...
     typedDataSet.EnforceConstraints = false;
     typedDataSet.SchemaSerializationMode = SchemaSerializationMode.ExcludeSchema;
     ...
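
    If you want to sanity-check the payload difference, you can serialize the DataSet both ways and compare sizes. Here's a rough sketch (SalesOrdersTypedDataSet is the hypothetical typed DataSet used in the examples below):

     using System.IO;
     using System.Runtime.Serialization.Formatters.Binary;

     static long GetSerializedSize(SalesOrdersTypedDataSet ds)
     {
         BinaryFormatter formatter = new BinaryFormatter();
         using (MemoryStream stream = new MemoryStream())
         {
             // The stream length reflects the current SchemaSerializationMode setting
             formatter.Serialize(stream, ds);
             return stream.Length;
         }
     }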

    Problem Example
    A .NET 2.0 Windows Forms based application for Order Management gets the list of Sales Orders by a Web Service call.  The Web Service internally makes a business logic call to get the list of Sales Orders in a Typed DataSet, and returns the Typed DataSet to the smart client application.  The code does not exclude schema information while serializing the DataSet, so the payload is larger.  The larger content takes more time to transfer across the network, which has a negative impact on performance.

     ...
     //WebService Method
     public SalesOrdersTypedDataSet GetSalesDetailDataset()
     {
        ...
        SalesOrdersTypedDataSet salesOrdersDS = BusinessLogic.GetSalesOrders();
        ...              
        return salesOrdersDS;
     }
     ...

    Solution Example
    A .NET 2.0 Windows Forms based application for Order Management gets the list of Sales Orders by a Web Service call.  The Web Service internally makes a business logic call to get the list of Sales Orders in a Typed DataSet, and returns the Typed DataSet to the smart client application.  The code uses the SchemaSerializationMode property of the Sales Order Typed DataSet to reduce the size of the serialized content, which improves performance when transferring over the network:

     ...
     //WebService Method
     public SalesOrdersTypedDataSet GetSalesOrdersDataset()
     {
        ...
        SalesOrdersTypedDataSet salesOrdersDS = BusinessLogic.GetSalesOrders();
        salesOrdersDS.EnforceConstraints = false;
        salesOrdersDS.SchemaSerializationMode = SchemaSerializationMode.ExcludeSchema;
        return salesOrdersDS;
     }
     ...

    Additional Resources

  • J.D. Meier's Blog

    Using MadLibs to Create Actionable Principles/Practices

    • 2 Comments


    MadLibs can be an interesting approach to capturing or identifying principles, practices, values ... etc.

    Here's an example attempting to encapsulate principles of agile development:

    • Write code to pass tests over __________
    • Measure progress by working software over __________
    • Improve internal consistency and clarity of code through re-factoring over __________
    • Leverage real-time communication over __________
    • Build releasable software in short time periods over __________
    • Close cooperation between the business and developers over __________
    • Respond to changing and emerging requirements over __________

    Here's one example:

    • Leverage real-time communication over written documentation

    Of course, you can also leave off the front and contrast with the behavior you'd like to change:
    _________ ... over static and stale communication approaches.

    Notice the "...x over ...y" pattern in the practices/principles above.  Sometimes laws/principles/rules come across as common sense or "yeah, I already mostly do that" until you draw a sharp contrast.  It's a challenge to both the principle/practice author (am I striking a precise and accurate chord?) and the principle/practice follower (am I really changing behavior?)

    This approach works for values too: 

    • Action-centric or code-centric over document-centric.

    This is the approach used in the Agile Manifesto http://www.agilemanifesto.org/

    So if you're responsible for identifying/documenting your organization's principles/practices/values, you might try using a MadLibs approach and a wiki.

  • J.D. Meier's Blog

    Performance Guideline: Use HandleCollector API when Managing Expensive Unmanaged Resource Handles

    • 2 Comments

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato.

    Use HandleCollector API when Managing Expensive Unmanaged Resource Handles

    Applies to

    • .NET 2.0

    What to Do
    Use the HandleCollector API when managing expensive unmanaged resource handles from managed code using COM interop.

    Why
    The HandleCollector API helps optimize garbage collector efficiency when working with expensive unmanaged resource handles. Because the garbage collector cannot track memory allocated by unmanaged code, memory management can end up unoptimized. HandleCollector can force a garbage collection when a threshold number of handles is reached, thereby improving the performance of the application.

    When
    If it is required to manage multiple unmanaged resource handles in the application, it is recommended to use the HandleCollector API to improve GC efficiency by ensuring the underlying objects are reclaimed in a timely manner.

    How
    Create an instance of HandleCollector by providing three parameters: handle name (string), initial threshold (int), and maximum threshold (int). The initial threshold is the point at which the GC can start performing garbage collection. The maximum threshold is the point at which the GC must perform garbage collection.

     ...
     static readonly HandleCollector appGuiHandleCollector = new HandleCollector("ApplicationGUIHandles", 5, 30);
     ...

    When the application creates an expensive handle, it should increment the total handle count by invoking the Add method:

     ...
     static IntPtr CreateBrush()
     {
         IntPtr handle = CreateSolidBrush(...);
         appGuiHandleCollector.Add();
         return handle;
     }
     ...

    When the application destroys an expensive handle, it should decrement the total handle count by invoking the Remove method:

     ...
     internal static void DeleteSolidBrush(IntPtr handle)
     {
         DeleteBrush(handle);
         appGuiHandleCollector.Remove();
     }
     ...

    Problem Example
    A .NET 2.0 Windows Forms based application provides a lot of GUI features to facilitate Paint functionality. The code internally uses many unmanaged GUI handles to manage various bitmaps. If the developer fails to destroy a handle by calling the appropriate Dispose method at the right time, the object remains in memory until a garbage collection is performed. Also, since the memory allocation happens within the unmanaged resource, the GC may not even be aware of it, and so never forces a collection. At runtime, it might be necessary to force garbage collection to optimize GC memory management for unmanaged allocations.

    Solution Example
    A .NET 2.0 Windows Forms based application provides a lot of GUI features to facilitate Paint functionality. The code internally uses many unmanaged GUI handles to manage various bitmaps. The application creates an instance of HandleCollector, providing the handle name, initial threshold, and maximum threshold. When the application creates an expensive handle, it increments the total handle count by invoking the Add method. When the handle count reaches the maximum threshold, the GC forces a garbage collection automatically. Also, when a handle is destroyed, the total handle count is reduced, as in the following code:

     ...
     class UnmanagedHandles
     {
         // Create a new HandleCollector for this class's unmanaged handles
         static readonly HandleCollector appHandleCollector = new HandleCollector("ApplicationUnmanagedHandles", 5, 30);

         public UnmanagedHandles()
         {
             // Increment the handle count
             appHandleCollector.Add();
         }

         ~UnmanagedHandles()
         {
             // Decrement the handle count
             appHandleCollector.Remove();
         }
     }
     ...

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use Promotable Transactions when Working with SQL Server 2005

    • 1 Comment

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato. 

    Use Promotable Transactions when Working with SQL Server 2005

    Applies to

    • .NET 2.0

    What to Do
    Use the new .NET 2.0 System.Transactions API for controlling transactions in managed code when working with SQL Server 2005.

    Why
    The System.Transactions API gives you the flexibility to shift between local and distributed database transactions. When used with SQL Server 2005, the System.Transactions API uses the Promotable Transactions feature through the Lightweight Transaction Manager. It does not create a distributed transaction when one is not required, resulting in improved performance.

    Note If the System.Transactions API is used to manage local transactions on SQL Server 2000, the local transaction is automatically promoted to a distributed transaction managed by MSDTC, because SQL Server 2000 does not support Promotable Transactions.

    When
    If it is required to control transactions in managed code while working with SQL Server 2005, use the System.Transactions API for improved performance and flexibility.

    This guideline should not be used when working with SQL Server 2000.

    How
    The following shows how to use "Promotable Transactions".

    When using the System.Transactions API, define the boundary for the required transactions using the TransactionScope class:

     ...
     using (TransactionScope scope = new TransactionScope())
     {
        ...
        // Open Connection and Execute Statements
        ...
        scope.Complete();
     }
     ...

    Within the TransactionScope block, use normal ADO.NET code to execute the statements using Connection, Command, and Execute methods. If the transaction is successful, invoke the TransactionScope.Complete method. If the transaction is unsuccessful, it will be automatically rolled back, because TransactionScope.Complete is never reached in the program flow.
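
    As a side note, you can verify whether promotion to a distributed transaction has occurred by inspecting the ambient transaction. This sketch assumes it runs inside the TransactionScope block:

     using System.Transactions;

     // Guid.Empty means the transaction is still a lightweight local transaction;
     // any other value means it has been promoted to a distributed (MSDTC) transaction.
     Guid distributedId = Transaction.Current.TransactionInformation.DistributedIdentifier;
     Console.WriteLine("Promoted: {0}", distributedId != Guid.Empty);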

    Problem Example
    A web application for Online Shopping provides a user interface to purchase items. Once items are purchased, an entry should be added for billing in a Billing database table in a SQL Server 2005 database. At the same time, the stock of the item should be reduced by the number of units sold in an Item Quantity database table. The entire operation needs to be performed in a single transaction to maintain data integrity.

    The application follows the traditional approach of using the SqlTransaction API, which supports only local transactions. If distributed database transactions are required, the code has to be changed and recompiled, which breaks the principle of flexibility and agility in the design. The following code illustrates the problem; it forces local transactions and compromises the flexibility to change to distributed transactions:

     ...
     using (SqlConnection conn = new SqlConnection(dbConnStr))
     {
         conn.Open();

         SqlCommand cmd = conn.CreateCommand();
         SqlTransaction trans;

         // Start a local transaction.
         trans = conn.BeginTransaction("NewTransaction");

         cmd.Connection = conn;
         cmd.Transaction = trans;

         try
         {
             cmd.CommandText = "Insert Statement...";
             cmd.ExecuteNonQuery();
             cmd.CommandText = "Update Statement...";
             cmd.ExecuteNonQuery();

             // Attempt to commit the transaction.
             trans.Commit();
         }
         catch (Exception ex)
         {
             // Attempt to roll back the transaction.
             try
             {
                 trans.Rollback();
             }
             catch (Exception ex2)
             {
                 // Handle any errors that may have occurred.
             }
         }
     }
     ...

    Instead, if the new System.Transactions API is used, distributed transactions are supported as well.

    Solution Example
    A web application for Online Shopping provides a user interface to purchase items. Once items are purchased, an entry should be added for billing in the Billing database table on SQL Server 2005. At the same time, the stock of the item should be reduced by the number of units sold in the Item Quantity database table. The entire operation needs to be performed in a single transaction to maintain data integrity. The new System.Transactions API is used to provide flexibility without compromising performance. The System.Transactions API has a Promotable Transactions feature when used with SQL Server 2005; it determines at runtime whether a local or distributed transaction is needed, for improved performance:

     ...
     using (TransactionScope scope = new TransactionScope())
     {
         using (SqlConnection conn = new SqlConnection(dbConnStr))
         {
             SqlCommand cmd1 = conn.CreateCommand();
             cmd1.CommandText = "Insert Statement...";

             SqlCommand cmd2 = conn.CreateCommand();
             cmd2.CommandText = "Update Statement...";

             conn.Open();
             cmd1.ExecuteNonQuery();
             cmd2.ExecuteNonQuery();
             conn.Close();
         }
         scope.Complete();
     }
     ...

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use AddMemoryPressure while consuming Unmanaged Objects through COM Interop

    • 2 Comments

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato. 

    Use AddMemoryPressure while consuming Unmanaged Objects through COM Interop

    Applies to

    • .NET 2.0

    What to Do
    Use the .NET 2.0 CLR APIs GC.AddMemoryPressure and GC.RemoveMemoryPressure when consuming unmanaged objects from managed code through COM interop.

    Why
    The garbage collector cannot track memory allocated by unmanaged code; it only tracks memory allocated by managed code.

    If there is a large amount of memory allocation (such as images or video data) in the unmanaged code, the GC will see only the references to the unmanaged objects, not the size of the memory those references occupy.

    Since the GC may be unaware of the large memory allocation within unmanaged code, it will not know that a collection should be executed, because it does not feel any "memory pressure" that would cause a collection.

    The AddMemoryPressure method can be used to inform the GC how much unmanaged memory a managed object will be referencing when it consumes unmanaged objects from managed code. This proactive indication improves the GC heuristics and collection algorithm, which improves memory management and performance.

    When
    If the unmanaged objects invoked from managed code allocate a large amount of unmanaged memory at runtime, the GC should be informed of the total memory consumed by the managed and unmanaged code, by invoking the AddMemoryPressure method, to improve the GC's memory management.

    How
    Applying memory pressure is the technique by which the GC can be informed about memory allocations that might be performed by unmanaged code within a managed code wrapper.

    AddMemoryPressure should be used to inform the GC about the probable memory allocation by the unmanaged code:

     Bitmap (string path )
     {
        _size = new FileInfo(path).Length;
        GC.AddMemoryPressure(_size);
        // other work
     }

    RemoveMemoryPressure should be used to tell the GC to remove the memory pressure when destroying the managed wrapper object that was consuming the unmanaged objects:

     ~Bitmap()
     {
        GC.RemoveMemoryPressure(_size);
        // other clean up code
     }

    Note For every AddMemoryPressure call, there must be a matching RemoveMemoryPressure call that removes exactly the same amount of memory pressure as was added. Failing to do so can adversely affect the performance of the system in applications that run for long periods of time.

    Problem Example
    A .NET 2.0 application uses unmanaged objects that allocate a large amount of memory at runtime to load images and video data.

    The allocation happens in unmanaged memory and occupies a large amount of system memory. Since the garbage collector cannot track the memory allocated by unmanaged code, the application might run low on memory without ever triggering a garbage collection, degrading the performance of the application.

     class Bitmap
     {
       private long _size;
       Bitmap (string path )
       {
          _size = new FileInfo(path).Length;
           // other work
       }
       ~Bitmap()
       {
          // other work
       }
     }

    Solution Example
    A .NET 2.0 application uses unmanaged objects that allocate a large amount of memory at runtime to load images and video data.

    Using AddMemoryPressure and RemoveMemoryPressure informs the GC about the memory allocation and deallocation performed by the unmanaged code within the managed wrapper. The GC then knows to perform a collection when there is memory pressure, and works more efficiently.

     class Bitmap
     {
       private long _size;
       Bitmap (string path )
       {
          _size = new FileInfo(path).Length;
          GC.AddMemoryPressure(_size);
          // other work
       }
       ~Bitmap()
       {
          GC.RemoveMemoryPressure(_size);
          // other work
       }
     }

    Additional Resources

  • J.D. Meier's Blog

    How To Write Prescriptive Guidance Modules

    • 3 Comments

    Here's a quick reference for writing guidance modules.  Guidance modules are simply what I call the prescriptive guidance documents we write when creating guides such as Improving Web Application Security, Improving .NET Application Performance and .NET 2.0 Security Guidance.

     How To Write Prescriptive Guidance Modules

    Summary of Steps

    • Step 1. Identify and prioritize the tasks and scenarios.
    • Step 2. Identify appropriate guidance module types.
    • Step 3. Create the guidance modules.

    Step 1. Identify and prioritize the tasks and scenarios.
    Task-based content is "executable".  You can take the actions prescribed in the guidance and produce a result.  Scenarios bound the guidance to a context. This helps for relevancy and for evaluating the appropriateness of the recommendations.

    Good candidates for tasks and scenarios include:

    • Problems and pain points
    • Key engineering decisions
    • Difficult tasks
    • Recurring engineering questions
    • Techniques

    In many ways, the value of your prescriptive guidance is a measure of the value of the problem multiplied by how many people share the problem.

    Step 2. Identify appropriate guidance module types.
    Choose the appropriate guidance module type for the job:

    • Guidelines - Use "guidelines" to convey the "what to do" and "why", with minimal "how" (How Tos do the deep how), in a concise way, bound to scenarios/tasks.
    • Checklist Items - Present a verification to perform ("what to check for", "how to check", and "how to fix").
    • How Tos - Use "how tos" to effectively communicate a set of steps to accomplish a task (appeals to the "show me how to get it done").

    For a list of guidance module types, templates, and examples see Templates for Writing Guidance at http://www.guidancelibrary.com

    Step 3. Create the guidance modules.
    Once you've identified the prioritized scenarios, tasks, and guidance types, you create the guidance.   This involves the following:

    • Identify specific objectives for your guidance module.  What will the users of your guidance module be able to do?
    • Collect and verify your data points and facts.
    • Solve the problem your guidance module addresses.  This includes performing the solution and testing your solution.
    • Document your solution.
    • Checkpoint your solution with some key questions:  when would somebody use this?  why? how?
    • Review your solution with appropriate reviewers.

    Examples of Guidance Modules

  • J.D. Meier's Blog

    Writing Prescriptive Guidelines

    • 2 Comments


    These are some practices we learned in the guidance business to write more effective guidelines:

    • Follow the What To Do, Why and How Pattern
    • Keep it brief and to the point
    • Start with Principle Based Recommendations
    • Provide Context For Recommendations
    • Make the Guidelines Actionable
    • Choose Warm vs. Cold Handoffs
    • Create Thread Killers

    Follow the What to Do, Why and How Pattern
    Start with the action to take -- the "what to do".  This should be pinned against a context.  Context includes which technologies and situations the guidance applies to. Be sure to address "why" as well, which exposes the rationale. Rationale is key for the developer audience.  It's easy to find guidelines missing context or rationale.  Some of the worst guidelines leave you wondering what to actually do.

    Keep It Brief and to the Point
    Avoid "blah, blah, blah". Say what needs to be said as succinctly as possible. Ask "why, why, why?" to everything you write - every paragraph and every sentence. Does it add value? Does it help the reader? Is it actionable?  Often the answer is no. It's hard to do, but it keeps the content "lean and mean".

    Start with Principle-Based Recommendations
    A good principle-based recommendation addresses the question: "What are you trying to accomplish?". Expose guidance based on engineering versus implementation or technology of the day. This makes the guidance less volatile and arguably more useful.  An example principle-based recommendation would be: Validate Input for Length, Range, Format, and Type.  You can then build more specific guidelines for a technology or scenario from this baseline recommendation.

    Provide Context for Recommendations
    Avoid blanket recommendations. Recommendations should have enough context to be prescriptive.  Sometimes this can be as simple as prefixing your guideline with an "if" condition.

    Make the Guidelines Actionable
    Be prescriptive, not descriptive. The guideline should be actionable vs. just interesting information. Note that considerations are actions, provided you tell the reader what to consider, when, and why. As a general rule, avoid providing too much background or conceptual information. Point off to primer articles, books, etc., for background.

    Choose Warm vs. Cold Handoffs
    If you are sending the reader to another link for a specific piece of information, be explicit.  It's as simple as adding "For more information on xyz, see ..." before your link.  That's a warm hand off.  A cold hand off is simply having a list of links and expecting the reader to follow the links and figure out why you sent them there.  The worst is when the links are irrelevant and you simply added links because you could.

    Create Thread Killers
    A "thread killer" is a great piece of information that when quoted or referred to can stop a technical discussion or alias question (a discussion thread) with no further comments. Look at the alias, understand the questions being asked, and tackle the root causes and underlying problems that cause these questions. (Think of the questions as symptoms). Make guidance that nails the problem. A great endorsement is when your "thread killer" is used to address FAQs on discussion aliases and forums.

    Where to See This in Practice
    The following are examples of prescriptive guidelines based on these practices:

  • J.D. Meier's Blog

    Performance Guideline: Use Generics To Eliminate the Cost Of Boxing, Casting and Virtual calls

    • 2 Comments

    Here's the next .NET Framework 2.0 performance guideline for review from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato.

    Use Generics To Eliminate the Cost Of Boxing, Casting and Virtual calls

    Applies to

    • .NET 2.0

    What to Do

    • Use generics to eliminate the cost of boxing, casting, and virtual calls

    Why
    Generics can be used to improve performance by avoiding runtime boxing, casting, and virtual calls.

    The List<Object> class gives better performance than ArrayList.  An example benchmark of a quick-sort of an array of one million integers may show the generic method is 3 times faster than the non-generic equivalent. This is because boxing of the values is avoided completely. In another example, the quick-sort of an array of one million string references with the generic method was 20 percent faster, due to the absence of a need to perform type checking at run time.   Your results will depend on your scenario. Other benefits of using generics are compile-time type checking, binary code reuse, and clarity of the code.
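
    If you want to see the boxing cost on your own hardware, a quick micro-benchmark along these lines will do (a rough sketch; BoxingBenchmark is just an illustrative harness, and absolute numbers will vary by machine):

     using System;
     using System.Collections;
     using System.Collections.Generic;
     using System.Diagnostics;

     class BoxingBenchmark
     {
         static void Main()
         {
             const int count = 1000000;

             Stopwatch watch = Stopwatch.StartNew();
             ArrayList untyped = new ArrayList();
             for (int i = 0; i < count; i++) untyped.Add(i);   // each Add boxes the int
             watch.Stop();
             Console.WriteLine("ArrayList: {0} ms", watch.ElapsedMilliseconds);

             watch = Stopwatch.StartNew();
             List<int> typed = new List<int>();
             for (int i = 0; i < count; i++) typed.Add(i);     // no boxing
             watch.Stop();
             Console.WriteLine("List<int>: {0} ms", watch.ElapsedMilliseconds);
         }
     }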


    When
    Use generics when defining code (a class, structure, interface, method, or delegate) that has to be used by different consumers with different types.

    Consider replacing code (a class, structure, interface, method, or delegate) that implicitly casts any type to System.Object and forces the consuming code to cast the Object references back to the actual data types.

    Consider a specialized class only if there is a considerable number of items (>500) to store; otherwise just use the default List<Object> class.
    List<Object> gives better performance than ArrayList because List<Object> has a better internal implementation for enumeration.


    How
    The following steps show how to use generics for various types.

    Define a generic class as follows:

        public class List<T>

    Define methods and variables in the generic class as follows:

        public class List<T>
        {
            private T[] elements;
            private int count;

            public void Add(T element)
            {
                if (count == elements.Length)
                    Resize(count * 2);   // Resize is a helper that grows the backing array
                elements[count++] = element;
            }

            public T this[int index]
            {
                get { return elements[index]; }
                set { elements[index] = value; }
            }
        }

    Access the generic class with the required type as follows:

        List<int> intList = new List<int>();
        intList.Add(1);
        ...
        int i = intList[0];

    Note The .NET Framework 2.0 provides a suite of generic collection classes in the class library. Your applications can further benefit from generics by defining your own generic code.

    Problem Example
    An Order Management Application stores domain data (Item, Price, etc.) in a cache using an ArrayList. ArrayList accepts any type of data and implicitly casts it to Object. Numeric data like order number and customer ID is wrapped from primitive types to Object (boxing) when stored in the ArrayList. Consumer code has to explicitly cast the data from Object back to the specific data type when retrieving it from the ArrayList. Boxing and unboxing require a lot of operations, like memory allocation, memory copying, and garbage collection, which in turn reduce the performance of the application.

    Example code snippet to add items to the ArrayList and to get/set items from it:

        ArrayList intList = new ArrayList();

        // Cache data in the array
        intList.Add(45672);           // Argument is boxed
        intList.Add(45673);           // Argument is boxed

        // Retrieve data from the cache
        int orderId = (int)intList[0];  // Explicit un-boxing & casting required

    Solution Example
    An Order Management Application stores the domain data (Item, Price, etc.) in a cache. Using generics avoids the need for runtime boxing and casting and ensures compile-time type checking.

    Implement the defining code for the generic class. A custom generic class needs to be implemented only if required; otherwise the default List<T> class can be used.
        // Use <T> to allow consumer code to specify the required type
        class OrderList<T>
        {
            // Consumer-specified type array to hold the data
            private T[] elements = new T[16];
            private int count;

            // No implicit casting while adding to the array
            public void Add(T element)
            {
                // Add data to the array as type T, not as a boxed object
                elements[count++] = element;
            }

            // Indexer to set or get data
            public T this[int index]
            {
                // Returns the data as type T
                get { return elements[index]; }

                // Sets the data as type T
                set { elements[index] = value; }
            }
        }

    Instantiate the class, specifying int as the type argument.
        OrderList<int> intList = new OrderList<int>();


        // Cache data in the array
        intList.Add(45672);           // No boxing required
        intList.Add(45673);           // No boxing required

        // Retrieve data from the cache
        int orderId = intList[0];  // No un-boxing or casting required

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use Token Handle Resolution API to get the Metadata for Reflection

    • 0 Comments

    Here's the next .NET Framework 2.0 performance guideline in the series from Prashant Bansode, Bhavin Raichura, Girisha Gadikere and Claudio Caldato.

     ...

    Use Token Handle Resolution API to get the Metadata for Reflection

    Applies to

    • .NET 2.0

    What to Do
    Use the new .NET 2.0 token handle resolution API, RuntimeMethodHandle, to get the metadata of members when using Reflection.

    Why
    RuntimeMethodHandle is a small, lightweight structure that defines the identity of a member. It is a trimmed-down version of a MemberInfo, providing the metadata for methods and data without consuming space in the .NET 2.0 back-end cache.

    The .NET Framework also provides GetXxx API methods (e.g., GetMethod, GetProperty, GetEvent) to determine the metadata of a given Type at runtime. There are two forms of these APIs: the non-plural form, which returns one MemberInfo (such as GetMethod), and the plural form (such as GetMethods).

    The .NET Framework implements a back-end cache for the MemberInfo metadata to improve the performance of the GetXxx APIs.

    The caching policy caches all members irrespective of whether a plural or non-plural GetXxx API is called. Such an eager caching policy degrades the performance of non-plural GetXxx API calls, which pay to cache metadata they never asked for.

    RuntimeMethodHandle is approximately twice as fast as the equivalent GetXxx API call if the MemberInfo is not present in the back-end .NET cache.

    When
    If you need to get the metadata of a given Type at runtime, use the new .NET 2.0 token handle resolution API, RuntimeMethodHandle, for better performance than the traditional GetXxx API calls.

    How
    The following code snippet shows how to get the RuntimeMethodHandle:

     ...
        // Obtaining a handle from a MemberInfo
        RuntimeMethodHandle handle = typeof(D).GetMethod("MyMethod").MethodHandle;
     ...

    The following code snippet shows how to get the MemberInfo metadata from the handle:

     ...
        // Resolving the Handle back to the MemberInfo
        MethodBase mb = MethodBase.GetMethodFromHandle(handle);
     ...

    Problem Example
    A Windows Forms based application needs to dynamically load plug-in Assemblies and the available Types. The application also needs to determine the metadata of a given Type (methods, members, etc.) at runtime to execute Reflection calls.

    The plug-in exposes a Type CustomToolBar, which is derived from Type BaseToolBar. The CustomToolBar Type has 2 methods - PrepareCommand and ExecuteCommand. The BaseToolBar Type has 3 methods - Initialize, ExecuteCommand and CleanUp. To execute the ExecuteCommand method of type CustomToolBar at runtime, the application gets the metadata of that method using the GetXxx API, as shown in the following code snippet.

    Since the .NET Framework implements an eager caching policy, the call to get the metadata for the single ExecuteCommand method will also cache the metadata of all five methods of the CustomToolBar and BaseToolBar Types.

        MethodInfo mi = typeof(CustomToolBar).GetMethod("ExecuteCommand");

    The .NET Framework implements a back-end cache for the MemberInfo metadata to improve the performance of the GetXxx APIs. The caching policy caches all members by default, irrespective of whether a plural or non-plural API is called. Such an eager caching policy degrades the performance of non-plural GetXxx API calls.

    Solution Example
    A Windows Forms based application needs to dynamically load plug-in Assemblies and the available Types. The application also needs to determine the metadata of a given Type (methods, members, etc.) at runtime to execute Reflection calls.

    The plug-in exposes a Type CustomToolBar, which is derived from Type BaseToolBar. The CustomToolBar Type has 2 methods - PrepareCommand and ExecuteCommand. The BaseToolBar Type has 3 methods - Initialize, ExecuteCommand and CleanUp. To execute the ExecuteCommand method of type CustomToolBar at runtime, the application gets the metadata of that method using RuntimeMethodHandle, as shown in the following code snippet. This can improve the performance of the application:

     ...
        // Obtaining a handle from a MemberInfo
        RuntimeMethodHandle handle = typeof(CustomToolBar).GetMethod("ExecuteCommand").MethodHandle;
        // Resolving the Handle back to the MemberInfo
        MethodBase mb = MethodBase.GetMethodFromHandle(handle);
     ...

    If the appropriate MemberInfo is already in the back-end .NET cache, the cost of going from a handle to a MemberInfo is about the same as using one of the GetXxx API calls. If the MemberInfo is not available in the cache, RuntimeMethodHandle is approximately twice as fast as the GetXxx API call.

    Additional Resources

  • J.D. Meier's Blog

    Performance Guideline: Use TryParse Method to Avoid Unnecessary Exceptions

    • 2 Comments

    Prashant Bansode, Bhavin Raichura, and Girisha Gadikere teamed up with Claudio Caldato (CLR team) to create some new performance guidelines for .NET Framework 2.0.  The guidelines use our new guideline template.

     ...


    Use TryParse Method to Avoid Unnecessary Exceptions

    Applies to

    • .NET 2.0

    What to Do
    Use the TryParse method instead of the Parse method for converting string input to a valid .NET data type. For example, use the TryParse method when converting a string input to an integer data type.

    Why
    The Parse method will throw an exception - ArgumentNullException, FormatException, or OverflowException - if the string representation cannot be converted to the respective data type.

    Unnecessarily throwing and handling exceptions such as these has a negative impact on the performance of the application. The TryParse method does not throw an exception if the conversion fails; instead it returns false, which avoids the exception-handling performance hit.

    When
    If you need to convert a string representation of a data type to a valid .NET data type, use the TryParse method instead of the Parse method to avoid unnecessary exceptions.

    How
    The following code snippet illustrates how to use the TryParse method:

        ...
        Int32 intResult;
        if (Int32.TryParse(strData, out intResult))
        {
            // process intResult
        }
        else
        {
            // error handling
        }
        }
        ...

    Problem Example
    Consider a Windows Forms application for creating an invoice. The application takes user inputs for multiple items, such as product name, quantity, price per unit, and date of purchase. The user provides these inputs in text boxes. The user can enter multiple items in an invoice at a given time and then finally submit the data for automatic billing calculation. The application internally needs to convert the string input data to an integer (assume this for simplicity). If the user enters invalid data in a text box, the system will throw an exception, which has an adverse impact on the performance of the application.

        ...
        private Int32 ConvertToInt(string strData)
        {
            try
            {
                  return Int32.Parse(strData);
            }
            catch (Exception)
            {
                  return 0; //some default value
            }
        }
        ...

    Solution Example
    Consider a Windows Forms application for creating an invoice. The application takes user inputs for multiple items, such as product name, quantity, price per unit, and date of purchase. The user provides these inputs in text boxes. The user can enter multiple items in an invoice at a given time and then finally submit the data for automatic billing calculation. The application internally needs to convert the string input data to an integer (assume this for simplicity). If the user enters invalid data in a text box, the system will not throw unnecessary exceptions, which improves the application's performance.

        ...
        private Int32 ConvertToInt(string strData)
        {
            Int32 intResult;
            if (Int32.TryParse(strData, out intResult))
            {
                return intResult;
            }
            return 0;  // some default value
        }
        ...

    Additional Resources

     

  • J.D. Meier's Blog

    Guidelines Template

    • 1 Comments

    Sometimes the guidelines in our guidance such as Improving Web Application Security, Improving .NET Application Performance and .NET 2.0 Security Guidance are missing some of the important details such as when, why or how.  To correct that, we've created a template that explicitly captures the details.  We use this template in Guidance Explorer.  I'll also be posting .NET Framework 2.0 performance examples that use this new template.

    Guideline Template

    • Title
    • Applies To
    • What to Do
    • Why
    • When
    • How
    • Problem Example
    • Solution Example
    • Related Items

    Test Cases

    The test cases are simply questions we use to help improve the guidance.  The guidelines author can use the test cases to check that they are putting the right information into the template.  Reviewers use the test cases as a check against the content to make sure it's useful. 

    Title

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?

    Applies To

    • Are the versions clear?

    What to Do

    • Do you state the action to take?
    • Do you avoid stating more than the action to take?

    Why

    • Do you provide enough information for the user to make a decision?
    • Do you state the negative consequences of not following this guideline?

    When

    • Do you state when the guideline is applicable?
    • Do you state when not to use this guideline?

    How

    • Do you state enough information to take action?
    • Do you provide explicit steps that are repeatable?

    Problem Example

    • Do you show a real world example of the problem from experience?
    • If there are variations of the problem, do you show the most common?
    • If this is an implementation guideline, do you show code?

    Solution Example

    • Does the example show the resulting solution if the problem example is fixed?
    • If this is a design guideline, is the example illustrated with images and text?
    • If this is an implementation guideline, is the example in code?

    Additional Resources

    • Are the links from trusted sites?
    • Are the links correct in context of the guideline?

    Related Items

    • Are the correct items linked in the context of the guideline?

    Additional Tests to Consider When Writing a Guideline

    • Does the title clearly state the action to take?
    • Does the title start with an action word (e.g., Do something, Avoid something)?
    • If the item is a MUST, meaning it is prevalent and high impact, is Priority = p1?
    • If the item is a SHOULD, meaning it has less impact or is only applicable in narrower circumstances, is Priority = p2?
    • If the item is a COULD, meaning it is nice to know about but isn't highly prevalent or impactful, is Priority = p3?
    • If this item will have cascading impact on application design, is Type = Design?
    • If this item should be followed just before deployment, or is concerned with configuration details or runtime behavior, is Type = Deployment?
    • If this item is still in progress or not fully reviewed, is Status = Beta?
  • J.D. Meier's Blog

    Guidance Explorer Beta 2 Release

    • 2 Comments

    We released Guidance Explorer Beta 2 on CodePlex.  Guidance Explorer is a patterns & practices R&D project to improve finding, sharing and creating prescriptive guidance.  Guidance Explorer features modular, actionable guidance in the form of checklists, guidelines, how tos, patterns … etc.

    What's New with This Release

    • Guidance Explorer now checks for updated guidance against an online guidance store.
    • Source code is available on CodePlex so you can shape or extend Guidance Explorer for your scenario.
    • Guidance Explorer Web Edition is available for quick browsing of the online guidance store.

    Learn More

    Resources

    Feedback
    Send your feedback to getool@microsoft.com.

  • J.D. Meier's Blog

    ASP.NET 2.0 Internet Security Reference Implementation

    • 8 Comments

    The ASP.NET 2.0 Internet Security Reference Implementation is a sample application complete with code and guidance.  Our purpose was to show patterns & practices security guidance in the context of an application scenario. We used Pet Shop 4 as the baseline application and tailored it for an internet facing scenario.  The application uses forms authentication with users and roles stored in SQL.

    Home Page/Download

    3 Parts
    The reference implementation contains 3 parts:

    1. VS 2005 Solution and Code 
    2. Reference Implementation Document
    3. Scenario and Solution Document 

    The purpose of each part is as follows:

    1. VS 2005 Solution and Code - includes the Visual Studio 2005 solution, the reference implementation doc, and the scenario and solution doc.
    2. Reference Implementation Document (ASP.NET 2.0 Internet Security Reference Implementation.doc) - is the reference implementation walkthrough document containing implementation details and key decisions we made along the way.  Use this document as a fast entry point into the relevant decisions and code.
    3. Scenario and Solution Document (Scenario and Solution - Forms Auth to SQL, Roles in SQL.doc) - is the more general scenario and solution document containing key decisions that apply to all applications in this scenario.

    Key Engineering Decisions Addressed
    We grouped the key problems into the following buckets:

    • Authentication
    • Authorization
    • Input and Data Validation
    • Data Access
    • Exception Management
    • Sensitive Data
    • Auditing and Logging

    These are actionable, potential high risk categories.  These buckets represent some of the more important security decisions you need to make that can have substantial impact on your design.  Using these buckets made it easier to both review the key security decisions and to present the decisions for fast consumption.

    Getting Started

    1. Download and install the ASP.NET 2.0 Internet Security Reference Implementation.
    2. Use ASP.NET 2.0 Internet Security Reference Implementation.doc to identify the code you want to explore.
    3. Open the solution, Internet Security Reference Implementation.sln, and look into the details of the implementation.
    4. If you're interested in testing SSL, follow the instructions in SSL Instructions.doc.

     

  • J.D. Meier's Blog

    patterns & practices Guiding Principles

    • 1 Comments

    As part of our technical strategy on the patterns & practices team, we created a set of guiding principles for our product development teams:

    • Long-term customer success
    • Help customers balance the tension between new product features and application portfolio stability
    • Collaborative, transparent execution
    • Explicit intent and rigorous prioritization
    • Quality over scope
    • Context precision over one-size-fits-all
    • Framework for evolution and innovation over fixed solutions
    • Skeletal over full-featured
    • Modular over monolithic
    • Easy to adopt, easy to adapt, easy to consume incrementally

    If you're familiar with Stephen Covey, he espoused using principles as a way to govern actions without needing an explicit rule for every situation.  An advantage of this is that you leave flexibility in the tactics while still guiding the outcome.

    To create the principles, we used the approach for the values in the Agile Manifesto, which contrasts one value with another (e.g., Customer collaboration over contract negotiation). Using this approach, we thought of the outcomes we wanted to move away from and the outcomes we wanted to achieve.

  • J.D. Meier's Blog

    Test Our patterns and practices Guidance Explorer

    • 3 Comments

    I've been relatively quiet these past few weeks, getting ready to release our patterns & practices Guidance Explorer. Guidance Explorer is a new, experimental tool from the patterns & practices team that radically changes the way you consume guidance as well as the way we create it. If you’ve felt overwhelmed looking across multiple sources for good security or performance guidance then Guidance Explorer is the tool for you. You can use one tool to access a comprehensive, up to date, collection of modular guidance that will help you with your tough development tasks and design decisions. Guidance Explorer will allow you to create and distribute a set of standard best-practices that your team can adhere to for performance and security. The project includes the tool, Guidance Explorer, and a library of guidance for developers, Guidance Library. The Guidance Library will be updated weekly, ensuring you always have the most up to date information.

    What's In It For You

    • If you build software with the .NET Framework, use Guidance Explorer to find the "building codes" for the .NET technologies, in terms of security and performance. They are complementary to FxCop rules.
    • If you want to set development standards and best practices for your team, use Guidance Explorer views to build and then distribute your team’s standard rule-set.
    • If you author guidance for development teams, use Guidance Explorer to create guidance for your teams in a more efficient and effective way by leveraging our templates, information models, key concepts, and tooling support.

    What is Guidance Explorer
    Guidance Explorer is a client-side tool that lets you find, filter, and sort guidance. You can organize custom guidance collections into persistent views and share these views with others. You can also save these custom views of the guidance as indexed Word or HTML documents. You can browse guidance by source, such as the patterns & practices team. You can also browse by topic, such as security or performance, or by technology, such as ASP.NET 1.1 or ASP.NET 2.0. Within a given topic or technology, you can then browse guidance within more fine-grained categories. For example, within security, you can browse by input/data validation, authentication, authorization, etc.

    Guidance Explorer was designed to simplify the creation and distribution of custom guidance. To author guidance, Guidance Explorer includes a simple editor that uses templates for guidance. Each template includes a schema and test cases. For example, each guideline item should include what to do, why, how, a problem example, and a solution example, as well as related items and where to go for more information. We created these templates by analyzing what's working and what's not across our several thousand pages of security and performance guidance from the past several years.

    What is Guidance Library
    Guidance Library is the collection of knowledge that is viewable by Guidance Explorer. It's organized by types, such as guidelines and checklists. Each type has a specific schema and test cases against that schema to help enforce quality. The library is also organized by topics, such as security and performance. The library is extensible by design so that we can add new types and new topics that prove to be useful.

    Not every type of guidance goes into the guidance library. For example, you don't use it to find monolithic guides or PDFs. The most important criteria for the modules in the library is that they are atomic units of action. They can directly be tested for relevancy. They can also be tested for the results they produce and how repeatable those results are.

    How To Get Started 

    1. Join the Guidance Explorer project
    2. Download Guidance Explorer
    3. Watch the video tutorials

    The key to getting started is getting the tool up and running so you can play with it, and watching the short videos (1-2 minutes long) to learn the main features and usage scenarios.

    Your First Experiment with Guidance Explorer
    For your first test with Guidance Explorer, try creating a Word doc that has just the guidelines you want. 

    To run your first experiment:

    1. Create a custom view of the guidance
    2. Save the view of the guidance as a Word doc

    How To Get Involved

    1. Join the CodeGallery workspace.  To join the workspace, sign up at the Guidance Explorer Home on Codegallery
    2. Subscribe to the RSS feeds.   To subscribe to the feeds, use the RSS buttons in each section of Guidance Explorer Home on Codegallery
    3. Participate in the newsgroups.  To participate in the newsgroups, use the Guidance Explorer Message Boards on Codegallery
    4. Provide feedback on the alias.  To do so, send email to GETOOL at Microsoft.com.

    What's Next
    These are some of the ideas we'd like to implement:

    • VS.NET integration
    • Refactoring additional guidance (e.g. existing patterns & practices guidance such as the data access, exception management, and caching guidance)
    • New guidance types (such as "test cases", "code examples", "project patterns", "whiteboard solutions")
    • New topics (such as reliability, manageability, … etc.)
    • Integrating bodies of guidance and ecosystems (such as integration with product documentation)

    I also hope to create a model for "Guidance Feeds", where you can subscribe to relevant guidance, as well as integrate many of the emerging social software concepts, such as allowing the network/community to rate the guidance, rate the raters and contributors, and create community-driven, shareable custom views.

    About Our Team
    Our core team consists of:

    • Prashant Bansode.   Prashant was a core member of my Whidbey Security Guidance Project, so he's very seasoned.   I chose him specifically because of his unmatched ability to execute, and because he is one of the best customer champions I know.  What surprised me about Prashant is his ability to not only manage his own work, but help guide others, and he really gets how to deliver incremental value.
    • Diego Gonzalez.  Diego is a coding machine.  He's also capable of bridging dreams and reality with working models.   Usually, by the time you've finished your sentence on what you'd like to see, Diego's already implementing it. 
    • Ed Jezierski.   Ed's simply brilliant.  I've never seen a more impressive mix of people focus and technical expertise.  If you can dream it up, Ed can make it happen.  If you can't dream it up, Ed can dream it up for you.  Just insert a random wish and Ed can turn it into a working prototype, and incredible slideware to match.  Ed brings to the table a ton of social software concepts and ideas around taking guidance to the next level.  I've worked with Ed for many years, but it's been a while since we've partnered up on the same project.  I look forward to many brainstorms, whiteboard sessions, and off the deep end conversations over lunch.
    • Ariel Neisen.  Ariel is a developer on the team.  Ariel works for Lagash with Diego and has been Diego's coding partner.
    • Mike Reinstein.   Mike works for Security Innovation.  He's a Web application security expert.  He not only brings security development and design experience, but strong technical writing skills that have contributed to exceptional content.
    • Paul Saitta.   Paul was previously a member of IO Active and is now working for Security Innovation.  He's an expert in Web applications and white-box security audits.  He's been able to distill thousands of hours of customer audits into prescriptive guidance that illuminates common mistakes in the real world.
    • Jason Taylor.   I first met Jason during my Whidbey Security Guidance Project.  He impressed me with his ability to think on his feet, execute at a rate faster than most people can ever imagine, and his ability to distill and document expertise at a level few individuals can go.  Jason has 7 years Microsoft experience under his belt, and was one of Microsoft's first test-architects.  Now he's a V.P. for Security Innovation's security consulting group.  Aside from bringing a wealth of security experience to the table,  Jason has a lot of ideas around how to improve guidance for customers in very practical ways.

    Key Links

