J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

July, 2007

    MSF Agile Frame (Workstreams and Key Activities)

    When I review an approach, I find it helpful to distill it to a simple frame so I can get a bird's-eye view.  For MSF Agile, I found the most useful frame to be the workstreams and key activities.  According to MSF, workstreams are simply groups of activities that flow logically together and are usually associated with a particular role.  I couldn't find this view in MSF Agile, so I created one:

    Workstream | Role | Key Activities
    Capture Project Vision | Business Analyst | Write Vision Statement; Define Personas; Refine Personas
    Create a Quality of Service Requirement | Business Analyst | Brainstorm Quality of Service Requirements; Develop Lifestyle Snapshot; Prioritize Quality of Service Requirements List; Write Quality of Service Requirements; Identify Security Objectives
    Create a Scenario | Business Analyst | Brainstorm Scenarios; Develop Lifestyle Snapshot; Prioritize Scenario List; Write Scenario Description; Storyboard a Scenario
    Guide Project | Project Manager | Review Objectives; Assess Progress; Evaluate Test Metric Thresholds; Triage Bugs; Identify Risk
    Plan an Iteration | Project Manager | Determine Iteration Length; Estimate Scenario; Estimate Quality of Service Requirements; Schedule Scenario; Schedule Quality of Service Requirement; Schedule Bug Fixing Allotment; Divide Scenarios into Tasks; Divide Quality of Service Requirements into Tasks
    Guide Iteration | Project Manager | Monitor Iteration; Mitigate a Risk; Conduct Retrospectives
    Create a Solution Architecture | Architect | Partition the System; Determine Interfaces; Develop Threat Model; Develop Performance Model; Create Architectural Prototype; Create Infrastructure Architecture
    Build a Product | Developer | Start a Build; Verify a Build; Fix a Build; Accept Build
    Fix a Bug | Developer | Reproduce the Bug; Locate the Cause of a Bug; Reassign a Bug; Decide on a Bug Fix Strategy; Code the Fix for a Bug; Create or Update a Unit Test; Perform a Unit Test; Refactor Code; Review Code
    Implement a Development Task | Developer | Cost a Development Task; Create or Update a Unit Test; Write Code for a Development Task; Perform Code Analysis; Perform a Unit Test; Refactor Code; Review Code; Integrate Code Changes
    Close a Bug | Tester | Verify a Fix; Close the Bug
    Test a Quality of Service Requirement | Tester | Define Test Approach; Write Performance Tests; Write Security Tests; Write Stress Tests; Write Load Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    Test a Scenario | Tester | Define Test Approach; Write Validation Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    Release a Product | Release Manager | Execute a Release Plan; Validate a Release; Create Release Notes; Deploy the Product

    Inspections

    Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.

    Design Inspections
    In a design inspection, you evaluate the key engineering decisions.  This helps avoid expensive do-overs.  Think of inspections as a dry run of the design assumptions.  Here are some practices I’ve found effective for design inspections:

    • Use inspections to checkpoint your strategies before going too far down the implementation path.
    • Use inspections to expose the key engineering risks.
    • Use scenarios to keep the inspections grounded.  You can’t evaluate the merits of a design or architecture in a vacuum.
    • Use a whiteboard when you can.  It’s easy to drill into issues, as well as step back as needed.
    • Tease out the relevant end-to-end test cases based on risks you identify.
    • Build pools of strategies (i.e. design patterns) you can share.  It’s likely that for your product line or context, you’ll see recurring issues.
    • Balance user goals, business goals, and technical goals.  The pitfall is to do a purely technical evaluation.  Designs are always trade-offs.

    Code Inspections
    In a code inspection, you focus on the implementation.  Code inspections are particularly effective for finding lower-level issues, as well as for balancing trade-offs.  For example, a lot of security issues are implementation-level, and they require trade-off decisions.  Here are some practices I’ve found effective for code inspections:

    • Use checklists to share the “building codes.”  For example, the .NET Design Guidelines are one set of building codes.  There are also building codes for security, performance, etc. (see the sketch after this list).
    • Use scenarios and objectives to bound and test.  This helps you avoid arbitrary optimization or blindly applying recommendations.
    • Focus the inspection.  I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
    • Pair with an expert in the area you’re inspecting.
    • Build and draw from a pool of idioms (i.e., patterns and anti-patterns).
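
    To make the “building codes” idea concrete, here’s a minimal sketch of the kind of issue a checklist-driven code inspection tends to catch.  The class and method names are made up for illustration:

        using System;
        using System.Text;

        public static class ReportBuilder
        {
            // Flagged by the checklist: string concatenation in a loop creates a new
            // string on every pass, which gets expensive for large inputs.
            public static string BuildSlow(string[] lines)
            {
                string report = string.Empty;
                foreach (string line in lines)
                {
                    report += line + Environment.NewLine;
                }
                return report;
            }

            // Suggested fix from the checklist: use StringBuilder for repeated appends.
            public static string BuildFast(string[] lines)
            {
                StringBuilder report = new StringBuilder();
                foreach (string line in lines)
                {
                    report.AppendLine(line);
                }
                return report.ToString();
            }
        }

    The specific rule matters less than the fact that it's written down and shared; that's what makes the inspection repeatable.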

    Deployment Inspections
    Deployment is where the application meets the infrastructure.  Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability, and manageability.  Here are some practices I’ve found effective for deployment inspections:

    • Use scenarios to help you prioritize.
    • Know the knobs and switches that influence runtime behavior.
    • Use checklists to help build and share expertise.  Knowledge of knobs and switches tends to be low-level and art-like.
    • Focus your inspections.  I’ve found it more productive and effective to do focused inspections.  Think of it as divide and conquer.

    Additional Considerations

    • Set objectives.  Without objectives, it's easy to go all over the board.
    • Keep a repository.  In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point.  Each team then tailors for their specific project.
    • Integrate inspections with your quality assurance efforts for continuous improvement.
    • Identify the skill sets you'll need for further drill-downs (e.g., detailed design, coding, troubleshooting, maintenance).  If you don't involve the right people, you won't produce effective results.
    • Use inspections as part of your acceptance testing for security and performance.
    • Use checklists as starting points.  Refine and tailor them for your context and specific deliverables.
    • Leverage tools to automate the low-hanging fruit.  Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
    • Tailor your checklists for application types (Web application, Web service, desktop application, component), for verticals (manufacturing, financial, etc.), and for project context (Internet-facing, high security, etc.)

    In the future, I'll post some more specific techniques for security and performance.

    Performance Inspections

    In this post, I'll focus on design, code, and deployment inspections for performance.  Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections at key stages in your life cycle, such as design, implementation and deployment.

    Keys to Effective Inspections

    • Know what you're looking for.
    • Use scenarios to illustrate a problem.
    • Bound the acceptance criteria with goals and constraints.

    Performance Frame
    The Performance Frame is a set of categories that helps you organize and focus on performance issues.   You can use the frame to organize principles, practices, patterns and anti-patterns.  The categories are also effective for organizing sets of questions to use during inspections.  By using the categories in the frame, you can chunk up your inspections.   The frame is also good for finding low-hanging fruit.    

    Performance Design Inspections
    Performance design inspections focus on the key engineering decisions and strategies.  Basically, these are the decisions that have cascading impact and that you don't want to make on the fly.  For example, your candidate strategies for caching per-user and application-wide data, paging records, and exception management would be good to inspect.  Effective performance design inspections include analyzing the deployment and infrastructure, walking the performance frame, and doing a layer-by-layer analysis.  Question-driven inspections are good because they help surface key risks and encourage curiosity.
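
    For example, here's a minimal sketch of the per-user versus application-wide caching decision mentioned above.  The cache class and the loader names are hypothetical; the point for the inspection is the cascading impact of keying the cache per user:

        using System;
        using System.Collections.Generic;

        public static class SimpleCache
        {
            private static readonly Dictionary<string, object> _store = new Dictionary<string, object>();
            private static readonly object _sync = new object();

            // Load the value once and reuse it on later requests.
            public static object GetOrAdd(string key, Func<object> loader)
            {
                lock (_sync)
                {
                    object value;
                    if (!_store.TryGetValue(key, out value))
                    {
                        value = loader();
                        _store[key] = value;
                    }
                    return value;
                }
            }
        }

        // Application-wide data: one shared copy for all users.
        //   SimpleCache.GetOrAdd("catalog", LoadCatalog);
        // Per-user data: one copy per user, so memory grows with the user base --
        // exactly the kind of cascading impact worth questioning in a design inspection.
        //   SimpleCache.GetOrAdd("prefs:" + userName, () => LoadPreferences(userName));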

    While there are underlying principles and patterns that you can consider, you need to temper your choices with prototypes, tests and feedback.  Performance decisions are usually trade-offs with other quality attributes, such as security, extensibility, or maintainability.  Performance Modeling helps you make trade-off decisions by focusing on scenarios, goals and constraints. 

    For more information, see Architecture and Design Review of a .NET Application for Performance and Scalability and Performance Modeling.

    Performance Code Inspections
    Performance code inspections focus on evaluating coding techniques and design choices. The goal is to identify potential performance and scalability issues before the code is in production.  The key to effective performance code inspections is to use a profiler to localize and find the hot spots.  The anti-pattern is blindly trying to optimize the code.  Again, a question-driven technique used in conjunction with measuring is key.
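
    To keep the inspection grounded in measurement rather than guesswork, here's a minimal sketch of a timing harness for a suspect code path.  The helper and the OrderParser name are hypothetical; a real profiler gives you call-level detail, but even a Stopwatch keeps you honest:

        using System;
        using System.Diagnostics;

        public static class MeasureFirst
        {
            // Time a candidate code path; let the numbers, not intuition,
            // decide whether it is worth optimizing.
            public static TimeSpan Time(Action candidate, int iterations)
            {
                candidate();  // warm up (JIT, caches)
                Stopwatch watch = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    candidate();
                }
                watch.Stop();
                return watch.Elapsed;
            }
        }

        // Example use, where OrderParser.Parse is a hot spot the profiler surfaced:
        //   TimeSpan baseline = MeasureFirst.Time(() => OrderParser.Parse(sample), 1000);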

    For more information, see Performance Code Inspection.

    Performance Deployment Inspections
    Performance deployment inspections focus on tuning the configuration for your deployment scenario.  To do this, you need measurements and runtime data to know where to look, which includes simulating your deployment environment and workload.  You also need to know the knobs and switches that influence runtime behavior, and you need to be bounded by your quality of service requirements so you know when you're done.  Scenarios help you prioritize.

    Security Inspections

    Inspections are among my favorite tools for improving security.   I like them because they’re so effective and efficient.  Here’s why:

    • If you know what to look for, you have a better chance of finding it.  (The reverse is also true: if you don’t know what you’re looking for, you’re not going to see it)
    • You can build your inspection criteria from common patterns (Security issues tend to stem from common patterns)
    • You can share your inspection criteria
    • You can prioritize your inspection criteria
    • You can chunk your inspection criteria

    Bottom line -- you can identify, catalog and share security criteria faster than new security issues come along.

    Security Frame
    Our Security Frame is simply a set of categories we use to “frame” out, organize, and chunk up security threats, attacks, vulnerabilities and countermeasures, as well as principles, practices and patterns.  The categories make it easy to distill and share the information in a repeatable way. 

    Security Design Inspections
    Performing a Security Design Inspection involves evaluating your application’s architecture and design in relation to its target deployment environment from a security perspective.  You can use the Security Frame to help guide your analysis.   For example, you can walk the categories (authentication, authorization, … etc.) for the application.  You can also use the categories to do a layer-by-layer analysis.  Design inspections are a great place to checkpoint your core strategies, as well as identify what sort of end-to-end tests you need to verify your approach.

    Here's the approach in a nutshell:

    • Step 1.  Evaluate the deployment and infrastructure. Review the design of your application as it relates to the target deployment environment and the associated security policies. Consider the constraints imposed by the underlying infrastructure-layer security and the operational practices in use.
    • Step 2.  Evaluate key security design using the Security Frame. Review the security approach that was used for critical areas of your application. An effective way to do this is to focus on the set of categories that have the most impact on security, particularly at an architectural and design level, and where mistakes are most often made. The Security Frame describes these categories. They include authentication, authorization, input validation, exception management, and other areas. Use the Security Frame as a road map so that you can perform reviews consistently, and to make sure that you do not miss any important areas during the inspection.
    • Step 3.  Perform a layer-by-layer analysis. Review the logical layers of your application, and evaluate your security choices within your presentation, business, and data access logic.

    For more information, see our patterns & practices Security Design Inspection Index.
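
    To make one of the frame categories concrete, here's a minimal sketch of the input validation approach a design inspection should confirm: constrain input to what you expect (an allow list) rather than trying to filter out everything that looks dangerous.  The class name and the pattern are illustrative only:

        using System.Text.RegularExpressions;

        public static class InputValidator
        {
            // Allow-list validation: accept only the characters and length we expect.
            private static readonly Regex UserNamePattern =
                new Regex(@"^[a-zA-Z0-9_\-]{1,32}$", RegexOptions.Compiled);

            public static bool IsValidUserName(string candidate)
            {
                return candidate != null && UserNamePattern.IsMatch(candidate);
            }
        }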

    Security Code Inspections
    This is truly a place where inspections shine.  While static analysis will catch a lot of the low hanging fruit, manual inspection will find a lot of the important security issues that are context dependent.  Because it’s a manual exercise, it’s important to set objectives, and to prioritize based on what you’re looking for.   Whether you do your inspections in pairs or in groups or individually, checklists in the form of criteria or inspection questions are helpful.

    Here's the approach in a nutshell:

    • Step 1. Identify security code review objectives. Establish goals and constraints for the review.
    • Step 2. Perform a preliminary scan. Use static analysis to find an initial set of security issues and improve your understanding of where the security issues are most likely to be discovered through further review.
    • Step 3. Review the code for security issues. Review the code thoroughly with the goal of finding security issues that are common to many applications. You can use the results of step two to focus your analysis.
    • Step 4. Review for security issues unique to the architecture. Complete a final analysis looking for security issues that relate to the unique architecture of your application. This step is most important if you have implemented a custom security mechanism or any feature designed specifically to mitigate a known security threat.

    For more information on security code inspections, see our patterns & practices Security Code Inspection Index.  For examples of inspection questions, see Security Question List: Managed Code (.NET Framework 2.0) and Security Question List: ASP.NET 2.0.
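
    As an example of the context-dependent issues a manual review catches (Steps 3 and 4 above), here's a minimal sketch contrasting a query built by string concatenation with a parameterized version.  The table, column, and method names are made up for illustration:

        using System.Data;
        using System.Data.SqlClient;

        public static class CustomerQueries
        {
            // Flagged in review: user input concatenated into the SQL text opens the door
            // to SQL injection, and a scanner can miss it when the input path is indirect.
            public static IDataReader FindByNameUnsafe(SqlConnection connection, string name)
            {
                SqlCommand command = connection.CreateCommand();
                command.CommandText = "SELECT * FROM Customers WHERE Name = '" + name + "'";
                return command.ExecuteReader();
            }

            // Reviewed fix: a parameterized query keeps the data out of the command text.
            public static IDataReader FindByName(SqlConnection connection, string name)
            {
                SqlCommand command = connection.CreateCommand();
                command.CommandText = "SELECT * FROM Customers WHERE Name = @name";
                command.Parameters.AddWithValue("@name", name);
                return command.ExecuteReader();
            }
        }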

    Security Deployment Inspections
    Deployment Inspections are particularly effective for security because this is where the rubber meets the road.  In a deployment inspection, you walk the various knobs and switches that impact the security profile of your solution.  This is where you check things such as accounts, shares, protocols, … etc. 

    The following server security categories are key when performing a security deployment inspection:

    • Patches and Updates
    • Accounts
    • Auditing and Logging
    • Files and Directories
    • Ports
    • Protocols
    • Registry
    • Services
    • Shares

    For more information, see our patterns & practices Security Deployment Inspection Index.

    Update on Key Projects

    While I've been quiet on my blog, we've been busy behind the scenes.  Here's a rundown on key things:

    • Arch Nuggets.  These don't exist yet.  I'm socializing an idea to create small, focused guidance for key engineering decisions.  I'm picturing small articles with insight and chunks of code.  The code would be less about reuse and more about helping you quickly prototype and test.  You can think of them as focused architectural spikes or tests.  The scope would be cross-technology, cross-cutting concerns and application infrastructure scenarios such as data access, exception management, logging, etc.  They'll be lightweight and focused on helping you figure out how to put our platform legos together.  For a concrete example, imagine more articles and code examples such as How To: Page Records in .NET Applications.
    • CodePlex Site - Performance Testing Guidance.  This is our online knowledge base for performance testing nuggets.  We're refactoring nuggets from our performance testing guide.  We'll then create new modules that show you how to make the most of Visual Studio.
    • CodePlex Site - VSTS Guidance.  This is our online knowledge base for Visual Studio Team Foundation Server guidance.  We're refactoring nuggets from our TFS guide.
    • Guidance Explorer.  This is where we store our reusable guidance nuggets.  On the immediate radar, we're making some fixes to improve performance and to make it easier to browse our catalog of nuggets.
    • Guide - Getting Results.  As a pet project, I'm putting together what I've learned over the past several years getting results at Microsoft.  It's going to include what I learned from the school of hard knocks.  I'm approaching it the way I approach any guide and I'm focusing on the principles, practices and patterns for effectiveness.
    • Guide - Performance Testing Guidance for Web Applications.  We're wrapping this up this week.  We're finishing the final edits and then building a new PDF.
    • Guide - Team Development with Visual Studio Team Foundation Server.   We’re basically *guidance complete.*  Since the Beta release, we added guidelines and practices for build, project management, and reporting.  We also revamped the deployment chapter, as well as improved the process guidance.  It's a substantial update. 
    • MSDN port of the guidance.  We have enough critical mass in terms of VSTS and Performance Testing guidance to justify porting to MSDN.  While many customers have used the guidance from the CodePlex site as is, for greater reach, we need to start the process of making the guidance a part of the MSDN library.  This will be an interesting exercise.
    • SharePoint test for our guidance store.  We're testing the feasibility of using SharePoint for our back end (our guidance store) and our online Web application.  The key challenges we're hitting are creating effective publishing and consuming user experiences.  It's interesting and insightful, and there's lots to learn.

    I'll have more to say soon.
