J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    New Release: patterns & practices Performance Testing Guidance for Web Applications

    • 14 Comments

    We released the final version of our patterns & practices Performance Testing Guidance for Web Applications.  This guide provides an end-to-end approach for implementing performance testing. Whether you're new to performance testing or looking for ways to improve your current performance-testing approach, you will gain insights that you can tailor to your specific scenarios.  The main purpose of the guide is to be a relatively stable backdrop to capture, consolidate and share a methodology for performance testing.  Even though the topics addressed apply to other types of applications, we focused on explaining from a Web application perspective to maintain consistency and to be relevant to the majority of our anticipated readers.

    Key Changes Since Beta 1

    • Added forewords by Alberto Savoia and Rico Mariani.
    • Integrated more feedback and insights from customer reviews (particularly chapters 1-4, 9, 14, 18).
    • Integrated learnings from our Engineering Excellence team.
    • Refactored and revamped the performance testing types.
    • Revamped and improved the test execution chapter.
    • Revamped and improved the reporting chapter.
    • Revamped the stress testing chapter.
    • Released the guide in HTML pages on our CodePlex Wiki.

    Highlights

    • Learn the core activities of performance testing.
    • Learn the values and benefits associated with each type of performance testing.
    • Learn how to map performance testing to Agile.
    • Learn how to map performance testing to CMMI.
    • Learn how to identify and capture performance requirements and testing objectives based on the perspectives of system users, business owners of the system, and the project team, in addition to compliance expectations and technological considerations.
    • Learn how to apply principles of effective reporting to performance test data.
    • Learn how to construct realistic workload models for Web applications based on expectations, documentation, observation, log files, and other data available prior to the release of the application to production.
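
    To make the last bullet concrete, here's a minimal illustrative sketch of a workload model in action.  The scenario names and percentages are invented, and Python stands in for whatever load-test driver you actually use; the guide itself covers the real techniques:

        import random

        # Hypothetical usage distribution for a Web application, derived
        # from log files and observation (all numbers are made up).
        workload = {
            "browse_catalog": 0.50,
            "search":         0.30,
            "add_to_cart":    0.15,
            "checkout":       0.05,
        }

        # Each virtual user picks its next scenario according to the model,
        # so the simulated load mirrors the expected production mix.
        scenarios = list(workload)
        weights = list(workload.values())
        print(random.choices(scenarios, weights=weights, k=10))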

    Why We Wrote the Guide

    • To consolidate real-world lessons learned around performance testing.
    • To present a roadmap for end-to-end performance testing.
    • To narrow the gap between state of the art and state of the practice.

    Scope

    • Managing and conducting performance testing in both dynamic (e.g., Agile) and structured (e.g., CMMI) environments.
    • Performance testing, including load testing, stress testing, and other types of performance related testing.
    • Core activities of performance testing: identifying objectives, designing tests, executing tests, analyzing results, and reporting.

    Features of the Guide

    • Approach for performance testing.  The guide provides an approach that organizes performance testing into logical units to help you incrementally adopt performance testing throughout your application life cycle.
    • Principles and practices.  These serve as the foundation for the guide and provide a stable basis for recommendations. They also reflect successful approaches used in the field.
    • Processes and methodologies.  These provide steps for managing and conducting performance testing. For simplification and tangible results, they are broken down into activities with inputs, outputs, and steps. You can use the steps as a baseline or to help you evolve your own process.
    • Life cycle approach.  The guide provides end-to-end guidance on managing performance testing throughout your application life cycle, to reduce risk and lower total cost of ownership (TCO).
    • Modular.  Each chapter within the guide is designed to be read independently. You do not need to read the guide from beginning to end to benefit from it. Use the parts you need.
    • Holistic.  The guide is designed with the end in mind. If you do read the guide from beginning to end, it is organized to fit together in a comprehensive way. The guide, in its entirety, is better than the sum of its parts.
    • Subject matter expertise.  The guide exposes insight from various experts throughout Microsoft and from customers in the field.

    Parts

    • Part I, Introduction to Performance Testing
    • Part II, Exemplar Performance Testing Approaches
    • Part III, Identify the Test Environment
    • Part IV, Identify Performance Acceptance Criteria
    • Part V, Plan and Design Tests
    • Part VI, Execute Tests
    • Part VII, Analyze Results and Report
    • Part VIII, Performance-Testing Techniques

    Chapters

    • Chapter 1 – Fundamentals of Web Application Performance Testing
    • Chapter 2 – Types of Performance Testing
    • Chapter 3 – Risks Addressed Through Performance Testing
    • Chapter 4 – Web Application Performance Testing Core Activities
    • Chapter 5 – Coordinating Performance Testing with an Iteration-Based Process
    • Chapter 6 – Managing an Agile Performance Test Cycle
    • Chapter 7 – Managing the Performance Test Cycle in a Regulated (CMMI) Environment
    • Chapter 8 – Evaluating Systems to Increase Performance-Testing Effectiveness
    • Chapter 9 – Determining Performance Testing Objectives
    • Chapter 10 – Quantifying End-User Response Time Goals
    • Chapter 11 – Consolidating Various Types of Performance Acceptance Criteria
    • Chapter 12 – Modeling Application Usage
    • Chapter 13 – Determining Individual User Data and Variances
    • Chapter 14 – Test Execution
    • Chapter 15 – Key Mathematic Principles for Performance Testers
    • Chapter 16 – Performance Test Reporting Fundamentals
    • Chapter 17 – Load-Testing Web Applications
    • Chapter 18 – Stress-Testing Web Applications

    Our Team

    Contributors and Reviewers

    • External Contributors and Reviewers: Alberto Savoia; Ben Simo; Cem Kaner; Chris Loosley; Corey Goldberg; Dawn Haynes; Derek Mead; Karen N. Johnson; Mike Bonar; Pradeep Soundararajan; Richard Leeke; Roland Stens; Ross Collard; Steven Woody
    • Microsoft Contributors / Reviewers: Alan Ridlehoover; Clint Huffman; Edmund Wong; Ken Perilman; Larry Brader; Mark Tomlinson; Paul Williams; Pete Coupland; Rico Mariani


  • J.D. Meier's Blog

    Performance Threats

    • 4 Comments

    Rico and I have long talked about performance threats.  I finally created a view that shows how you can think of performance issues in terms of vulnerabilities, threats and countermeasures.  See Performance Frame v2.

    In this case, the vulnerabilities, threats and countermeasures are purely from a technical design standpoint.  To rationalize performance against other quality attributes and against goals and constraints, you can use performance modeling and threat modeling.  To put it another way, evaluate your design trade-offs against the acceptance criteria for your usage scenarios, considering your user, system, and business goals and constraints.

  • J.D. Meier's Blog

    Blog Improvements

    • 2 Comments

    I did a few things to try to improve browsing and findability.

    I was surprised by how many of my posts related to productivity.  Then again, I focus heavily on productivity with my mentees.  I think personal productivity is an important tool for turning their great ideas, hopes, and dreams into results.  If it's not already their strength, I want to make sure it's at least not a liability.

    On my Book Share blog, I changed themes, reorganized key features, and created a best of list.  While it may sound simple here, I actually went through quite a bit of trial and error.  I tested many, many user experience patterns and relied heavily on feedback from a trusted set of reviewers.  Although I used a satisficing strategy, I did try to make browsing the content as efficient and effective as possible.  I was surprised by how many subtle patterns and practices there are for blog layouts.  Maybe more surprising was how many anti-patterns there are.

  • J.D. Meier's Blog

    Cutting Questions

    • 5 Comments

    How do you cut to the chase?  How do you clear the air of ambiguity and get to facts?  Ask cutting questions.

    My manager, Per, doesn't ask a lot of questions.  He asks the right ones.  Here are some examples:

    • Who's on board?  Who are five customers that stand behind you?
    • Next steps?
    • What does your gut say?
    • Is it working?  Is it effective?
    • What would "x" say? (for example, what would your peers say?)
    • What's their story?
    • Where's your prioritized list of scenarios?

    As simple as it sounds, having five separate customers stand behind you is a start.  I'm in the habit of litmus checking my path early on to see who's on board or to find the resistance.  As customers get on board, my confidence goes up.  I've also seen this cutting question work well with startups. I've asked a few startups about their five customers.  Some had great ideas, but no customers on board.  The ones that had at least five are still around.

    At the end of any meeting, Per never fails to ask "next steps?", and the meeting quickly shifts from talk to action.

    "Is it working?" is a pretty cutting question.  It's great because it forces you to step back and reflect on your results and consider a change in approach.

  • J.D. Meier's Blog

    Vision, Mission, Values

    • 2 Comments

    There's a lot to be said for well-crafted vision and mission statements.   I've been researching and leaving a trail at The Bookshare.

    In a Nutshell

    • Mission - Who are you? What do you do?
    • Vision - Where do you want to go?
    • Values - What do you value? What's important? (your corporate culture)

    How Do You Craft Them

    1. You start by figuring out the values.  You figure out the values by observing how your organization prioritizes and how they spend their time.  There can be a gap between what folks say they value and what they actually do.  Actions speak louder than words.
    2. Once you know your culture and values, you can figure out your mission -- who you are and what you do.  What unique value does your organization bring to the table?  What is your unique strength?  In a world of survival of the fittest, this is important to know and to leverage.
    3. Now that you know who you are, you can figure out where you want to go.

    A good vision statement is a one-liner statement you can repeat in the halls.  Nobody has to memorize it.  It's easy to say and it's easy to grok.  The same goes for a mission statement.  You might need to add another line or two to your mission statement to disambiguate, but if folks don't quickly get what you do from your mission statement -- it's not working.

    How Do You Use Them

    • Use a mission statement to quickly tell others what you do.
    • Use a vision statement to inspire and rally the team.  It should be on the horizon, but achievable and believable.
    • Use a mission statement as a gauge for success. 
    • Set goals and objectives that tell you whether you're accomplishing your mission and moving toward or away from your vision.
    • Use your mission to remind you what you do (and what you don't) and to help you prioritize.
    • Craft a personal mission and vision statement to help you get clarity on what you want to accomplish.
    • Use your personal vision and mission statements to help you stay on your horse, or get back on, when you get knocked down, or lose your way.

    Examples
    I'm a fan of using reference examples (lots of them) to get a sense of what works and what doesn't.  The Man on a Mission blog is dedicated to mission statements and has plenty of real-life examples to walk through. 

  • J.D. Meier's Blog

    Daily Syncs

    • 2 Comments

    On my teams we do a daily sync meeting.  It's 10 minutes max.  We go around the team with three questions:

    1. What did you get done?
    2. What are you getting done next?
    3. Where do you need help?

    We stay out of details (that's for offline and follow-up).  It's a status meeting focused more on accomplishments and progress than on reporting activities (lots of folks are doing lots of things, so it's crisper to focus on accomplishments).  The more distributed the team, the more important the meeting.

    Keys to Results

    • 10-Minute Timebox.  The 10-minute bar is apparently a big factor in how folks view the meeting, based on feedback from folks who have been in longer meetings (a half hour or more).  The 10-minute max is key because it keeps a fast pace and energy high (vs. another meeting of blah, blah, blah).  We can always finish earlier (in fact, one of my five-person teams was regularly finishing the meeting in under 2 minutes).
    • Daily.  Daily is important.  Having the meeting daily means everybody can structure their day consistently.  Daily also makes it easy to build a routine and reduce friction points.  It also means that team members have a reliable forum for getting help if needed.

    The best pattern that has worked over time is ...

    • Mondays - we define the most important outcomes for the week (the few big things that matter, no laundry lists).  This is actually closer to a half-hour (max) meeting.
    • Daily - we do a daily checkpoint meeting (this is about execution, bottlenecks, and awareness).
    • Fridays - we reflect on lessons learned and make any improvements to project practices.

    Another way of thinking about this is ... "if this were the end of the week, what would you feel good about having completed?"  "Each day, are we getting closer or further, or do we need to readjust priorities or expectations?" ...  "What did we learn and what can we improve?"


  • J.D. Meier's Blog

    Execution Checklists

    • 5 Comments

    Execution checklists are a simple but effective technique for improving results.  Rather than a to-do list, it's a focused checklist of steps in sequence to execute a specific task.  I use Notepad to start.  I write the steps.  On each execution of the steps, by myself or a teammate, we improve the steps as we learn.  We share our execution checklists in Groove or in a Wiki.

    Key Scenarios
    There are two main scenarios:

    1. You are planning the work to execute.  In this case, you're thinking through what you have to get done.  This is great when you feel over-burdened or if you have a mind-numbing, routine task that you need to get done.  This can help you avoid task saturation and it can also help you avoid silly mistakes while you're in execution mode.
    2. You are paving a path through the execution.  In this case, you're leaving a trail of what worked.  This works great for tasks that you'll have to perform more than once or when you have to share best practices across the team.

    I encourage my teams to create execution checklists for any friction points or sticking spots we hit.  For example, if there's a tough process with lots of moving parts, we capture the steps and tune them over time as we gain proficiency.  As simple as this sounds, it's very effective, whether it's for a personal task, a team task, or any execution steps you want to improve.

    One of my most valuable execution checklists is steps for rebuilding my box.  While I could rebuild my box without it, I would fumble around a bit and probably forget some key things, and potentially get reminded the hard way.

    The most recent execution checklist I made was for building the PDF for our Team Development with Visual Studio Team Foundation Server guide.  There were a lot of manual steps and there was plenty of room for error.  Each time I made a build, I baked the lessons learned into the execution checklist.  By the time I got to the successful build, there was much less room for error simply by following the checklist.

  • J.D. Meier's Blog

    New Release: patterns & practices Team Development with Team Foundation Server Guide

    • 11 Comments

    Today we released the final version of our patterns & practices Team Development with Visual Studio Team Foundation Server guide.  It's our Microsoft playbook for Team Foundation Server.  It shows you how to make the most of Team Foundation Server.  It's a compendium of proven practices, product team recommendations, and insights from the field.

    Key Changes Since Beta 1

    • We added guidelines for build, project management and reporting.
    • We added practices at a glance for build, project management, and reporting.
    • We added a chapter to summarize key Visual Studio 2008 changes.
    • We revamped our Internet access strategies.
    • We did a full sweep of the guide.
    • We completed more thorough product team reviews for key chapters.

    Contents at a Glance

    • Part I, Fundamentals
    • Part II, Source Control
    • Part III, Builds
    • Part IV, Large Project Considerations
    • Part V, Project Management
    • Part VI, Process Templates
    • Part VII, Reporting
    • Part VIII, Setting Up and Maintaining the Team Environment
    • Part IX, Visual Studio 2008 Team Foundation Server


    Chapters

    • Ch 01 – Introducing the Team Environment
    • Ch 02 – Team Foundation Server Architecture
    • Ch 03 – Structuring Projects and Solutions in Source Control
    • Ch 04 – Structuring Projects and Solutions in Team Foundation Source Control
    • Ch 05 – Defining Your Branching and Merging Strategy
    • Ch 06 – Managing Source Control Dependencies in Visual Studio Team System
    • Ch 07 – Team Build Explained
    • Ch 08 – Setting Up Continuous Integration with Team Build
    • Ch 09 – Setting Up Scheduled Builds with Team Build
    • Ch 10 – Large Project Considerations
    • Ch 11 – Project Management Explained
    • Ch 12 – Work Items Explained
    • Ch 13 – Process Templates Explained
    • Ch 14 – MSF for Agile Software Development Projects
    • Ch 15 – Reporting Explained
    • Ch 16 – Installation and Deployment
    • Ch 17 – Providing Internet Access to Team Foundation Server
    • Ch 18 – What’s New in Visual Studio 2008 Team Foundation Server

    Our Team

    Contributors and Reviewers

    • External Contributors / Reviewers: David P. Romig, Sr; Dennis Rea; Eugene Zakhareyev; Leon Langleyben; Martin Woodward; Michael Rummier; Miguel Mendoza; Mike Fourie; Quang Tran; Sarit Tamir; Tushar More; Vaughn Hughes
    • Microsoft Contributors / Reviewers:  Aaron Hallberg; Ahmed Salijee; Ajay Sudan; Ajoy Krishnamoorthy; Alan Ridlehoover; Alik Levin; Ameya Bhatawdekar; Bijan Javidi; Bill Essary; Brett Keown; Brian Harry; Brian Keller; Brian Moore; Buck Hodges; Burt Harris; Conor Morrison; David Caufield; David Lemphers; Doug Neumann; Edward Jezierski; Eric Blanchet; Eric Charran; Graham Barry; Gregg Boer; Grigori Melnik; Janet Williams Hepler; Jeff Beehler; Jose Parra; Julie MacAller; Ken Perilman; Lenny Fenster; Marc Kuperstein; Mario Rodriguez; Matthew Mitrik; Michael Puleio; Nobuyuki Akama; Paul Goring; Pete Coupland; Peter Provost; Granville (Randy) Miller; Rob Caron; Robert Horvick; Rohit Sharma; Ryley Taketa; Sajee Mathew; Siddharth Bhatia; Tom Hollander; Tom Marsh; Venky Veeraraghavan


  • J.D. Meier's Blog

    Improvement Frame

    • 3 Comments

    As a mentor at work, I like to checkpoint results.  While I can do area-specific coaching, I tend to take a more holistic approach.  For me, it's more rewarding to find ways to unleash somebody's full potential and improve their overall effectiveness at Microsoft.  Aside from checking against specific goals, I use the following frame to gauge progress.

    Improvement Frame

    Thinking / Feeling

    • Do you find your work rewarding?
    • Are you passionate about what you do?
    • Are you spending more time feeling good?
    • What thoughts dominate your mind now?
    • Is your general outlook more positive or negative?
    • Do you have more energy or less in general?
    • Are you still worried about the same things?
    • Are you excited about anything?
    • Have you changed your self-talk from inner-critic to coach?

    Situation

    • Are you spending more time working on what you enjoy?
    • What would you rather be spending more time doing?
    • Do you have the manager you want?
    • Do you have the job you want?
    • Are you moving toward or away from your career goals?
    • If your situation were never going to change, what one skill would you need to make the most of it?

    Time / Task Management

    • Are you driving your day or being driven?
    • Are you spending less time on administration?
    • Are you getting your "MUSTs" done?
    • Are you dropping the ball on anything important?
    • Do you have a task management system you trust?
    • Are you avoiding using your head as a collection point?
    • How are you avoiding biting off more than you can chew?
    • How are you delivering incremental value?

    Domain Knowledge

    • Have you learned new skills?
    • Have you sharpened your key strengths?
    • Have you reduced your key liabilities?
    • What are you the go-to person for?
    • What could you learn that would make you more valuable to your team?

    Strategies / Approaches

    • What are you approaching differently than in the past?
    • How are you more resourceful?
    • How are you finding lessons in everything you do?
    • How are you learning from everybody that you can?
    • How are you improving your effectiveness?
    • How are you modeling the success of others?
    • How are you tailoring advice to make it work for you?

    Relationships

    • Are you managing up effectively?
    • Are your priorities in sync with your manager's?
    • Has your support network grown or shrunk?
    • How are you participating in new circles of influence?
    • How are you spending more time with people that catalyze you?
    • How are you working more effectively with people that drain you?
    • How are you leveraging more mentors and area-specific coaches?

    I've found this frame very effective for quickly finding areas that need work and for surfacing sticking points.  It's also very revealing in terms of how much dramatic change there can be.  While situations or circumstances may not change much, I find that changes in strategies and approaches can have a profound impact.  My take on this is that while you can't always control what's on your plate, you can control how you eat it.

  • J.D. Meier's Blog

    One-Sliders

    • 2 Comments

    I showed a colleague of mine one of my tricks for building slide decks faster.  It's a divide-and-conquer approach I've been using for a few years.  I do what I call "one-sliders."

    Whenever I build a deck, such as for milestone meetings, I create a set of single-slide decks.  I name each slide appropriately (vision, scope, budget, ... etc.)  I then compose the master deck from the slides.

    Here are the benefits that might not be obvious:

    • It's easy to jump to a particular slide without manipulating a heavy deck, which helps when I'm first building the deck.
    • It encourages quick, focused reviews with the right people (e.g., I can pair with our CFO on the budget slide without hunting through a deck).
    • It encourages sharing with precision.  I share the relevant slide vs. "see slide 32" in a 60-slide deck.
    • I end up with a repository of reusable slide nuggets.  I find myself drawing from my "one-slider" depot regularly.
    • Doing a slide at a time encourages thinking in great slides.  It's similar to thinking in great pages in a Wiki (a trick Ward taught me).

    The biggest impact though is that now I find myself frequently sharing concise one-sliders, and getting points across faster and simpler than blobby mails.

  • J.D. Meier's Blog

    Update on Key Projects

    • 0 Comments

    While I've been quiet on my blog, we've been busy behind the scenes.  Here's a rundown on key things:

    • Arch Nuggets. These don't exist yet.  I'm socializing an idea to create small, focused guidance for key engineering decisions.  I'm picturing small articles with insight and chunks of code.  The code would be less about reuse and more about helping you quickly prototype and test.  You can think of them as focused architectural spikes or tests.  The scope would be cross-technology, cross-cutting concerns and application infrastructure type scenarios such as data access, exception management, logging, ... etc.  They'll be light-weight and focused on helping you figure out how to put our platform legos together.  For a concrete example, imagine more articles and code examples such as How To: Page Records in .NET Applications (a rough sketch of the idea follows this list).
    • CodePlex Site - Performance Testing Guidance.  This is our online knowledge base for performance testing nuggets.   We'll refactor nuggets from our performance testing guide.  We'll then create new modules that show you how to make the most out of Visual Studio.
    • CodePlex Site - VSTS Guidance.  This is our online knowledge base for Visual Studio Team Foundation Server guidance.  We're refactoring nuggets from our TFS guide.
    • Guidance Explorer.  This is where we store our reusable guidance nuggets.  On the immediate radar, we're making some fixes to improve performance as well as improve browsing our catalog of nuggets.
    • Guide - Getting Results.  As a pet project, I'm putting together what I've learned over the past several years getting results at Microsoft.  It's going to include what I learned from the school of hard knocks.  I'm approaching it the way I approach any guide and I'm focusing on the principles, practices and patterns for effectiveness.
    • Guide - Performance Testing Guidance for Web Applications.  We're wrapping this up this week.  We're finishing the final edits and then building a new PDF.
    • Guide - Team Development with Visual Studio Team Foundation Server.   We’re basically *guidance complete.*  Since the Beta release, we added guidelines and practices for build, project management, and reporting.  We also revamped the deployment chapter, as well as improved the process guidance.  It's a substantial update. 
    • MSDN port of the guidance.  We have enough critical mass in terms of VSTS and Performance Testing guidance to justify porting to MSDN.  While many customers have used the guidance from the CodePlex site as is, for greater reach, we need to start the process of making the guidance a part of the MSDN library.  This will be an interesting exercise.
    • SharePoint test for our guidance store.  We're testing the feasibility of using SharePoint for our back-end (our guidance store) and our online Web application.  The key challenges we're hitting are creating effective publishing and consuming user experiences.  It's interesting and insightful and there's lots to learn.
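
    Here's the kind of small, prototype-style snippet I'm picturing for the record-paging nugget.  It's a rough sketch only -- Python and SQLite stand in for the real platform -- meant for quick prototyping and testing rather than reuse:

        import sqlite3

        # Throwaway prototype: page through records with LIMIT/OFFSET.
        # The table and sample data are made up for the test.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
        conn.executemany("INSERT INTO orders (item) VALUES (?)",
                         [("item-%d" % i,) for i in range(1, 26)])

        PAGE_SIZE = 10

        def get_page(page_number):
            """Return one page of orders (page_number starts at 1)."""
            offset = (page_number - 1) * PAGE_SIZE
            return conn.execute(
                "SELECT id, item FROM orders ORDER BY id LIMIT ? OFFSET ?",
                (PAGE_SIZE, offset)).fetchall()

        for page in (1, 2, 3):
            print("page", page, get_page(page))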

    I'll have more to say soon.

  • J.D. Meier's Blog

    Security Inspections

    • 2 Comments

    Inspections are among my favorite tools for improving security.   I like them because they’re so effective and efficient.  Here’s why:

    • If you know what to look for, you have a better chance of finding it.  (The reverse is also true: if you don't know what you're looking for, you're not going to see it.)
    • You can build your inspection criteria from common patterns.  (Security issues tend to stem from common patterns.)
    • You can share your inspection criteria.
    • You can prioritize your inspection criteria.
    • You can chunk your inspection criteria.

    Bottom line -- you can identify, catalog and share security criteria faster than new security issues come along.

    Security Frame
    Our Security Frame is simply a set of categories we use to “frame” out, organize, and chunk up security threats, attacks, vulnerabilities and countermeasures, as well as principles, practices and patterns.  The categories make it easy to distill and share the information in a repeatable way. 

    Security Design Inspections
    Performing a Security Design Inspection involves evaluating your application’s architecture and design in relation to its target deployment environment from a security perspective.  You can use the Security Frame to help guide your analysis.   For example, you can walk the categories (authentication, authorization, … etc.) for the application.  You can also use the categories to do a layer-by-layer analysis.  Design inspections are a great place to checkpoint your core strategies, as well as identify what sort of end-to-end tests you need to verify your approach.

    Here's the approach in a nutshell:

    • Step 1.  Evaluate the deployment and infrastructure. Review the design of your application as it relates to the target deployment environment and the associated security policies. Consider the constraints imposed by the underlying infrastructure-layer security and the operational practices in use.
    • Step 2.  Evaluate key security design using the Security frame. Review the security approach that was used for critical areas of your application. An effective way to do this is to focus on the set of categories that have the most impact on security, particularly at an architectural and design level, and where mistakes are most often made. The security frame describes these categories. They include authentication, authorization, input validation, exception management, and other areas. Use the security frame as a road map so that you can perform reviews consistently, and to make sure that you do not miss any important areas during the inspection.
    • Step 3.  Perform a layer-by-layer analysis. Review the logical layers of your application, and evaluate your security choices within your presentation, business, and data access logic.

    For more information, see our patterns & practices Security Design Inspection Index.

    Security Code Inspections
    This is truly a place where inspections shine.  While static analysis will catch a lot of the low hanging fruit, manual inspection will find a lot of the important security issues that are context dependent.  Because it’s a manual exercise, it’s important to set objectives, and to prioritize based on what you’re looking for.   Whether you do your inspections in pairs or in groups or individually, checklists in the form of criteria or inspection questions are helpful.

    Here's the approach in a nutshell:

    • Step 1. Identify security code review objectives. Establish goals and constraints for the review.
    • Step 2. Perform a preliminary scan. Use static analysis to find an initial set of security issues and improve your understanding of where the security issues are most likely to be discovered through further review (a minimal sketch follows below).
    • Step 3. Review the code for security issues. Review the code thoroughly with the goal of finding security issues that are common to many applications. You can use the results of step two to focus your analysis.
    • Step 4. Review for security issues unique to the architecture. Complete a final analysis looking for security issues that relate to the unique architecture of your application. This step is most important if you have implemented a custom security mechanism or any feature designed specifically to mitigate a known security threat.

    For more information on Security Code Inspections, see our patterns & practices Security Code Inspection Index.  For examples of inspection questions, see Security Question List: Managed Code (.NET Framework 2.0) and Security Question List: ASP.NET 2.0.
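
    To make Step 2 concrete, here's a minimal sketch of a preliminary scan.  It's my own illustration, not a patterns & practices tool: a naive, line-based regex pass that flags spots worth a closer manual look.  Real static analysis goes much further; this only shows the spirit of "find where to focus":

        import pathlib
        import re
        import sys

        # Illustrative patterns only -- not a complete inspection list.
        PATTERNS = {
            "possible SQL built by string concatenation":
                re.compile(r'"SELECT .*"\s*\+', re.IGNORECASE),
            "possible hardcoded secret":
                re.compile(r'(password|pwd|secret)\s*=\s*"', re.IGNORECASE),
            "empty catch block":
                re.compile(r'catch\s*\([^)]*\)\s*\{\s*\}'),
        }

        def scan(path):
            for lineno, line in enumerate(
                    path.read_text(errors="ignore").splitlines(), 1):
                for issue, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print("%s:%d: %s" % (path, lineno, issue))

        # Usage: python scan.py <source directory>
        for name in sys.argv[1:]:
            for source_file in pathlib.Path(name).rglob("*.cs"):
                scan(source_file)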

    Security Deployment Inspections
    Deployment Inspections are particularly effective for security because this is where the rubber meets the road.  In a deployment inspection, you walk the various knobs and switches that impact the security profile of your solution.  This is where you check things such as accounts, shares, protocols, … etc. 

    The following server security categories are key when performing a security deployment inspection:

    • Patches and Updates
    • Accounts
    • Auditing and Logging
    • Files and Directories
    • Ports
    • Protocols
    • Registry
    • Services
    • Shares

    For more information, see our patterns & practices Security Deployment Inspection Index.


  • J.D. Meier's Blog

    Performance Inspections

    • 1 Comment

    In this post, I'll focus on design, code, and deployment inspections for performance.  Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections at key stages in your life cycle, such as design, implementation and deployment.

    Keys to Effective Inspections

    • Know what you're looking for.
    • Use scenarios to illustrate a problem.
    • Bound the acceptance criteria with goals and constraints.

    Performance Frame
    The Performance Frame is a set of categories that helps you organize and focus on performance issues.   You can use the frame to organize principles, practices, patterns and anti-patterns.  The categories are also effective for organizing sets of questions to use during inspections.  By using the categories in the frame, you can chunk up your inspections.   The frame is also good for finding low-hanging fruit.    

    Performance Design Inspections
    Performance design inspections focus on the key engineering decisions and strategies.  Basically, these are the decisions that have cascading impact and that you don't want to make up on the fly.  For example, your candidate strategies for caching per user and application-wide data, paging records, and exception management would be good to inspect.  Effective performance design inspections include analyzing the deployment and infrastructure, walking the performance frame, and doing a layer-by-layer analysis.  Question-driven inspections are good because they help surface key risks and they encourage curiosity.

    While there are underlying principles and patterns that you can consider, you need to temper your choices with prototypes, tests and feedback.  Performance decisions are usually trade-offs with other quality attributes, such as security, extensibility, or maintainability.  Performance Modeling helps you make trade-off decisions by focusing on scenarios, goals and constraints. 

    For more information, see Architecture and Design Review of a .NET Application for Performance and Scalability and Performance Modeling.

    Performance Code Inspections
    Performance code inspections focus on evaluating coding techniques and design choices. The goal is to identify potential performance and scalability issues before the code is in production.  The key to effective performance code inspections is to use a profiler to localize and find the hot spots.  The anti-pattern is blindly trying to optimize the code.  Again, a question-driven technique used in conjunction with measuring is key.

    For more information, see Performance Code Inspection.
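
    Here's a minimal sketch of "measure before you optimize," with Python's built-in cProfile standing in for whatever profiler your platform provides.  The hot spot is contrived, but the workflow -- profile first, then read the stats to see where the time actually goes -- is the point:

        import cProfile
        import pstats

        def build_report(rows):
            out = ""
            for row in rows:
                out += "%s\n" % row   # a classic accidental hot spot
            return out

        # Profile the suspect code path, then print the top offenders.
        cProfile.run("build_report(range(50000))", "profile.out")
        pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)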

    Performance Deployment Inspections
    Performance deployment inspections focus on tuning the configuration for your deployment scenario.  To do this, you need to have measurements and runtime data to know where to look.  This includes simulating your deployment environment and workload.  You also need to know the knobs and switches that influence the runtime behavior.  You also need to be bounded by your quality of service requirements so you know when you're done.  Scenarios help you prioritize.


  • J.D. Meier's Blog

    Inspections

    • 3 Comments

    Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.

    Design Inspections
    In a design inspection, you evaluate the key engineering decisions.   This helps avoid expensive do-overs.  Think of inspections as a dry run of the design assumptions.   Here are some practices I’ve found to be effective for design inspections:

    • Use inspections to checkpoint your strategies before going too far down the implementation path.
    • Use inspections to expose the key engineering risks.
    • Use scenarios to keep the inspections grounded.  You can’t evaluate the merits of a design or architecture in a vacuum.
    • Use a whiteboard when you can.  It’s easy to drill into issues, as well as step back as needed.
    • Tease out the relevant end-to-end test cases based on risks you identify.
    • Build pools of strategies (i.e. design patterns) you can share.  It’s likely that for your product line or context, you’ll see recurring issues.
    • Balance user goals, business goals, and technical goals.  The pitfall is to do a purely technical evaluation.  Designs are always trade-offs.

    Code Inspections
    In a code inspection, you focus on the implementation.  Code inspections are particularly effective for finding lower-level issues, as well as balancing trade-offs.  For example, a lot of security issues are implementation level, and they require trade-off decisions.  Here are some practices I’ve found to be effective for code inspections:

    • Use checklists to share the “building codes.”  For example, the .NET Design Guidelines are one set of building codes.  There are also building codes for security, performance ... etc.
    • Use scenarios and objectives to bound and test.  This helps you avoid arbitrary optimization or blindly applying recommendations.
    • Focus the inspection.  I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
    • Pair with an expert in the area you’re inspecting.
    • Build and draw from a pool of idioms (i.e., patterns/anti-patterns).

    Deployment Inspections
    Deployment is where application meets infrastructure.  Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability, and manageability.  Here are some practices I’ve found to be effective for deployment inspections:

    • Use scenarios to help you prioritize.
    • Know the knobs and switches that influence runtime behavior.
    • Use checklists to help build and share expertise.  Knowledge of knobs and switches tends to be low-level and art-like.
    • Focus your inspections.  I’ve found it more productive and effective to do focused inspections.  Think of it as divide and conquer.

    Additional Considerations

    • Set objectives.  Without objectives, it's easy to go all over the board.
    • Keep a repository.  In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point.  Each team then tailors for their specific project.
    • Integrate inspections with your quality assurance efforts for continuous improvement.
    • Identify skill sets you'll need for further drill downs (e.g. detail design, coding, troubleshooting, maintenance.)  If you don't involve the right people, you won't produce effective results.
    • Use inspections as part of your acceptance testing for security and performance.
    • Use checklists as starting points.  Refine and tailor them for your context and specific deliverables.
    • Leverage tools to automate the low-hanging fruit.  Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
    • Tailor your checklists for application types (Web application, Web Service, desktop application, component) and for verticals (manufacturing, financial ... etc.) or project context (Internet-facing, high security, ... etc.)

    In the future, I'll post some more specific techniques for security and performance.

  • J.D. Meier's Blog

    MSF Agile Frame (Workstreams and Key Activities)

    • 1 Comment

    When I review an approach, I find it helpful to distill it to a simple frame so I can get a bird's-eye view.  For MSF Agile, I found the most useful frame to be the workstreams and key activities.  According to MSF, workstreams are simply groups of activities that flow logically together and are usually associated with a particular role.  I couldn't find this view in MSF Agile, so I created one:

    Workstream (Role) – Key Activities

    • Capture Project Vision (Business Analyst) – Write Vision Statement; Define Personas; Refine Personas
    • Create a Quality of Service Requirement (Business Analyst) – Brainstorm Quality of Service Requirements; Develop Lifestyle Snapshot; Prioritize Quality of Service Requirements List; Write Quality of Service Requirements; Identify Security Objectives
    • Create a Scenario (Business Analyst) – Brainstorm Scenarios; Develop Lifestyle Snapshot; Prioritize Scenario List; Write Scenario Description; Storyboard a Scenario
    • Guide Project (Project Manager) – Review Objectives; Assess Progress; Evaluate Test Metric Thresholds; Triage Bugs; Identify Risk
    • Plan an Iteration (Project Manager) – Determine Iteration Length; Estimate Scenario; Estimate Quality of Service Requirements; Schedule Scenario; Schedule Quality of Service Requirement; Schedule Bug Fixing Allotment; Divide Scenarios into Tasks; Divide Quality of Service Requirements into Tasks
    • Guide Iteration (Project Manager) – Monitor Iteration; Mitigate a Risk; Conduct Retrospectives
    • Create a Solution Architecture (Architect) – Partition the System; Determine Interfaces; Develop Threat Model; Develop Performance Model; Create Architectural Prototype; Create Infrastructure Architecture
    • Build a Product (Developer) – Start a Build; Verify a Build; Fix a Build; Accept Build
    • Fix a Bug (Developer) – Reproduce the Bug; Locate the Cause of a Bug; Reassign a Bug; Decide on a Bug Fix Strategy; Code the Fix for a Bug; Create or Update a Unit Test; Perform a Unit Test; Refactor Code; Review Code
    • Implement a Development Task (Developer) – Cost a Development Task; Create or Update a Unit Test; Write Code for a Development Task; Perform Code Analysis; Perform a Unit Test; Refactor Code; Review Code; Integrate Code Changes
    • Close a Bug (Tester) – Verify a Fix; Close the Bug
    • Test a Quality of Service Requirement (Tester) – Define Test Approach; Write Performance Tests; Write Security Tests; Write Stress Tests; Write Load Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    • Test a Scenario (Tester) – Define Test Approach; Write Validation Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    • Release a Product (Release Manager) – Execute a Release Plan; Validate a Release; Create Release Notes; Deploy the Product

  • J.D. Meier's Blog

    How To Share Lessons Learned

    • 2 Comments

    I'm a fan of sharing lessons learned along the way.  One light-weight technique I use with a distributed team is a simple mail of Do's and Don'ts.  At the end of the week, or as needed, I start the mail with a list of do's and don'ts I learned and then ask the team to reply all with their lessons learned.

    Example of a Lessons Learned Mail

    Collaboration

    • Do require daily live syncs to keep the team on the same page and avoid churn in mail.
    • Do reduce the friction to be able to spin up Live Meetings as needed.

    Guidance Engineering

    • Do index product docs to help build categories and to know what's available.
    • Do scenario frames to learn and prioritize the problem space.
    • Do use Scenarios, Questions and Answers, Practices at a Glance, and Guidelines to build and capture knowledge as we go.
    • Do use Scenarios as a scorecard for the problem space.
    • Do use Questions and Answers as a chunked set of focused answers, indexed by questions.
    • Do use Practices at a Glance as a frame for organizing task-based nuggets (how to blah …)
    • Do use Guidelines for recommended practices (do this, don't do this … etc.)
    • Do create the "frame"/categories earlier vs. later.

    Personal Effectiveness

    • Do blog as I go versus over-engineer entries.
    • Do sweep across bodies of information and compile indexes up front versus ad-hoc (for example, compile bloggers, tags, doc indexes, articles, sites … etc.)

    Project Management

    • Don't split the team across areas.  Let the team first cast a wide net to learn the domain, but then focus everybody on the same area for collaboration, review, pairing …etc.

    Tools

    • Do use CodePlex as a channel for building community content projects.

    Guidelines Help Carry Lessons Forward
    While this approach isn't perfect, I found it makes it easier to carry lessons forward, since each lesson is a simple guideline.  I prefer this technique to approaches where there's a lot of dialogue but no results.  I also like it because it's a simple enough forum for everybody to share their ideas and focus on objective learnings versus finger pointing and dwelling.  I also find it easy to go back through my projects and quickly thumb through the lessons learned.

    Do's and Don'ts Make Great Wiki Pages Too
    Note that this approach actually works really well in Wikis too.  That's where I actually started it.  On one project, my team created a lot of lessons learned in a Wiki, where each page was dedicated to something we found useful.  The problem was, it was hard to browse the lessons in a useful way.  It was part rant, part diatribe, with some ideas on improvements scattered here or there.  We then decided to name each page as a Do or Don't and suddenly we had a Wiki of valuable lessons we could act on.

  • J.D. Meier's Blog

    Quick and Dirty Getting Things Done

    • 4 Comments

    If you're backlogged and you want to get out, here's a quick, low-tech, brute-force approach.  On your whiteboard, first write your key backlog items.  Next to them, write To Do.  Under To Do, write the three most valuable things you'll complete today.  Not tomorrow or in the future, but what you'll actually get done today.  Don't bite off more than you can chew.  Bite off enough to feel good about what you accomplished when the day is done.

    If you don't have a whiteboard, substitute a sheet of paper.  The point is to keep it visible and simple.  Each day this week, grab a new set of three.  When you nail the three, grab more.  Again, only bite off as much as you can chew for the day.  At the end of the week, you'll feel good about what you got done.

    This is a technique I've seen work for many colleagues and it's stood the test of time.  There are a few reasons why this tends to work:

    • Whiteboards make it easy to step back, yet keep focus.
    • You only bite off a chunk at a time, so you don't feel swamped.
    • As you get things done, you build momentum.
    • You have constant visual feedback of your progress.
    • Unimportant things slough off.


  • J.D. Meier's Blog

    VSTS Guidance Projects Roundup

    • 5 Comments

    Here's a quick rundown of our patterns & practices VSTS-related guidance projects.   It's a combination of online knowledge bases, guides, video-based guidance, and a community Wiki for public participation.  We're using CodePlex for agile release, before baking into MSDN for the longer term.

    Guides

    Knowledge Bases

    • patterns & practices Performance Testing Guidance Wiki - This project is focused on creating an online knowledge base of how-tos, guidelines, and practices for performance testing, including performance testing using Visual Studio Team System. It's a collaborative effort between industry experts, Microsoft ACE, patterns & practices, Premier, VSTS team members, and customers.
    • patterns & practices Visual Studio Team System Guidance Wiki - This project is focused on creating an online knowledge base of how-tos, guidelines, and practices for Microsoft Visual Studio Team System. It's a collaborative effort between patterns & practices, Team System team members, industry experts, and customers.

    Video-Based Guidance

    Community Wiki

    Note that we're busy wrapping up the guides.  Once the guides are complete, we'll do a refresh of the online knowledge bases.  We'll also push some updated modules to Guidance Explorer.


  • J.D. Meier's Blog

    Get Lean, Eliminate Waste

    • 4 Comments

    If you want to tune your software engineering, take a look at Lean.  Lean is a great discipline with a rich history and proven practices to draw from.  James has a good post on applying Lean principles to software engineering.  I think he summarizes a key concept very well:

    "You let quality drive your speed by building in quality up front and with increased speed and quality comes lower cost and easier maintenance of the product moving forward."

    7 Key Principles in Lean
    James writes about 7 key principles in Lean:

    1. Eliminate waste.
    2. Focus on learning.
    3. Build quality in.
    4. Defer commitment.
    5. Deliver fast.
    6. Respect people.
    7. Optimize the whole.

    Example of Deferring Commitment
    I think the trick with any principles is knowing when to use them and how to apply them in context.  James gives an example of how Toyota defers commitment until the last possible moment:

    "Another key idea in Toyota's Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others - a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design."

    Examples in Software Engineering
    From a software perspective, what I've seen teams do is prototype multiple solutions to a problem and then pick the best fit.  The anti-pattern that I've seen is committing to one path too early without putting other options on the table.
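
    Here's a tiny sketch of that prototype-and-compare approach.  The candidate implementations are made up, and Python's timeit is just a convenient way to measure; the point is committing only after the data is in:

        import timeit

        # Two candidate implementations of the same task.
        def concat_naive(rows):
            out = ""
            for r in rows:
                out += str(r)
            return out

        def concat_join(rows):
            return "".join(str(r) for r in rows)

        # Measure both, then pick the best fit -- deferring commitment
        # until there's data instead of committing to one path up front.
        for candidate in (concat_naive, concat_join):
            seconds = timeit.timeit(lambda: candidate(range(1000)), number=200)
            print(candidate.__name__, round(seconds, 4))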

    A Lean Way of Life
    How can you use Lean principles in your software development effort?  ... your organization?  ... your life?

    More Information

  • J.D. Meier's Blog

    Clearing Your Inbox

    • 9 Comments

    Today I helped a colleague clear their inbox.  I've kept a zero mail inbox for a few years.  I forgot this wasn't common practice until a colleague said to me, "wow, your inbox doesn't scroll."

    I didn't learn the zen of the zero mail inbox overnight.  As pathetic as this sounds, I've actually compared email practices over the years with several people to find some of the best practices that work over time.  The last thing I wanted to do was waste time in email if there were better ways.  Some of my early managers also instilled in me that to be effective, I needed to master the basics.  To put it another way, don't let administration get in the way of results.

    Key Steps for a Clear Inbox
    My overall approach is to turn actions into next steps and keep stuff I've seen out of the way of my incoming mail.  Here are the key steps:

    1. Filter out everything that's not directly to you.  To do so, create an inbox rule to remove everything that's not directly To or CC you.  As an exception, I do let my immediate team aliases fall through.
    2. Create a folder for everything that's read.  I have a folder to move everything I read and act on.  This is how I make way for incoming.
    3. Create a list for your actions.  Having a separate list means you can list the actions in the sequence that makes sense for you, versus let the sequence in your inbox drive you.

    Part of the key is acting on mail versus shuffling it.  For a given mail, if I can act on it immediately, I do.  If now's not the time, I add it to my list of actions.  If it will take a bit of time, then I drag it to my calendar and schedule the time.
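
    That act / schedule / list decision is simple enough to write down as triage logic.  This is only an illustration of the flow; the Message type and the minute thresholds are made up:

        from dataclasses import dataclass

        @dataclass
        class Message:
            subject: str
            minutes_to_handle: int  # rough estimate when you read it

        actions = []    # the separate list of actions
        calendar = []   # time blocks scheduled on the calendar

        def triage(msg):
            if msg.minutes_to_handle <= 2:
                print("act now:", msg.subject)   # handle it immediately
            elif msg.minutes_to_handle >= 30:
                calendar.append(msg.subject)     # drag it to the calendar
            else:
                actions.append(msg.subject)      # add it to the action list

        for m in (Message("quick reply", 1),
                  Message("review guide draft", 60),
                  Message("update the wiki", 10)):
            triage(m)
        print("action list:", actions)
        print("calendar:", calendar)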

    Anti-Patterns
    I think it's important to note the anti-patterns:

    1. Using your inbox as a large collection of action and semi-action items with varying priorities
    2. Using your inbox as a pool of interspersed action and reference items
    3. Adopting complicated mail and task management systems

    My Related Posts

    1. Scannable Outcome Lists
    2. Using Scannable Outcomes with My Results Approach
  • J.D. Meier's Blog

    How To Do Tasks More Efficiently

    • 2 Comments

    Here's my short-list of techniques I use for improving efficiency on a given task:

    • Increase the frequency.  If I'm not efficient at something and I need to be, I start doing it more.  A lot more.  Frequency helps me get over resistance.  I also get more chances to learn little things each time that help me improve.   
    • Reduce friction.  This is important and goes hand in hand with increasing the frequency.  When I do something more, I can quickly find the friction points.  For example, I was finding that pictures were piling up on my camera.  The problem was I needed my camera's cradle to transfer my pics.  When I got my new camera, I could transfer pics through the memory disk without the cradle and the friction was gone.  It was a world of difference.  I pay attention to friction points now in all the recurring tasks I need to do.
    • Model the best.  If I look around, I can usually find somebody who's doing what I want to do, better than I'm doing it.  I learn from them.  For example, when I was doing an improvement sprint on making videos, I learned from Jason Taylor, Alik Levin, and Alex Mackman, since they were all doing videos for some time and had lessons to share.
    • Batch the tasks.  There are two ways I batch tasks.  First, I gather enough so that when I do them, I'll learn in a batch.  Second, I look for ways to split the work and to batch the workstreams.  For example, when I was working on an improvement sprint for speech-to-text, I made very little progress if I tried to dictate and edit.  I made much more progress when I dictated in batch, and then edited in batch.  It was a simple shift in strategy, but it made a world of difference.

    While each technique is useful, I find I improve faster when I'm using them together.  It's synergy in action, where the whole is better than the sum of the parts.


  • J.D. Meier's Blog

    Timebox Your Day

    • 5 Comments

    Grigori Melnik joined our team recently.  He's new to Microsoft so I shared some tips for effectiveness.  Potentially, the most important advice I gave him was to timebox his day.  If you keep time a constant (by ending your day at a certain time), it helps with a lot of things:

    • Worklife balance (days can chew into nights can chew into weekends)
    • Figuring out where to optimize your day
    • Prioritizing (time is a great forcing function)

    To start, I think it helps to carve up your day into big buckets (e.g. administration, work time, think time, connect time), and then figure out how much time you're willing to give them.  If you're not getting the throughput you want, you can ask yourself:

    • Are you working on the right things?
    • Are you spending too much time on lesser things?
    • Are there some things you can do more efficiently or effectively?

    To make the point hit home, I pointed out that without a timebox, you can easily spend all day reading mails, blogs, aliases, doing self-training, ... etc. and then wonder where your day went.  Microsoft is a technical playground with lots of potential distractions for curious minds that want to grow.  Using timeboxes helps strike balance.  Timeboxes also help with pacing.  If I only have so many hours to produce results, I'm very careful to spend my high energy hours on the right things.


  • J.D. Meier's Blog

    How To Research Efficiently

    • 7 Comments

    Building guidance takes a lot of research.  Over the years, I've learned how to do this faster and easier.  One of the most important things I do is set up my folders (whether in the file system or in Groove).

    Initial Folders

    /Project X
    	/Drafts
    	/Research
    	/Reference
    
    Folders Over Time
    Over time, this ends up looking more like:

    /Project X
    	/Builds
    		/2007_05_26
    		/2007_05_27
    	/Drafts
    	/Reference
    		/Articles
    		/Blogs
    		/Bugs
    		/CaseStudies
    		/Docs
    		/Slides
    		/Source X
    		/Source Y
    		/Source Z
    	/Research
    		/Braindumps
    		/DataPoints
    		/QuestionsLists
    		/Topic X
    		/Topic Y
    		/Topic Z
    	/Tests
    		/Tests X
    		/Tests Y
    		/Tests Z
    	/Whiteboards
    		/Topic X
    		/Topic Y
    		/Topic Z
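
    If you'd rather script this setup than create the folders by hand each time, here's a minimal sketch in Python.  The scaffold_research_folders helper and the folder list are just illustrations of the layout above, not part of any of our guides; swap in your own project name, sources, and topics.

    import os

    # Subfolders for the layout shown above; rename to fit your project.
    SUBFOLDERS = [
        "Builds",
        "Drafts",
        "Reference/Articles",
        "Reference/Blogs",
        "Reference/CaseStudies",
        "Research/Braindumps",
        "Research/DataPoints",
        "Research/QuestionsLists",
        "Tests",
        "Whiteboards",
    ]

    def scaffold_research_folders(root="Project X"):
        # Create each folder (and any missing parents);
        # folders that already exist are left alone.
        for sub in SUBFOLDERS:
            os.makedirs(os.path.join(root, sub), exist_ok=True)

    if __name__ == "__main__":
        scaffold_research_folders()

    Dated build folders (like the 2007_05_26 example above) are easy to generate at run time with time.strftime("%Y_%m_%d").
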
    Key Points

    • Factor reference from research.  Reference is material I pull in from various sources, such as slides, blogs, and articles.  Research holds the notes and docs I create.  This way I avoid mixing information I create with information I reference.  Having a dedicated place for reference material helps when I'm hunting and gathering resources in batch mode, and it saves me time when I have to go back and figure out where information came from.
    • Factor stages of information.  In my basic workflow, I move information from research to drafts to builds (where builds are the finished guides).  Keeping them separate makes it easy to know the current state of the information, and it gives me a safe place to refactor and make changes.  Research is effectively my sandbox to create documents and organize my notes as I see fit.  Drafts is where I decide what to share and how.  Builds is where I produce a shareable set of information.
    • Have a place for whiteboard captures.  Whiteboards is where I dump pics from whiteboarding sessions.  I'm a fan of doing braindumps at the whiteboard and then quickly filing the results somewhere I can reference them.  If it's just text, I write it down.  If it's visual, I take a pic and file it.

    I use this approach whether I'm doing personal learning or building 1200+ page guides.  It helps me spend more time researching and less time figuring out where to put the information.


  • J.D. Meier's Blog

    Performance Testing Guide Beta 1 is Available

    • 6 Comments

    Today we released Beta 1 of our Performance Testing Guidance for Web Applications guide.  It shows you an end-to-end approach for implementing performance testing, based on lessons learned from applied use in customer scenarios.  Whether you're new to performance testing or looking for ways to improve your current approach, you'll find insights you can use.

    Contents at a Glance

    • Part I, Introduction to Performance Testing
    • Part II, Exemplar Performance Testing Approaches
    • Part III, Identify the Test Environment
    • Part IV, Identify Performance Acceptance Criteria
    • Part V, Plan and Design Tests
    • Part VI, Execute Tests
    • Part VII, Analyze Results and Report
    • Part VIII, Performance Testing Techniques


    Chapters

    • Introduction
    • Ch 01 - Fundamentals of Web Application Performance Testing
    • Ch 02 - Types of Performance Testing
    • Ch 03 - Risks Performance Testing Addresses
    • Ch 04 - Core Activities
    • Ch 05 - Managing an Agile Performance Test Cycle
    • Ch 06 - Coordinate Performance Testing with an Iteration-Based Process
    • Ch 07 - Managing the Performance Testing Cycle in a CMMI Environment
    • Ch 08 - Evaluating Systems to Improve Performance Testing
    • Ch 09 - Performance Testing Objectives
    • Ch 10 - Quantifying End User Response Time Goals
    • Ch 11 - Consolidate Various Types of Performance Acceptance Criteria
    • Ch 12 - Modeling Application Usage
    • Ch 13 - Modeling User Variances
    • Ch 16 - Test Execution
    • Ch 17 - Performance Testing Math
    • Ch 18 - Reporting Fundamentals
    • Ch 19 - Load Testing Web Applications
    • Ch 20 - Stress Testing Web Applications

    About Our Team

    • Carlos Farre - Carlos is our performance and security specialist in patterns & practices.  He helps make sure our patterns & practices code follows our performance and security guidance.
    • Prashant Bansode - When Prashant's on a project, you can be sure he's scrutinizing the technical accuracy and improving the customer focus.  This is the same Prashant from Guidance Explorer, Security Guidance, and VSTS Guidance.
    • Scott Barber - Scott brings his many years of performance testing experience to the table.  If you do performance testing for a living, you probably know his name, his articles, and the trails he's blazed.  Scott's worked with us previously on Improving .NET Application Performance and Scalability.
    • Dennis Rea - Dennis brings his years of editorial experience to the table.  He worked with us previously on our Security Guidance.
  • J.D. Meier's Blog

    TFS Guide Beta 1 is Available

    • 20 Comments

    Today we released Beta 1 of our Team Development with Visual Studio Team Foundation Server guide.  It's our Microsoft playbook for TFS, showing you how to make the most of Team Foundation Server.  It's a distillation of many lessons learned, and a collaborative effort among product team members, the field, industry experts, MVPs, and customers.

    Contents at a Glance

    • Part I, Fundamentals
    • Part II, Source Control
    • Part III, Builds
    • Part IV, Large Project Considerations
    • Part V, Project Management
    • Part VI, Process Guidance
    • Part VII, Reporting
    • Part VIII, Setting Up and Maintaining the Team Environment


    Chapters

    • Introduction
    • Ch 01 - Introducing the Team Environment
    • Ch 02 - Team Foundation Server Architecture
    • Ch 03 - Structuring Projects and Solutions
    • Ch 04 - Structuring Projects and Solutions in Team Foundation Server
    • Ch 05 - Defining Your Branching and Merging Strategy
    • Ch 06 - Managing Source Control Dependencies in Visual Studio Team System
    • Ch 07 - Team Build Explained
    • Ch 08 - Setting Up Continuous Integration with Team Build
    • Ch 09 - Setting Up Scheduled Builds with Team Build
    • Ch 10 - Large Project Considerations
    • Ch 11 - Project Management Explained
    • Ch 12 - Work Items Explained
    • Ch 13 - MSF Agile Projects
    • Ch 14 - Process Templates Explained
    • Ch 15 - Reporting Explained
    • Ch 16 - Team Foundation Server Deployment
    • Ch 17 - Providing Internet Access to Team Foundation Server

    About Our Team

    • Prashant Bansode - Prashant's an experienced guidance builder and a master of execution.  He's a solid pillar on the team.
    • Jason Taylor - Jason's a master of results.  I've worked with Jason across a few projects.  He always hits the ground running and accelerates from there.
    • Alex Mackman - I worked with Alex on Building Secure ASP.NET Applications and Improving .NET Application Performance and Scalability, so it's great to have him back.
    • Kevin Jones - Kevin is new to our team, but getting up to speed fast.  He brings a wealth of Visual Studio Team System experience to the table.


    Contributors and Reviewers
    Here are our contributors and reviewers so far:

    • Microsoft: Ajay Sudan; Ajoy Krishnamoorthy; Alan Ridlehoover; Alik Levin; Bijan Javidi; Buck Hodges; Burt Harris; Doug Neumann; Edward Jezierski; Eric Charran; Graham Barry; Jeff Beehler; Julie MacAller; Ken Perilman; Mario Rodriguez; Marc Kuperstein; Matthew Mitrik; Michael Puleio; Nobuyuki Akama; Paul Goring; Pete Coupland; Peter Provost; Rob Caron; Robert Horvick; Rohit Sharma; Sajee Mathew; Siddharth Bhatia; Tom Hollander; Venky Veeraraghavan
    • External: David P. Romig, Sr; Eric Blanchet; Leon Langleyben; Martin Woodward; Quang Tran; Sarit Tamir; Tushar More; Vaughn Hughes; Michael Rummier
