J.D. Meier's Blog

Software Engineering, Project Management, and Effectiveness

  • J.D. Meier's Blog

    Testing Your Organizational Clarity


    How do you figure out what your organization is really about?  It's one thing to know it intuitively.  It's another to be able to share it or have meaningful dialogue.  Here are the tests I use lately to know what a team or org is really about:

    Tests for Organizational Clarity

    • Vision / Mission: What's the one-liner vision and mission?
    • Customers: Who's the customer?
    • Problems: What domains or problems does it focus on?
    • Business Case: What's the internal and external business case?
    • Measures of Success: What are the measures of success?
    • Catalog / Product Line: What's the product line / deliverables / results?
    • Rhythm of Results: What's the product cycle or rhythm of results?

    If you know these, it tells you a lot. 

    Why This Cuts to the Chase
    Here's a quick rundown on how you can use this:

    • Mission is who you are and vision is where you want to go.  An important attribute in the mission is some unique value or differentiator.  If everybody knows the vision and mission, they can run to the same finish line.  For me, I like one-liner visions and missions that you can spit out in the hall without a cue card.
    • In a large shop like Microsoft, knowing your customers makes a big difference.  For me, while I serve a lot of customers, my main focus is developers.  While it's partly a chicken-and-egg deal, knowing your customers helps you clarify which domains or problems you tackle.  When in doubt, ask your customers!
    • Owning important problems is key.  I think you can measure the value of an org by measuring the value of the problems it solves.
    • Business cases are powerful because they justify existence and investment.  I think looking through two lenses helps -- what do your customers see as the business case? (why you versus some other group or service or product) ... and what does your company see as the business case? (what's the unique value to keep this group around and invest in it)  One important piece of a business case is knowing how big the pie is and what your slice is.
    • Measuring success is important.  Everybody wants to do a good job and know what it is.  Knowing what gets measured tells you what's valued.
    • Knowing the deliverables in the form of a catalog or product line tells a lot about an org or team.  I like to think in terms of portfolios of results.  When I talk to teams about what they do, finding out what they deliver really cuts to the chase.
    • The rhythm of results is important.  This tells you a lot about the cadence, work styles, value of time, ... etc.

    There's obviously a lot more you can know about an org or team, but I'm finding these are keys to cut to the chase.

  • J.D. Meier's Blog

    How To Be a Leader in Your Field


    How do you become a leader in your field?  Dragos shared a link to How to Be a Leader, which I found interesting.  In the article, Philip E. Agre presents a six-step recipe for becoming a leader in your field:

    1. Pick an issue.
    2. Having chosen your issue, start a project to study it.
    3. Find relevant people and talk to them.
    4. Pull together what you've heard.
    5. Circulate the result.
    6. Build on your work.

    I think the takeaway was that to be a leader in the field, you help move the ball forward.  In step 1, Philip gives an A-Z list of how to pick which ball to move forward. 

    Another part of the article caught my attention.  Philip writes:

    "To succeed in your career, you need more than the skills that you got in school -- you need to be the world expert in something. Knowledge is global, it's growing exponentially, and nobody can pack all of the necessary knowledge into their head. So everyone's going to specialize."

    I think there's a lot to be said for focus and specialization.  The trick is picking what to specialize in.  Personally, I like to specialize in skills that compound over time versus flavor of the day.

  • J.D. Meier's Blog

    How Might That Be True?


    It's obvious in retrospect, but I found a distinction between low-friction communication and high-friction communication.  By low-friction, I mean *person A* doesn't have to work that hard for *person B* to get a point.

    I find low friction scenarios are often cases where *person B* starts with the mind-set "how might that be true" and they help *person A* tease out, or make their point.  The starting point is collaboration -- two people working to understand the message.  I find high-friction scenarios are often cases where *person B* starts with the mind-set "let me tell you how you're wrong." 

    It's really easy among a bunch of engineers to rip ideas apart.  The trick I found is to first ask, "how might that be true?"  This gets over the potential hump that maybe while the delivery was off, there was merit in the message (or a concept needs help to be teased out) and it certainly builds more rapport than starting off as a devil's advocate.

  • J.D. Meier's Blog

    How To Use the Six Thinking Hats


    How do you get past deadlocks in a meeting?  You can apply the Six Thinking Hats.  I've blogged about the Six Thinking Hats before, but to summarize, it's a way to get everybody thinking about the problem in a collaborative way. 

    The Keys to The Six Thinking Hats
    The real key here is that rather than circular or deadlocked debates, you focus the group on one particular viewpoint at a time.  This is similar to writing, then editing, vs. editing while you write, or brainstorming, then critiquing, vs. critiquing while you brainstorm.  The big difference is that rather than just brainstorming and critiquing, you're looking at the issue from multiple, specific angles.  On the people side of this technique, you're letting people wear a different "hat" in a safe, constructive way.

    Applying the Six Thinking Hats 
    The approach below is lightweight and low-overhead, but gets you 80% there without requiring everybody to know the details of the Six Thinking Hats.

    Summary of Steps

    • Step 1.  List the questions that represent the hats
    • Step 2.  Walk through each question as a team
    • Step 3.  Modify the approach

    Step 1.  List the questions that represent the hats
    List a set of questions on the whiteboard to represent the hats.  You can do this either at the start of the meeting or when you hit a sticking spot.
    Here's the Six Thinking Hats:

    • White Hat - the facts and figures
    • Red Hat - the emotional view
    • Black Hat - the "devil's advocate"
    • Yellow Hat - the positive side
    • Green Hat - the creative side
    • Blue Hat - the organizing view

    Here's an example set of questions you can use to represent the hats:

    • What are the facts and figures?
    • What's your gut reaction?  How do you feel about this?
    • Why can't we do this?  What prevents us?  What's the downside?
    • How can we do this?
    • What are additional opportunities?
    • How should we think about this? (what are the metaphors or mental models)

    The sequence of the questions can matter.  For example, it wouldn't make sense to start thinking up solutions before you've focused on the problem.

    Step 2.  Walk through each question as a team
    Walk through each question as a team.  This is the key.  Rather than debating each other, you're now collaborating.  You'll be surprised when suddenly your team's "Devil's Advocate" is showing off their ability to dream up wild solutions that just might work!

    Step 3.  Modify the approach.
    If it's not working, change the approach.  For example, you might find that you started with the wrong "hat" or question.  See if switching to another question or hat makes a difference.  The key is to keep this lightweight but effective.

    This isn't a heavy-handed approach.  Instead, it's a subtle shift in strategy from free-for-all debate to focusing and coordinating your team's thinking power in a deliberate way.  This lets everybody get heard, as well as really bang on a problem from multiple angles, in a teamwork sort of way.
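For teams that like something concrete to run with, the hat-by-hat walkthrough can be sketched as a tiny facilitation script.  This is a hypothetical illustration -- the hat/question pairings come from the lists in Step 1, while the function names and data shapes are my own:

```python
# Minimal sketch of a Six Thinking Hats walkthrough (illustrative only).
# The hat/question pairs mirror the lists in Step 1; the facilitation
# loop and data shapes are assumptions.

HATS = [
    ("White Hat",  "What are the facts and figures?"),
    ("Red Hat",    "What's your gut reaction?  How do you feel about this?"),
    ("Black Hat",  "Why can't we do this?  What prevents us?  What's the downside?"),
    ("Yellow Hat", "How can we do this?"),
    ("Green Hat",  "What are additional opportunities?"),
    ("Blue Hat",   "How should we think about this?"),
]

def walkthrough(topic, answers_by_hat):
    """Walk the team through each hat's question in sequence (Step 2),
    filing the team's answers under the hat that produced them."""
    notes = {}
    for hat, question in HATS:
        notes[hat] = {
            "question": question,
            "answers": answers_by_hat.get(hat, []),
        }
    return {"topic": topic, "notes": notes}

# One pass over a sample sticking point:
result = walkthrough(
    "Should we ship this quarter?",
    {"White Hat": ["Two open bugs"], "Black Hat": ["Test pass isn't done"]},
)
for hat, entry in result["notes"].items():
    print(hat, "->", entry["question"])
```

The point of the sequence is the same as on the whiteboard: everybody answers the same question at the same time, one viewpoint at a time.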


  • J.D. Meier's Blog

    Mark Curphey Joins Microsoft


    Well, it's official ... software security is in for a ride! -- Mark Curphey joins Microsoft.  Mark already brings a ton to the table in terms of ideas, network, and results.  At Microsoft, he'll crank it up.  Congrats, Mark -- I look forward to our future adventures!

  • J.D. Meier's Blog

    Scenario Frames for Team Foundation Server


    Our Scenario Frames for Team Foundation Server are available on CodePlex.

    We use scenario frames for several things:

    1. Mapping out the problem space
    2. Performing scenario evaluations to evaluate platform, tools, and guidance
    3. Designing products
    4. Scoping work

    The real power of a scenario frame is that it's a shared frame of reference.  Personally, because I've seen so much benefit from scenario frames time and again, I couldn't imagine doing guidance or building a product without using them.


  • J.D. Meier's Blog

    New Release: patterns & practices Performance Testing Guidance for Web Applications


    We released the final version of our patterns & practices Performance Testing Guidance for Web Applications.  This guide provides an end-to-end approach for implementing performance testing. Whether you're new to performance testing or looking for ways to improve your current performance-testing approach, you will gain insights that you can tailor to your specific scenarios.  The main purpose of the guide is to be a relatively stable backdrop to capture, consolidate and share a methodology for performance testing.  Even though the topics addressed apply to other types of applications, we focused on explaining from a Web application perspective to maintain consistency and to be relevant to the majority of our anticipated readers.

    Key Changes Since Beta 1

    • Added forewords by Alberto Savoia and Rico Mariani.
    • Integrated more feedback and insights from customer reviews (particularly chapters 1-4, 9, 14, 18)
    • Integrated learnings from our Engineering Excellence team.
    • Refactored and revamped the performance testing types.
    • Revamped and improved the test execution chapter.
    • Revamped and improved the reporting chapter.
    • Revamped the stress testing chapter.
    • Released the guide in HTML pages on our CodePlex Wiki.


    • Learn the core activities of performance testing.
    • Learn the values and benefits associated with each type of performance testing.
    • Learn how to map performance testing to agile.
    • Learn how to map performance testing to CMMI.
    • Learn how to identify and capture performance requirements and testing objectives based on the perspectives of system users, business owners of the system, and the project team, in addition to compliance expectations and technological considerations.
    • Learn how to apply principles of effective reporting to performance test data.
    • Learn how to construct realistic workload models for Web applications based on expectations, documentation, observation, log files, and other data available prior to the release of the application to production.

    Why We Wrote the Guide

    • To consolidate real-world lessons learned around performance testing.
    • To present a roadmap for end-to-end performance testing.
    • To narrow the gap between state of the art and state of the practice.


    • Managing and conducting performance testing in both dynamic (e.g., Agile) and structured (e.g., CMMI) environments.
    • Performance testing, including load testing, stress testing, and other types of performance related testing.
    • Core activities of performance testing: identifying objectives, designing tests, executing tests, analyzing results, and reporting.

    Features of the Guide

    • Approach for performance testing.  The guide provides an approach that organizes performance testing into logical units to help you incrementally adopt performance testing throughout your application life cycle.
    • Principles and practices.  These serve as the foundation for the guide and provide a stable basis for recommendations. They also reflect successful approaches used in the field.
    • Processes and methodologies.  These provide steps for managing and conducting performance testing. For simplification and tangible results, they are broken down into activities with inputs, outputs, and steps. You can use the steps as a baseline or to help you evolve your own process.
    • Life cycle approach.  The guide provides end-to-end guidance on managing performance testing throughout your application life cycle, to reduce risk and lower total cost of ownership (TCO).
    • Modular.  Each chapter within the guide is designed to be read independently. You do not need to read the guide from beginning to end to benefit from it. Use the parts you need.
    • Holistic.  The guide is designed with the end in mind. If you do read the guide from beginning to end, it is organized to fit together in a comprehensive way. The guide, in its entirety, is better than the sum of its parts.
    • Subject matter expertise.  The guide exposes insight from various experts throughout Microsoft and from customers in the field.


    • Part I, Introduction to Performance Testing
    • Part II, Exemplar Performance Testing Approaches
    • Part III, Identify the Test Environment
    • Part IV, Identify Performance Acceptance Criteria
    • Part V, Plan and Design Tests
    • Part VI, Execute Tests
    • Part VII, Analyze Results and Report
    • Part VIII, Performance-Testing Techniques


    • Chapter 1 – Fundamentals of Web Application Performance Testing
    • Chapter 2 – Types of Performance Testing
    • Chapter 3 – Risks Addressed Through Performance Testing
    • Chapter 4 – Web Application Performance Testing Core Activities
    • Chapter 5 – Coordinating Performance Testing with an Iteration-Based Process
    • Chapter 6 – Managing an Agile Performance Test Cycle
    • Chapter 7 – Managing the Performance Test Cycle in a Regulated (CMMI) Environment
    • Chapter 8 – Evaluating Systems to Increase Performance-Testing Effectiveness
    • Chapter 9 – Determining Performance Testing Objectives
    • Chapter 10 – Quantifying End-User Response Time Goals
    • Chapter 11 – Consolidating Various Types of Performance Acceptance Criteria
    • Chapter 12 – Modeling Application Usage
    • Chapter 13 – Determining Individual User Data and Variances
    • Chapter 14 – Test Execution
    • Chapter 15 – Key Mathematic Principles for Performance Testers
    • Chapter 16 – Performance Test Reporting Fundamentals
    • Chapter 17 – Load-Testing Web Applications
    • Chapter 18 – Stress-Testing Web Applications

    Our Team

    Contributors and Reviewers

    • External Contributors and Reviewers: Alberto Savoia; Ben Simo; Cem Kaner; Chris Loosley; Corey Goldberg; Dawn Haynes; Derek Mead; Karen N. Johnson; Mike Bonar; Pradeep Soundararajan; Richard Leeke; Roland Stens; Ross Collard; Steven Woody
    • Microsoft Contributors / Reviewers: Alan Ridlehoover; Clint Huffman; Edmund Wong; Ken Perilman; Larry Brader; Mark Tomlinson; Paul Williams; Pete Coupland; Rico Mariani


  • J.D. Meier's Blog

    Performance Threats


    Rico and I have long talked about performance threats.  I finally created a view that shows how you can think of performance issues in terms of vulnerabilities, threats, and countermeasures.  See Performance Frame v2.

    In this case, the vulnerabilities, threats and countermeasures are purely from a technical design standpoint.  To rationalize performance against other quality attributes and against goals and constraints, you can use performance modeling and threat modeling.  To put it another way, evaluate your design trade-offs against the acceptance criteria for your usage scenarios, considering your user, system, and business goals and constraints.

  • J.D. Meier's Blog

    Blog Improvements


    I did a few things to try to improve browsing and findability.

    I was surprised by how many of my posts related to productivity.  Then again, I focus heavily on productivity with my mentees.  I think personal productivity is an important tool for turning their great ideas, hopes, and dreams into results.  If it's not already their strength, I want to make sure it's at least not a liability. 

    On my Book Share blog, I changed themes, reorganized key features, and created a best of list.  While it may sound simple here, I actually went through quite a bit of trial and error.  I tested many, many user experience patterns and relied heavily on feedback from a trusted set of reviewers.  Although I used a satisficing strategy, I did try to make browsing the content as efficient and effective as possible.  I was surprised by how many subtle patterns and practices there are for blog layouts.  Maybe more surprising was how many anti-patterns there are.

  • J.D. Meier's Blog

    Cutting Questions


    How do you cut to the chase?  How do you clear the air of ambiguity and get to facts?  Ask cutting questions.

    My manager, Per, doesn't ask a lot of questions.  He asks the right ones.  Here are some examples:

    • Who's on board?  Who are five customers that stand behind you?
    • Next steps?
    • What does your gut say?
    • Is it working?  Is it effective?
    • What would "x" say? (for example, what would your peers say?)
    • What's their story?
    • Where's your prioritized list of scenarios?

    As simple as it sounds, having five separate customers stand behind you is a start.  I'm in the habit of litmus checking my path early on to see who's on board or to find the resistance.  As customers get on board, my confidence goes up.  I've also seen this cutting question work well with startups.  I've asked a few startups about their five customers.  Some had great ideas, but no customers on board.  The ones that had at least five are still around. 

    At the end of any meeting, Per never fails to ask "next steps?", and the meeting quickly shifts from talk to action.

    "Is it working?" is a pretty cutting question.  It's great because it forces you to step back and reflect on your results and consider a change in approach.

  • J.D. Meier's Blog

    Vision, Mission, Values


    There's a lot to be said for well-crafted vision and mission statements.   I've been researching and leaving a trail at The Bookshare.

    In a Nutshell

    • Mission - who are you? what do you do?
    • Vision - where do you want to go?
    • Values - what do you value? what's important? (your corporate culture)

    How Do You Craft Them

    1. You start by figuring out the values.  You figure out the values by observing how your organization prioritizes and how they spend their time.  There can be a gap between what folks say they value and what they actually do.  Actions speak louder than words.
    2. Once you know your culture and values, you can figure out your mission -- who you are and what you do.  What unique value does your organization bring to the table?  What is your unique strength?  In a world of survival of the fittest, this is important to know and to leverage.
    3. Now that you know who you are, you can figure out where you want to go.

    A good vision statement is a one-liner you can repeat in the halls.  Nobody has to memorize it.  It's easy to say and it's easy to grok.  The same goes for a mission statement.  You might need to add another line or two to your mission statement to disambiguate, but if folks don't quickly get what you do from your mission statement -- it's not working.

    How Do You Use Them

    • Use a mission statement to quickly tell others what you do.
    • Use a vision statement to inspire and rally the team.  It should be on the horizon, but achievable and believable.
    • Use a mission statement as a gauge for success. 
    • Set goals and objectives that tell you whether you're accomplishing your mission and moving toward or away from your vision.
    • Use your mission to remind you what you do (and what you don't) and to help you prioritize.
    • Craft a personal mission and vision statement to help you get clarity on what you want to accomplish.
    • Use your personal vision and mission statements to help you stay on your horse, or get back on, when you get knocked down, or lose your way.

    I'm a fan of using reference examples (lots of them) to get a sense of what works and what doesn't.  The Man on a Mission blog is dedicated to mission statements and has plenty of real-life examples to walk through. 

  • J.D. Meier's Blog

    Daily Syncs


    On my teams we do a daily sync meeting.  It's 10 minutes max.  We go around the team with three questions:

    1. What did you get done?
    2. What are you getting done next?
    3. Where do you need help?

    We stay out of details (that's for offline and follow-up).  It's a status meeting focused more on accomplishments and progress than on reporting activities (lots of folks are doing lots of things, so it's crisper to focus on accomplishments).  The more distributed the team, the more important the meeting.

    Keys to Results

    • 10-Minute Timebox.  The 10-minute bar is apparently a big factor in how folks view the meeting, based on feedback from folks who have been in longer meetings (1/2 hour or more).  The 10-minute max is key because it keeps a fast pace and energy high (vs. another meeting of blah, blah, blah).  We can always finish earlier (in fact, one of my teams was regularly finishing the meeting in under 2 minutes for a 5-person team).
    • Daily.  Daily is important.  Having them daily means everybody can structure their day consistently.  Daily also means it's easy to build a routine and reduce friction points.  It also means that team members have a reliable forum for getting help when needed.

    The best pattern that has worked over time is ...

    • Mondays - we define the most important outcomes for the week (the few big things that matter, no laundry lists).  This is actually closer to a 1/2 hour (max) meeting.
    • Daily - we do a daily checkpoint meeting. (this is about execution, bottlenecks, and awareness)
    • Fridays - we reflect on lessons learned and make any improvements to project practices.

    Another way of thinking about this is ... "if this were the end of the week, what would you feel good about having completed?"  "Each day, are we getting closer or further, or do we need to readjust priorities or expectations?" ...  "What did we learn and what can we improve?"


  • J.D. Meier's Blog

    Execution Checklists


    Execution checklists are a simple but effective technique for improving results.  Rather than a to-do list, it's a focused checklist of steps, in sequence, to execute a specific task.  I use Notepad to start.  I write the steps.  On each execution of the steps, by myself or a teammate, we improve the steps as we learn.  We share our execution checklists in Groove or in a Wiki.

    Key Scenarios
    There are two main scenarios:

    1. You are planning the work to execute.  In this case, you're thinking through what you have to get done.  This is great when you feel over-burdened or if you have a mind-numbing, routine task that you need to get done.  This can help you avoid task saturation and it can also help you avoid silly mistakes while you're in execution mode.
    2. You are paving a path through the execution.  In this case, you're leaving a trail of what worked.  This works great for tasks that you'll have to perform more than once or when you have to share best practices across the team.

    I encourage my teams to create execution checklists for any friction points or sticking spots we hit.  For example, if there's a tough process with lots of movable parts, we capture the steps and tune them over time as we gain proficiency.  As simple as this sounds, it's very effective, whether it's for a personal task, a team task, or any execution steps you want to improve. 

    One of my most valuable execution checklists is steps for rebuilding my box.  While I could rebuild my box without it, I would fumble around a bit and probably forget some key things, and potentially get reminded the hard way.

    The most recent execution checklist I made was for building the PDF for our Team Development with Visual Studio Team Foundation Server guide.  There were a lot of manual steps and there was plenty of room for error.  Each time I made a build, I baked the lessons learned into the execution checklist.  By the time I got to the successful build, there was much less room for error simply by following the checklist.
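As a sketch of the idea (the names and structure here are my own, purely illustrative -- a Notepad file works just as well), an execution checklist is just ordered steps plus a way to bake lessons learned back in between runs:

```python
# Minimal sketch of an execution checklist (illustrative; a plain text
# file works just as well).  Steps run in sequence, and lessons learned
# get folded back into the steps between runs.

class ExecutionChecklist:
    def __init__(self, task, steps):
        self.task = task
        self.steps = list(steps)   # ordered steps to execute

    def refine(self, index, improved_step):
        """Bake a lesson learned back into the checklist."""
        self.steps[index] = improved_step

    def run(self, execute):
        """Execute each step in order via the supplied callable."""
        return [execute(step) for step in self.steps]

# Hypothetical example: building a guide PDF.
pdf_build = ExecutionChecklist(
    "Build the guide PDF",
    ["Sync latest content", "Generate PDF", "Spot-check page breaks"],
)
# After a failed build, capture the lesson in the step itself:
pdf_build.refine(1, "Generate PDF (close Word first, or the export fails)")
log = pdf_build.run(lambda step: f"done: {step}")
```

The refinement step is the whole trick: each run makes the checklist a little harder to get wrong.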

  • J.D. Meier's Blog

    New Release: patterns & practices Team Development with Team Foundation Server Guide


    Today we release the final version of our patterns & practices: Team Development with Visual Studio Team Foundation Server.  It's our Microsoft playbook for Team Foundation Server.  It shows you how to make the most of Team Foundation Server.  It's a compendium of proven practices, product team recommendations, and insights from the field.

    Key Changes Since Beta 1

    • We added guidelines for build, project management and reporting.
    • We added practices at a glance for build, project management, and reporting.
    • We added a chapter to summarize key Visual Studio 2008 changes.
    • We revamped our Internet access strategies.
    • We did a full sweep of the guide.
    • We completed more thorough product team reviews for key chapters.

    Contents at a Glance

    • Part I, Fundamentals
    • Part II, Source Control
    • Part III, Builds
    • Part IV, Large Project Considerations
    • Part V, Project Management
    • Part VI, Process Templates
    • Part VII, Reporting
    • Part VIII, Setting Up and Maintaining the Team Environment
    • Part IX, Visual Studio 2008 Team Foundation Server


    • Ch 01 – Introducing the Team Environment
    • Ch 02 – Team Foundation Server Architecture
    • Ch 03 – Structuring Projects and Solutions in Source Control
    • Ch 04 – Structuring Projects and Solutions in Team Foundation Source Control
    • Ch 05 – Defining Your Branching and Merging Strategy
    • Ch 06 – Managing Source Control Dependencies in Visual Studio Team System
    • Ch 07 – Team Build Explained
    • Ch 08 – Setting Up Continuous Integration with Team Build
    • Ch 09 – Setting Up Scheduled Builds with Team Build
    • Ch 10 – Large Project Considerations
    • Ch 11 – Project Management Explained
    • Ch 12 – Work Items Explained
    • Ch 13 – Process Templates Explained
    • Ch 14 – MSF for Agile Software Development Projects
    • Ch 15 – Reporting Explained
    • Ch 16 – Installation and Deployment
    • Ch 17 – Providing Internet Access to Team Foundation Server
    • Ch 18 – What’s New in Visual Studio 2008 Team Foundation Server

    Our Team

    Contributors and Reviewers

    • External Contributors / Reviewers: David P. Romig, Sr; Dennis Rea; Eugene Zakhareyev; Leon Langleyben; Martin Woodward; Michael Rummier; Miguel Mendoza; Mike Fourie; Quang Tran; Sarit Tamir; Tushar More; Vaughn Hughes
    • Microsoft Contributors / Reviewers:  Aaron Hallberg; Ahmed Salijee; Ajay Sudan; Ajoy Krishnamoorthy; Alan Ridlehoover; Alik Levin; Ameya Bhatawdekar; Bijan Javidi; Bill Essary; Brett Keown; Brian Harry; Brian Keller; Brian Moore; Buck Hodges; Burt Harris; Conor Morrison; David Caufield; David Lemphers; Doug Neumann; Edward Jezierski; Eric Blanchet; Eric Charran; Graham Barry; Gregg Boer; Grigori Melnik; Janet Williams Hepler; Jeff Beehler; Jose Parra; Julie MacAller; Ken Perilman; Lenny Fenster; Marc Kuperstein; Mario Rodriguez; Matthew Mitrik; Michael Puleio; Nobuyuki Akama; Paul Goring; Pete Coupland; Peter Provost; Granville (Randy) Miller; Rob Caron; Robert Horvick; Rohit Sharma; Ryley Taketa; Sajee Mathew; Siddharth Bhatia; Tom Hollander; Tom Marsh; Venky Veeraraghavan


  • J.D. Meier's Blog

    Improvement Frame


    As a mentor at work, I like to checkpoint results.  While I can do area-specific coaching, I tend to take a more holistic approach.  For me, it's more rewarding to find ways to unleash somebody's full potential and improve their overall effectiveness at Microsoft.  Aside from checking against specific goals, I use the following frame to gauge progress.

    Improvement Frame

    Thinking / Feeling

  • Do you find your work rewarding?
  • Are you passionate about what you do?
  • Are you spending more time feeling good?
  • What thoughts dominate your mind now?
  • Is your general outlook more positive or negative?
  • Do you have more energy or less in general?
  • Are you still worried about the same things?
  • Are you excited about anything?
  • Have you changed your self-talk from inner-critic to coach?

    Situation

  • Are you spending more time working on what you enjoy?
  • What would you rather be spending more time doing?
  • Do you have the manager you want?
  • Do you have the job you want?
  • Are you moving toward or away from your career goals?
  • If your situation was never going to change, what one skill would you need to make the most of it?

    Time / Task Management

  • Are you driving your day or being driven?
  • Are you spending less time on administration?
  • Are you getting your "MUSTs" done?
  • Are you dropping the ball on anything important?
  • Do you have a task management system you trust?
  • Are you avoiding using your head as a collection point?
  • How are you avoiding biting off more than you can chew?
  • How are you delivering incremental value?

    Domain Knowledge

  • Have you learned new skills?
  • Have you sharpened your key strengths?
  • Have you reduced your key liabilities?
  • What are you the go-to person for?
  • What could you learn that would make you more valuable to your team?

    Strategies / Approaches

  • What are you approaching differently than in the past?
  • How are you more resourceful?
  • How are you finding lessons in everything you do?
  • How are you learning from everybody that you can?
  • How are you improving your effectiveness?
  • How are you modeling the success of others?
  • How are you tailoring advice to make it work for you?

    Relationships

  • Are you managing up effectively?
  • Are your priorities in sync with your manager's?
  • Has your support network grown or shrunk?
  • How are you participating in new circles of influence?
  • How are you spending more time with people that catalyze you?
  • How are you working more effectively with people that drain you?
  • How are you leveraging more mentors and area-specific coaches?

    I've found this frame very effective for quickly finding areas that need work or sticking points.  It's also very revealing in terms of how much dramatic change there can be.  While situations or circumstances may not change much, I find that changes in strategies and approaches can have a profound impact.  My take on this is that while you can't always control what's on your plate, you can control how you eat it.

  • J.D. Meier's Blog

    One-Sliders



    I showed a colleague of mine one of my tricks for building slide decks faster.  It's a divide and conquer approach I've been using for a few years.  I do what I call "one-sliders." 

    Whenever I build a deck, such as for milestone meetings, I create a set of single-slide decks.  I name each slide appropriately (vision, scope, budget, ... etc.)  I then compose the master deck from the slides.

    Here are the benefits that might not be obvious:

    • It's easy to jump to a particular slide without manipulating a heavy deck, which helps when I'm first building the deck.
    • It encourages quick focused reviews with the right people (e.g. I can pair with our CFO on the budget slide without hunting through a deck)
    • It encourages sharing with precision.  I share the relevant slide vs. "see slide 32" in a 60 slide deck.
    • I end up with a repository of reusable slide nuggets.  I find myself drawing from my "one-slider" depot regularly.
    • Doing a slide at a time encourages thinking in great slides.  It's similar to thinking in great pages in a Wiki (a trick Ward taught me).

    The biggest impact, though, is that now I find myself frequently sharing concise one-sliders, and getting points across faster and more simply than with blobby mails.

  • J.D. Meier's Blog

    Update on Key Projects


    While I've been quiet on my blog, we've been busy behind the scenes.  Here's a rundown on key things:

    • Arch Nuggets. These don't exist yet.  I'm socializing an idea to create small, focused guidance for key engineering decisions.  I'm picturing small articles with insight and chunks of code.  The code would be less about reuse and more about helping you quickly prototype and test.  You can think of them as focused architectural spikes or tests.  The scope would be cross-technology, cross-cutting concerns and application infrastructure type scenarios such as data access, exception management, logging, ... etc.  They'll be light-weight and focused on helping you figure out how to put our platform legos together.  For a concrete example, imagine more articles and code examples such as How To: Page Records in .NET Applications
    • CodePlex Site - Performance Testing Guidance.  This is our online knowledge base for performance testing nuggets.   We're refactoring nuggets from our performance testing guide.  We'll then create new modules that show you how to make the most out of Visual Studio.
    • CodePlex Site - VSTS Guidance.  This is our online knowledge base for Visual Studio Team Foundation Server guidance.  We're refactoring nuggets from our TFS guide.
    • Guidance Explorer.  This is where we store our reusable guidance nuggets.  On the immediate radar, we're making some fixes to improve performance as well as improve browsing our catalog of nuggets.
    • Guide - Getting Results.  As a pet project, I'm putting together what I've learned over the past several years getting results at Microsoft.  It's going to include what I learned from the school of hard knocks.  I'm approaching it the way I approach any guide and I'm focusing on the principles, practices and patterns for effectiveness.
    • Guide - Performance Testing Guidance for Web Applications.  We're wrapping this up this week.  We're finishing the final edits and then building a new PDF.
    • Guide - Team Development with Visual Studio Team Foundation Server.   We’re basically *guidance complete.*  Since the Beta release, we added guidelines and practices for build, project management, and reporting.  We also revamped the deployment chapter, as well as improved the process guidance.  It's a substantial update. 
    • MSDN port of the guidance.  We have enough critical mass in terms of VSTS and Performance Testing guidance to justify porting to MSDN.  While many customers have used the guidance from the CodePlex site as is, for greater reach, we need to start the process of making the guidance a part of the MSDN library.  This will be an interesting exercise.
    • SharePoint test for our guidance store.  We're testing the feasibility of using SharePoint for our back-end (our guidance store) and our online Web application.  The key challenges we're hitting are creating effective publishing and consuming user experiences.  It's interesting and insightful and there's lots to learn.
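    For the paging scenario mentioned in the Arch Nuggets item, a minimal sketch looks like the following.  This is a hypothetical illustration only; the function name and signature are mine, not from the How To, and it's in Python for brevity rather than .NET:

```python
# Hypothetical sketch of record paging. The name and parameters are
# invented for illustration; the real How To targets .NET.

def page_records(records, page_number, page_size):
    """Return one page of records (1-based page numbers)."""
    if page_number < 1 or page_size < 1:
        raise ValueError("page_number and page_size must be >= 1")
    start = (page_number - 1) * page_size
    return records[start:start + page_size]

# Page 2 of a 10-record set, 3 records per page -> records 3..5
print(page_records(list(range(10)), 2, 3))  # [3, 4, 5]
```

    The same slicing arithmetic maps directly onto database-side paging (for example, skip/take style queries), which is usually where you'd do it for real workloads.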

    I'll have more to say soon.

  • J.D. Meier's Blog

    Security Inspections


    Inspections are among my favorite tools for improving security.   I like them because they’re so effective and efficient.  Here’s why:

    • If you know what to look for, you have a better chance of finding it.  (The reverse is also true: if you don’t know what you’re looking for, you’re not going to see it)
    • You can build your inspection criteria from common patterns (Security issues tend to stem from common patterns)
    • You can share your inspection criteria
    • You can prioritize your inspection criteria
    • You can chunk your inspection criteria

    Bottom line -- you can identify, catalog and share security criteria faster than new security issues come along.

    Security Frame
    Our Security Frame is simply a set of categories we use to “frame” out, organize, and chunk up security threats, attacks, vulnerabilities and countermeasures, as well as principles, practices and patterns.  The categories make it easy to distill and share the information in a repeatable way. 

    Security Design Inspections
    Performing a Security Design Inspection involves evaluating your application’s architecture and design in relation to its target deployment environment from a security perspective.  You can use the Security Frame to help guide your analysis.   For example, you can walk the categories (authentication, authorization, … etc.) for the application.  You can also use the categories to do a layer-by-layer analysis.  Design inspections are a great place to checkpoint your core strategies, as well as identify what sort of end-to-end tests you need to verify your approach.

    Here's the approach in a nutshell:

    • Step 1.  Evaluate the deployment and infrastructure. Review the design of your application as it relates to the target deployment environment and the associated security policies. Consider the constraints imposed by the underlying infrastructure-layer security and the operational practices in use.
    • Step 2.  Evaluate key security design using the Security frame. Review the security approach that was used for critical areas of your application. An effective way to do this is to focus on the set of categories that have the most impact on security, particularly at an architectural and design level, and where mistakes are most often made. The security frame describes these categories. They include authentication, authorization, input validation, exception management, and other areas. Use the security frame as a road map so that you can perform reviews consistently, and to make sure that you do not miss any important areas during the inspection.
    • Step 3.  Perform a layer-by-layer analysis. Review the logical layers of your application, and evaluate your security choices within your presentation, business, and data access logic.

    For more information, see our patterns & practices Security Design Inspection Index.

    Security Code Inspections
    This is truly a place where inspections shine.  While static analysis will catch a lot of the low hanging fruit, manual inspection will find a lot of the important security issues that are context dependent.  Because it’s a manual exercise, it’s important to set objectives, and to prioritize based on what you’re looking for.   Whether you do your inspections in pairs or in groups or individually, checklists in the form of criteria or inspection questions are helpful.

    Here's the approach in a nutshell:

    • Step 1. Identify security code review objectives. Establish goals and constraints for the review.
    • Step 2. Perform a preliminary scan. Use static analysis to find an initial set of security issues and improve your understanding of where the security issues are most likely to be discovered through further review.
    • Step 3. Review the code for security issues. Review the code thoroughly with the goal of finding security issues that are common to many applications. You can use the results of step two to focus your analysis.
    • Step 4. Review for security issues unique to the architecture. Complete a final analysis looking for security issues that relate to the unique architecture of your application. This step is most important if you have implemented a custom security mechanism or any feature designed specifically to mitigate a known security threat.

    For more information on Security Code Inspections, see our patterns & practices Security Code Inspection Index.  For examples of “Inspection Questions”, see Security Question List: Managed Code (.NET Framework 2.0) and Security Question List: ASP.NET 2.0.
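    To make the "context dependent" point concrete, here's a small, hypothetical example of the kind of issue a manual inspection targets.  The schema and function names are invented for this sketch, and it's in Python rather than managed code to keep it self-contained:

```python
# Hypothetical inspection example: SQL built by string concatenation.
# The table and functions are invented for illustration.
import sqlite3

def find_user_unsafe(conn, name):
    # Anti-pattern an inspector flags: user input spliced into the query.
    query = "SELECT id FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # The fix: a parameterized query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection string returns every row from the unsafe version.
print(find_user_unsafe(conn, "' OR '1'='1"))  # [(1,)]
print(find_user_safe(conn, "' OR '1'='1"))    # []
```

    A static analysis pass may or may not flag the concatenation, depending on how indirect it is; a reviewer walking the input validation category of the frame will.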

    Security Deployment Inspections
    Deployment Inspections are particularly effective for security because this is where the rubber meets the road.  In a deployment inspection, you walk the various knobs and switches that impact the security profile of your solution.  This is where you check things such as accounts, shares, protocols, … etc. 

    The following server security categories are key when performing a security deployment inspection:

    • Patches and Updates
    • Accounts
    • Auditing and Logging
    • Files and Directories
    • Ports
    • Protocols
    • Registry
    • Services
    • Shares

    For more information, see our patterns & practices Security Deployment Inspection Index.


  • J.D. Meier's Blog

    Performance Inspections


    In this post, I'll focus on design, code, and deployment inspections for performance.  Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections at key stages in your life cycle, such as design, implementation and deployment.

    Keys to Effective Inspections

    • Know what you're looking for.
    • Use scenarios to illustrate a problem.
    • Bound the acceptance criteria with goals and constraints.

    Performance Frame
    The Performance Frame is a set of categories that helps you organize and focus on performance issues.   You can use the frame to organize principles, practices, patterns and anti-patterns.  The categories are also effective for organizing sets of questions to use during inspections.  By using the categories in the frame, you can chunk up your inspections.   The frame is also good for finding low-hanging fruit.    

    Performance Design Inspections
    Performance design inspections focus on the key engineering decisions and strategies.  Basically, these are the decisions that have cascading impact and that you don't want to make up on the fly.  For example, your candidate strategies for caching per user and application-wide data, paging records, and exception management would be good to inspect.  Effective performance design inspections include analyzing the deployment and infrastructure, walking the performance frame, and doing a layer-by-layer analysis.  Question-driven inspections are good because they help surface key risks and they encourage curiosity.

    While there are underlying principles and patterns that you can consider, you need to temper your choices with prototypes, tests and feedback.  Performance decisions are usually trade-offs with other quality attributes, such as security, extensibility, or maintainability.  Performance Modeling helps you make trade-off decisions by focusing on scenarios, goals and constraints. 
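    As a tiny illustration of tempering a caching choice with a prototype before committing to it (the cache size and the lookup are made up, and it's sketched in Python for brevity):

```python
# Hypothetical prototype of a per-user caching strategy.
# The cache size and the lookup are invented for illustration.
from functools import lru_cache

@lru_cache(maxsize=128)
def load_profile(user_id):
    # Stand-in for an expensive per-user lookup (e.g. a database call).
    return {"user": user_id}

load_profile(1)
load_profile(1)          # served from the cache the second time
info = load_profile.cache_info()
print(info.hits, info.misses)  # 1 1
```

    Measuring hit rates like this against a realistic workload is what tells you whether the caching strategy earns its complexity.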

    For more information, see Architecture and Design Review of a .NET Application for Performance and Scalability and Performance Modeling.

    Performance Code Inspections
    Performance code inspections focus on evaluating coding techniques and design choices. The goal is to identify potential performance and scalability issues before the code is in production.  The key to effective performance code inspections is to use a profiler to localize and find the hot spots.  The anti-pattern is blindly trying to optimize the code.  Again, a question-driven technique used in conjunction with measuring is key.

    For more information, see Performance Code Inspection.
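    A minimal sketch of the measure-first approach, using Python's standard profiler to keep it runnable (the slow function is contrived; with Visual Studio you'd use its profiler, but the idea is the same):

```python
# Profile first to find the hot spot, rather than optimizing blindly.
# The workload below is contrived for illustration.
import cProfile
import pstats
import io

def slow_concat(n):
    s = ""
    for i in range(n):
        s += str(i)  # the hot spot a profiler would surface
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_concat(10000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())  # top functions by cumulative time
```

    The profile output localizes where the time actually goes, which is what turns an inspection question ("is this loop a problem?") into a measured answer.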

    Performance Deployment Inspections
    Performance deployment inspections focus on tuning the configuration for your deployment scenario.  To do this, you need measurements and runtime data to know where to look.  This includes simulating your deployment environment and workload.  You need to know the knobs and switches that influence the runtime behavior, and you need to be bounded by your quality of service requirements so you know when you're done.  Scenarios help you prioritize.


  • J.D. Meier's Blog



    Inspections are a white-box technique to proactively check against specific criteria.  You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.

    Design Inspections
    In a design inspection, you evaluate the key engineering decisions.   This helps avoid expensive do-overs.  Think of inspections as a dry-run of the design assumptions.   Here are some practices I’ve found to be effective for design inspections:

    • Use inspections to checkpoint your strategies before going too far down the implementation path.
    • Use inspections to expose the key engineering risks.
    • Use scenarios to keep the inspections grounded.  You can’t evaluate the merits of a design or architecture in a vacuum.
    • Use a whiteboard when you can.  It’s easy to drill into issues, as well as step back as needed.
    • Tease out the relevant end-to-end test cases based on risks you identify.
    • Build pools of strategies (i.e. design patterns) you can share.  It’s likely that for your product line or context, you’ll see recurring issues.
    • Balance user goals, business goals, and technical goals.  The pitfall is to do a purely technical evaluation.  Designs are always trade-offs.

    Code Inspections
    In a code inspection, you focus on the implementation.  Code inspections are particularly effective for finding lower-level issues, as well as balancing trade-offs.  For example, a lot of security issues are implementation level, and they require trade-off decisions.  Here are some practices I’ve found to be effective for code inspections: 

    • Use checklists to share the “building codes.”  For example, the .NET Design Guidelines are one set of building codes.  There's also building codes for security, performance ... etc.
    • Use scenarios and objectives to bound and test.  This helps you avoid arbitrary optimization or blindly applying recommendations.
    • Focus the inspection.  I’ve found it’s better to do multiple, short-burst, focused inspections than a large, general inspection.
    • Pair with an expert in the area you’re inspecting.
    • Build and draw from a pool of idioms (i.e. patterns/anti-patterns)

    Deployment Inspections
    Deployment is where the application meets the infrastructure.  Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability and manageability.  Here are some practices I’ve found to be effective for deployment inspections:

    • Use scenarios to help you prioritize.
    • Know the knobs and switches that influence runtime behavior.
    • Use checklists to help build and share expertise.  Knowledge of knobs and switches tends to be low-level and art-like.
    • Focus your inspections.  I’ve found it more productive and effective to do focused inspections.  Think of it as divide and conquer.

    Additional Considerations

    • Set objectives.  Without objectives, it's easy to go all over the board.
    • Keep a repository.  In practice, one of the most effective approaches is to have a common share that all teams can use as a starting point.  Each team then tailors for their specific project.
    • Integrate inspections with your quality assurance efforts for continuous improvement.
    • Identify skill sets you'll need for further drill downs (e.g. detailed design, coding, troubleshooting, maintenance).  If you don't involve the right people, you won't produce effective results.
    • Use inspections as part of your acceptance testing for security and performance.
    • Use checklists as starting points.  Refine and tailor them for your context and specific deliverables.
    • Leverage tools to automate the low-hanging fruit.  Focus manual inspections on more context-sensitive or more complex issues, where you need to make trade-offs.
    • Tailor your checklists for application types (Web application, Web Service, desktop application, component) and for verticals (manufacturing, financial ... etc.) or project context (Internet-facing, high security, ... etc.)
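    The repository-plus-tailoring idea above can be as simple as keeping the checklists as structured data that teams extend.  A hypothetical sketch (the fields and questions are my invention, not from the p&p checklists):

```python
# Hypothetical sketch: a shared inspection checklist as data, which
# each team tailors for its project. Fields and questions are invented.
base_checklist = [
    {"area": "input validation",
     "question": "Is all input validated at trust boundaries?"},
    {"area": "exception management",
     "question": "Are errors logged without leaking internal details?"},
]

def tailor(checklist, extra_items):
    """Start from the shared list; add project-specific criteria."""
    return checklist + extra_items

web_checklist = tailor(base_checklist, [
    {"area": "authentication",
     "question": "Are authentication cookies marked Secure?"},
])
print(len(web_checklist))  # 3
```

    Keeping checklists in a shared, structured form makes the "common share that all teams use as a starting point" practical, since tailoring becomes an append rather than a rewrite.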

    In the future, I'll post some more specific techniques for security and performance.

  • J.D. Meier's Blog

    MSF Agile Frame (Workstreams and Key Activities)


    When I review an approach, I find it helpful to distill it to a simple frame so I can get a bird's-eye view.  For MSF Agile, I found the most useful frame to be the workstreams and key activities.  According to MSF, workstreams are simply groups of activities that flow logically together and are usually associated with a particular role.  I couldn't find this view in MSF Agile, so I created one:

    Workstream (Role): Key Activities

    • Capture Project Vision (Business Analyst): Write Vision Statement; Define Personas; Refine Personas
    • Create a Quality of Service Requirement (Business Analyst): Brainstorm Quality of Service Requirements; Develop Lifestyle Snapshot; Prioritize Quality of Service Requirements List; Write Quality of Service Requirements; Identify Security Objectives
    • Create a Scenario (Business Analyst): Brainstorm Scenarios; Develop Lifestyle Snapshot; Prioritize Scenario List; Write Scenario Description; Storyboard a Scenario
    • Guide Project (Project Manager): Review Objectives; Assess Progress; Evaluate Test Metric Thresholds; Triage Bugs; Identify Risk
    • Plan an Iteration (Project Manager): Determine Iteration Length; Estimate Scenario; Estimate Quality of Service Requirements; Schedule Scenario; Schedule Quality of Service Requirement; Schedule Bug Fixing Allotment; Divide Scenarios into Tasks; Divide Quality of Service Requirements into Tasks
    • Guide Iteration (Project Manager): Monitor Iteration; Mitigate a Risk; Conduct Retrospectives
    • Create a Solution Architecture (Architect): Partition the System; Determine Interfaces; Develop Threat Model; Develop Performance Model; Create Architectural Prototype; Create Infrastructure Architecture
    • Build a Product (Developer): Start a Build; Verify a Build; Fix a Build; Accept Build
    • Fix a Bug (Developer): Reproduce the Bug; Locate the Cause of a Bug; Reassign a Bug; Decide on a Bug Fix Strategy; Code the Fix for a Bug; Create or Update a Unit Test; Perform a Unit Test; Refactor Code; Review Code
    • Implement a Development Task (Developer): Cost a Development Task; Create or Update a Unit Test; Write Code for a Development Task; Perform Code Analysis; Perform a Unit Test; Refactor Code; Review Code; Integrate Code Changes
    • Close a Bug (Tester): Verify a Fix; Close the Bug
    • Test a Quality of Service Requirement (Tester): Define Test Approach; Write Performance Tests; Write Security Tests; Write Stress Tests; Write Load Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    • Test a Scenario (Tester): Define Test Approach; Write Validation Tests; Select and Run a Test Case; Open a Bug; Conduct Exploratory Testing
    • Release a Product (Release Manager): Execute a Release Plan; Validate a Release; Create Release Notes; Deploy the Product

  • J.D. Meier's Blog

    How To Share Lessons Learned


    I'm a fan of sharing lessons learned along the way.  One light-weight technique I use with a distributed team is a simple mail of Do's and Don'ts.  At the end of the week or as needed, I start the mail with a list of do's and don'ts I learned and then ask the team to reply all with their lessons learned.

    Example of a Lessons Learned Mail


    • Do require daily live synchs to keep the team on the same page and avoid churn in mail.
    • Do reduce the friction to be able to spin up Live Meetings as needed.

    Guidance Engineering

    • Do index product docs to help build categories and to know what's available.
    • Do scenario frames to learn and prioritize the problem space.
    • Do use Scenarios, Questions and Answers, Practices at a Glance, and Guidelines to build and capture knowledge as we go.
    • Do use Scenarios as a scorecard for the problem space.
    • Do use Questions and Answers as a chunked set of focused answers, indexed by questions.
    • Do use Practices at a Glance as a frame for organizing task-based nuggets (how to blah …)
    • Do use Guidelines for recommended practices (do this, don't do this … etc.)
    • Do create the "frame"/categories earlier vs. later.

    Personal Effectiveness

    • Do blog as I go versus over-engineer entries.
    • Do sweep across bodies of information and compile indexes up front versus ad-hoc (for example, compile bloggers, tags, doc indexes, articles, sites … etc.)

    Project Management

    • Don't split the team across areas.  Let the team first cast a wide net to learn the domain, but then focus everybody on the same area for collaboration, review, pairing …etc.


    • Do use CodePlex as a channel for building community content projects.

    Guidelines Help Carry Lessons Forward
    While this approach isn't perfect, I've found it makes it easier to carry lessons forward, since each lesson is a simple guideline.  I prefer this technique to approaches where there's a lot of dialogue but no results.  I also like it because it's a simple enough forum for everybody to share their ideas and focus on objective learnings versus finger pointing and dwelling.  I also find it easy to go back through my projects and quickly thumb through the lessons learned.

    Do's and Don'ts Make Great Wiki Pages Too
    Note that this approach actually works really well in Wikis too.  That's where I actually started it.  On one project, my team created a lot of lessons learned in a Wiki, where each page was dedicated to something we found useful.  The problem was, it was hard to browse the lessons in a useful way.  It was part rant, part diatribe, with some ideas on improvements scattered here or there.  We then decided to name each page as a Do or Don't and suddenly we had a Wiki of valuable lessons we could act on.

  • J.D. Meier's Blog

    Quick and Dirty Getting Things Done


    If you're backlogged and you want to get out, here's a quick, low tech, brute force approach.  On your whiteboard, first write your key backlog items.  Next to it, write down To Do.  Under To Do, write the three most valuable things you'll complete today.  Not tomorrow or in the future, but what you'll actually get done today.  Don't bite off more than you can chew.  Bite off enough to feel good about what you accomplished when the day is done.

    If you don't have a whiteboard, substitute a sheet of paper.  The point is to keep it visible and simple.  Each day for this week, grab a new set of three.  When you nail the three, grab more.  Again, only bite off as much as you can chew for the day.  At the end of the week, you'll feel good about what you got done.

    This is a technique I've seen work for many colleagues and it's stood the test of time.  There are a few reasons why this tends to work:

    • Whiteboards make it easy to step back, yet keep focus.
    • You only bite off a chunk at a time, so you don't feel swamped.
    • As you get things done, you build momentum.
    • You have constant visual feedback of your progress.
    • Unimportant things slough off.


  • J.D. Meier's Blog

    VSTS Guidance Projects Roundup


    Here's a quick rundown of our patterns & practices VSTS-related guidance projects.   It's a combination of online knowledge bases, guides, video-based guidance, and a community Wiki for public participation.  We're using CodePlex for agile releases, before baking the guidance into MSDN for the longer term.


    Knowledge Bases

    • patterns & practices Performance Testing Guidance Wiki - This project is focused on creating an online knowledge base of how tos, guidelines, and practices for performance testing, including performance testing using Visual Studio Team System. It's a collaborative effort between industry experts, Microsoft ACE, patterns & practices, Premier, VSTS team members, and customers.
    • patterns & practices Visual Studio Team System Guidance Wiki - This project is focused on creating an online knowledge base of how tos, guidelines, and practices for Microsoft Visual Studio Team System. It's a collaborative effort between patterns & practices, Team System team members, industry experts, and customers.

    Video-Based Guidance

    Community Wiki

    Note that we're busy wrapping up the guides.  Once the guides are complete, we'll do a refresh of the online knowledge bases.  We'll also push some updated modules to Guidance Explorer.



  • J.D. Meier's Blog

    Get Lean, Eliminate Waste


    If you want to tune your software engineering, take a look at Lean.  Lean is a great discipline with a rich history and proven practices to draw from.  James has a good post on applying Lean principles to software engineering.  I think he summarizes a key concept very well:

    "You let quality drive your speed by building in quality up front and with increased speed and quality comes lower cost and easier maintenance of the product moving forward."

    7 Key Principles in Lean
    James writes about 7 key principles in Lean:

    1. Eliminate waste.
    2. Focus on learning.
    3. Build quality in.
    4. Defer commitment.
    5. Deliver fast.
    6. Respect people.
    7. Optimize the whole.

    Example of Deferring Commitment
    I think the trick with any principles is knowing when to use them and how to apply them in context.  James gives an example of how Toyota defers commitment until the last possible moment:

    "Another key idea in Toyota's Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others - a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design."

    Examples in Software Engineering
    From a software perspective, what I've seen teams do is prototype multiple solutions to a problem and then pick the best fit.  The anti-pattern that I've seen is committing to one path too early without putting other options on the table.
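    A toy sketch of that prototype-then-pick approach (the candidate implementations and the benchmark are invented, and Python is used for brevity): develop several candidates, measure them against the same workload, and commit at the last responsible moment.

```python
# Hypothetical set-based design sketch: two candidate implementations
# of the same task, chosen by measurement rather than up-front.
import timeit

def concat_plus(n):
    s = ""
    for i in range(n):
        s += "x"
    return s

def concat_join(n):
    return "".join("x" for _ in range(n))

candidates = {"plus": concat_plus, "join": concat_join}
timings = {name: timeit.timeit(lambda f=f: f(1000), number=200)
           for name, f in candidates.items()}
best = min(timings, key=timings.get)
print(best)  # the surviving design, chosen from measurements
```

    The point isn't which candidate wins here; it's that the choice is deferred until both have been built and measured, instead of being locked in by big up-front design.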

    A Lean Way of Life
    How can you use Lean principles in your software development effort?  ... your organization?  ... your life?

    More Information
