Software Engineering, Project Management, and Effectiveness
We released the final version of our patterns & practices Performance Testing Guidance for Web Applications. This guide provides an end-to-end approach for implementing performance testing. Whether you're new to performance testing or looking for ways to improve your current performance-testing approach, you will gain insights that you can tailor to your specific scenarios. The main purpose of the guide is to be a relatively stable backdrop to capture, consolidate and share a methodology for performance testing. Even though the topics addressed apply to other types of applications, we focused on explaining from a Web application perspective to maintain consistency and to be relevant to the majority of our anticipated readers.
Key Changes Since Beta 1
Why We Wrote the Guide
Features of the Guide
Contributors and Reviewers
My Related Posts
Today we release the final version of our patterns & practices: Team Development with Visual Studio Team Foundation Server. It's our Microsoft playbook for Team Foundation Server, a compendium of proven practices, product team recommendations, and insights from the field that shows you how to make the most of the product.
Contents at a Glance
Execution checklists are a simple but effective technique for improving results. Rather than a to-do list, an execution checklist is a focused, sequenced list of the steps needed to execute a specific task. I start in Notepad and write out the steps. On each run through the steps, whether by me or a teammate, we improve them as we learn. We share our execution checklists in Groove or in a Wiki.
Key Scenarios
There are two main scenarios:
I encourage my teams to create execution checklists for any friction points or sticking spots we hit. For example, if there's a tough process with lots of moving parts, we capture the steps and tune them over time as we gain proficiency. As simple as this sounds, it's very effective, whether it's for a personal task, a team task, or any execution steps you want to improve.
One of my most valuable execution checklists is the steps for rebuilding my box. While I could rebuild my box without it, I would fumble around a bit, probably forget some key things, and potentially get reminded the hard way.
The most recent execution checklist I made was for building the PDF for our Team Development with Visual Studio Team Foundation Server guide. There were a lot of manual steps and there was plenty of room for error. Each time I made a build, I baked the lessons learned into the execution checklist. By the time I got to the successful build, there was much less room for error simply by following the checklist.
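The pattern above (capture the steps, run them in order, and bake the lessons learned back into the list) can be sketched as a tiny script. This is a minimal illustration, not the actual PDF build process; the step names and the `run_checklist` helper are hypothetical.

```python
# A minimal sketch of an execution checklist captured as data, so that
# each run can refine the steps. Not the actual guide-build process.

def run_checklist(steps):
    """Run (name, action) pairs in order; stop at the first failure so
    the failing step can be fixed and folded back into the checklist."""
    completed = []
    for name, action in steps:
        try:
            action()
        except Exception:
            return completed, name  # the step to learn from
        completed.append(name)
    return completed, None  # every step succeeded

# Hypothetical build steps, for illustration only.
steps = [
    ("merge chapters", lambda: None),
    ("update table of contents", lambda: None),
    ("export to PDF", lambda: None),
]

done, failed = run_checklist(steps)
```

Keeping the checklist as data rather than memory means each failed run pinpoints exactly which step needs a fix or a clearer instruction.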
How do you cut to the chase? How do you clear the air of ambiguity and get to facts? Ask cutting questions.
My manager, Per, doesn't ask a lot of questions. He asks the right ones. Here are some examples:
As simple as it sounds, having five separate customers stand behind you is a start. I'm in the habit of litmus-checking my path early on to see who's on board or to find the resistance. As customers get on board, my confidence goes up. I've also seen this cutting question work well with startups. I've asked a few startups about their five customers. Some had great ideas, but no customers on board. The ones that had at least five are still around.
At the end of any meeting, Per never fails to ask "next steps?", and the meeting quickly shifts from talk to action.
"Is it working?" is a pretty cutting question. It's great because it forces you to step back and reflect on your results and consider a change in approach.
Rico and I have long talked about performance threats. I finally created a view that shows how you can think of performance issues in terms of vulnerabilities, threats and countermeasures. See Performance Frame v2.
In this case, the vulnerabilities, threats and countermeasures are purely from a technical design standpoint. To rationalize performance against other quality attributes and against goals and constraints, you can use performance modeling and threat modeling. To put it another way, evaluate your design trade-offs against the acceptance criteria for your usage scenarios, considering your user, system, and business goals and constraints.
As a mentor at work, I like to checkpoint results. While I can do area-specific coaching, I tend to take a more holistic approach. For me, it's more rewarding to find ways to unleash somebody's full potential and improve their overall effectiveness at Microsoft. Aside from checking against specific goals, I use the following frame to gauge progress.
I've found this frame very effective for quickly finding areas that need work or to find sticking points. It's also very revealing in terms of how much dramatic change there can be. While situations or circumstances may not change much, I find that changes in strategies and approaches can have a profound impact. My take on this is that while you can't always control what's on your plate, you can control how you eat it.
I showed a colleague of mine one of my tricks for building slide decks faster. It's a divide and conquer approach I've been using a few years. I do what I call "one-sliders."
Whenever I build a deck, such as for milestone meetings, I create a set of single-slide decks. I name each slide appropriately (vision, scope, budget, etc.) and then compose the master deck from the slides.
Here are the benefits that might not be obvious:
The biggest impact though is that now I find myself frequently sharing concise one-sliders, and getting points across faster and simpler than blobby mails.
I did a few things to try and improve browsing and findability:
I was surprised by how many of my posts related to productivity. Then again, I focus heavily on productivity with my mentees. I think personal productivity is an important tool for turning their great ideas, hopes, and dreams into results. If it's not already their strength, I want to make sure it's at least not a liability.
On my Book Share blog, I changed themes, reorganized key features, and created a best of list. While it may sound simple here, I actually went through quite a bit of trial and error. I tested many, many user experience patterns and relied heavily on feedback from a trusted set of reviewers. Although I used a satisficing strategy, I did try to make browsing the content as efficient and effective as possible. I was surprised by how many subtle patterns and practices there are for blog layouts. Maybe more surprising was how many anti-patterns there are.
On my teams we do a daily sync meeting. It's 10 minutes max. We go around the team with three questions:
We stay out of details (that's for offline and follow-up). It's a status meeting focused more on accomplishments and progress than on reporting activities (lots of folks are doing lots of things, so it's crisper to focus on accomplishments). The more distributed the team, the more important the meeting.
Keys to Results
The best pattern that has worked over time is ...
Another way of thinking about this is to ask: "If this were the end of the week, what would you feel good about having completed?" "Each day, are we getting closer or further away, or do we need to readjust priorities or expectations?" "What did we learn and what can we improve?"
There's a lot to be said for well-crafted vision and mission statements. I've been researching and leaving a trail at The Bookshare.
In a Nutshell
How Do You Craft Them
A good vision statement is a one-line statement you can repeat in the halls. Nobody has to memorize it. It's easy to say and it's easy to grok. The same goes for a mission statement. You might need to add another line or two to your mission statement to disambiguate, but if folks don't quickly get what you do from your mission statement, it's not working.
How Do You Use Them
Examples
I'm a fan of using reference examples (lots of them) to get a sense of what works and what doesn't. The Man on a Mission blog is dedicated to mission statements and has plenty of real-life examples to walk through.