Software Engineering, Project Management, and Effectiveness
We released the final version of our patterns & practices Performance Testing Guidance for Web Applications. This guide provides an end-to-end approach for implementing performance testing. Whether you're new to performance testing or looking for ways to improve your current performance-testing approach, you will gain insights that you can tailor to your specific scenarios. The main purpose of the guide is to be a relatively stable backdrop to capture, consolidate and share a methodology for performance testing. Even though the topics addressed apply to other types of applications, we focused on explaining from a Web application perspective to maintain consistency and to be relevant to the majority of our anticipated readers.
Key Changes Since Beta 1
Why We Wrote the Guide
Features of the Guide
Contributors and Reviewers
My Related Posts
Rico and I have long talked about performance threats. I finally created a view that shows how you can think of performance issues in terms of vulnerabilities, threats and countermeasures. See Performance Frame v2.
In this case, the vulnerabilities, threats and countermeasures are purely from a technical design standpoint. To rationalize performance against other quality attributes and against goals and constraints, you can use performance modeling and threat modeling. To put it another way, evaluate your design trade-offs against the acceptance criteria for your usage scenarios, considering your user, system, and business goals and constraints.
I did a few things to try and improve browsing and findability:
I was surprised by how many of my posts related to productivity. Then again, I focus heavily on productivity with my mentees. I think personal productivity is an important tool for turning their great ideas, hopes, and dreams into results. If it's not already their strength, I want to make sure it's at least not a liability.
On my Book Share blog, I changed themes, reorganized key features, and created a best-of list. While it may sound simple here, I actually went through quite a bit of trial and error. I tested many, many user experience patterns and relied heavily on feedback from a trusted set of reviewers. Although I used a satisficing strategy, I did try to make browsing the content as efficient and effective as possible. I was surprised by how many subtle patterns and practices there are for blog layouts. Maybe more surprising was how many anti-patterns there are.
How do you cut to the chase? How do you clear the air of ambiguity and get to facts? Ask cutting questions.
My manager, Per, doesn't ask a lot of questions. He asks the right ones. Here are some examples:
As simple as it sounds, having five separate customers stand behind you is a start. I'm in the habit of litmus checking my path early on to see who's on board or to find the resistance. As customers get on board, my confidence goes up. I've also seen this cutting question work well with startups. I've asked a few startups about their five customers. Some had great ideas, but no customers on board. The ones that had at least five are still around.
At the end of any meeting, Per never fails to ask "next steps?", and the meeting quickly shifts from talk to action.
"Is it working?" is a pretty cutting question. It's great because it forces you to step back and reflect on your results and consider a change in approach.
There's a lot to be said for well-crafted vision and mission statements. I've been researching and leaving a trail at The Bookshare.
In a Nutshell
How Do You Craft Them
A good vision statement is a one-liner statement you can repeat in the halls. Nobody has to memorize it. It's easy to say and it's easy to grok. The same goes for a mission statement. You might need to add another line or two to your mission statement to disambiguate, but if folks don't quickly get what you do from your mission statement -- it's not working.
How Do You Use Them
Examples
I'm a fan of using reference examples (lots of them) to get a sense of what works and what doesn't. The Man on a Mission blog is dedicated to mission statements and has plenty of real-life examples to walk through.
On my teams we do a daily sync meeting. It's 10 minutes max. We go around the team with three questions:
We stay out of details (that's for offline and follow-up). It's a status meeting focused more on accomplishments and progress than on reporting activities (lots of folks are doing lots of things, so it's crisper to focus on accomplishments). The more distributed the team, the more important the meeting.
Keys to Results
The best pattern that has worked over time is ...
Another way of thinking about this is to ask: "If this were the end of the week, what would you feel good about having completed?" "Each day, are we getting closer or further away, or do we need to readjust priorities or expectations?" "What did we learn and what can we improve?"
Execution checklists are a simple but effective technique for improving results. Rather than a to-do list, it's a focused checklist of steps in sequence for executing a specific task. I use Notepad to start. I write the steps. On each execution of the steps, by myself or a teammate, we improve the steps as we learn. We share our execution checklists in Groove or in a Wiki.
Key Scenarios
There are two main scenarios:
I encourage my teams to create execution checklists for any friction points or sticking spots we hit. For example, if there's a tough process with lots of movable parts, we capture the steps and tune them over time as we gain proficiency. As simple as this sounds, it's very effective whether it's for a personal task, a team task, or any execution steps you want to improve.
One of my most valuable execution checklists is steps for rebuilding my box. While I could rebuild my box without it, I would fumble around a bit and probably forget some key things, and potentially get reminded the hard way.
The most recent execution checklist I made was for building the PDF for our Team Development with Visual Studio Team Foundation Server guide. There were a lot of manual steps and there was plenty of room for error. Each time I made a build, I baked the lessons learned into the execution checklist. By the time I got to the successful build, there was much less room for error simply by following the checklist.
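The pattern above can be sketched as data: an ordered list of steps, plus a helper that bakes a new lesson into the sequence after each run. A minimal sketch; the step names and the build example below are hypothetical stand-ins, not the actual PDF-build steps.

```python
# Minimal execution-checklist sketch: ordered steps you refine on each run.
# Step names here are hypothetical illustrations, not from the original post.

def load_checklist(text):
    """Parse a plain-text checklist: one step per line, in execution order."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def insert_lesson(steps, after_step, new_step):
    """Bake a lesson learned into the checklist by adding a step in sequence."""
    i = steps.index(after_step)
    return steps[:i + 1] + [new_step] + steps[i + 1:]

pdf_build = load_checklist("""
Export chapters to PDF
Merge chapter PDFs
Check page numbering
""")

# After a failed build we learned to verify links, so we add that step
# right where it belongs in the sequence.
pdf_build = insert_lesson(pdf_build, "Merge chapter PDFs",
                          "Verify cross-links resolve")
```

The point is that the checklist is an artifact the whole team edits, so each execution leaves the steps a little better than it found them.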
Today we released the final version of our patterns & practices Team Development with Visual Studio Team Foundation Server guide. It's our Microsoft playbook for Team Foundation Server, showing you how to make the most of it. It's a compendium of proven practices, product team recommendations, and insights from the field.
Contents at a Glance
As a mentor at work, I like to checkpoint results. While I can do area-specific coaching, I tend to take a more holistic approach. For me, it's more rewarding to find ways to unleash somebody's full potential and improve their overall effectiveness at Microsoft. Aside from checking against specific goals, I use the following frame to gauge progress.
I've found this frame very effective for quickly finding areas that need work or to find sticking points. It's also very revealing in terms of how much dramatic change there can be. While situations or circumstances may not change much, I find that changes in strategies and approaches can have a profound impact. My take on this is that while you can't always control what's on your plate, you can control how you eat it.
I showed a colleague of mine one of my tricks for building slide decks faster. It's a divide-and-conquer approach I've been using for a few years. I do what I call "one-sliders."
Whenever I build a deck, such as for milestone meetings, I create a set of single-slide decks. I name each slide appropriately (vision, scope, budget, ... etc.) I then compose the master deck from the slides.
Here are the benefits that might not be obvious:
The biggest impact though is that now I find myself frequently sharing concise one-sliders, and getting points across faster and simpler than blobby mails.
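The composition step can be sketched as ordering named one-slider files into a master deck. A sketch only, with hypothetical filenames and ordering; the actual assembly happens in PowerPoint.

```python
# One-slider sketch: each slide lives in its own named file, and the
# master deck is composed in narrative order, not alphabetical order.
# Filenames and the order list are hypothetical examples.

one_sliders = ["scope.pptx", "vision.pptx", "budget.pptx", "schedule.pptx"]

master_order = ["vision", "scope", "schedule", "budget"]

def compose(slides, order):
    """Return the slide files arranged in the master deck's order."""
    by_name = {s.rsplit(".", 1)[0]: s for s in slides}
    return [by_name[name] for name in order if name in by_name]

master_deck = compose(one_sliders, master_order)
```

Because each one-slider is a standalone file, it can also be shared on its own, which is where the "concise one-sliders instead of blobby mails" benefit comes from.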
While I've been quiet on my blog, we've been busy behind the scenes. Here's a rundown on key things:
I'll have more to say soon.
Inspections are among my favorite tools for improving security. I like them because they’re so effective and efficient. Here’s why:
Bottom line -- you can identify, catalog and share security criteria faster than new security issues come along.
Security Frame
Our Security Frame is simply a set of categories we use to “frame” out, organize, and chunk up security threats, attacks, vulnerabilities and countermeasures, as well as principles, practices and patterns. The categories make it easy to distill and share the information in a repeatable way.
Security Design Inspections
Performing a Security Design Inspection involves evaluating your application’s architecture and design in relation to its target deployment environment from a security perspective. You can use the Security Frame to help guide your analysis. For example, you can walk the categories (authentication, authorization, … etc.) for the application. You can also use the categories to do a layer-by-layer analysis. Design inspections are a great place to checkpoint your core strategies, as well as identify what sort of end-to-end tests you need to verify your approach.
Here's the approach in a nutshell:
For more information, see our patterns & practices Security Design Inspection Index.
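The category walk described above can be sketched as data plus a loop: frame categories map to inspection questions, and the inspection records which answers flag an issue. The category names come from the post; the questions and answers are hypothetical examples.

```python
# Question-driven inspection sketch: walk Security Frame categories with
# inspection questions and collect findings. Questions are hypothetical.

frame = {
    "Authentication": ["Are credentials sent over an encrypted channel?"],
    "Authorization": ["Are roles checked on every privileged operation?"],
}

def inspect(frame, answers):
    """Return (category, question) pairs whose answer flags a potential issue."""
    findings = []
    for category, questions in frame.items():
        for q in questions:
            if answers.get(q) == "no":
                findings.append((category, q))
    return findings

issues = inspect(frame, {
    "Are credentials sent over an encrypted channel?": "no",
})
```

The value of the frame is that the same category-by-category walk is repeatable across applications and reviewers.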
Security Code Inspections
This is truly a place where inspections shine. While static analysis will catch a lot of the low-hanging fruit, manual inspection will find a lot of the important security issues that are context dependent. Because it’s a manual exercise, it’s important to set objectives, and to prioritize based on what you’re looking for. Whether you do your inspections in pairs, in groups, or individually, checklists in the form of criteria or inspection questions are helpful.
For more information on Security Code Inspections, see our patterns & practices Security Code Inspection Index. For examples of “Inspection Questions”, see Security Question List: Managed Code (.NET Framework 2.0) and Security Question List: ASP.NET 2.0.
Security Deployment Inspections
Deployment Inspections are particularly effective for security because this is where the rubber meets the road. In a deployment inspection, you walk the various knobs and switches that impact the security profile of your solution. This is where you check things such as accounts, shares, protocols, … etc.
The following server security categories are key when performing a security deployment inspection:
For more information, see our patterns & practices Security Deployment Inspection Index.
In this post, I'll focus on design, code, and deployment inspections for performance. Inspections are a white-box technique to proactively check against specific criteria. You can integrate inspections at key stages in your life cycle, such as design, implementation and deployment.
Keys to Effective Inspections
Performance Frame
The Performance Frame is a set of categories that helps you organize and focus on performance issues. You can use the frame to organize principles, practices, patterns and anti-patterns. The categories are also effective for organizing sets of questions to use during inspections. By using the categories in the frame, you can chunk up your inspections. The frame is also good for finding low-hanging fruit.
Performance Design Inspections
Performance design inspections focus on the key engineering decisions and strategies. Basically, these are the decisions that have cascading impact and that you don't want to make up on the fly. For example, your candidate strategies for caching per user and application-wide data, paging records, and exception management would be good to inspect. Effective performance design inspections include analyzing the deployment and infrastructure, walking the performance frame, and doing a layer-by-layer analysis. Question-driven inspections are good because they help surface key risks and they encourage curiosity.
While there are underlying principles and patterns that you can consider, you need to temper your choices with prototypes, tests and feedback. Performance decisions are usually trade-offs with other quality attributes, such as security, extensibility, or maintainability. Performance Modeling helps you make trade-off decisions by focusing on scenarios, goals and constraints.
For more information, see Architecture and Design Review of a .NET Application for Performance and Scalability and Performance Modeling.
Performance Code Inspections
Performance code inspections focus on evaluating coding techniques and design choices. The goal is to identify potential performance and scalability issues before the code is in production. The key to effective performance code inspections is to use a profiler to localize and find the hot spots. The anti-pattern is blindly trying to optimize the code. Again, a question-driven technique used in conjunction with measuring is key.
For more information, see Performance Code Inspection.
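As one concrete way to "measure first," here's a sketch using Python's built-in cProfile to localize a hot spot before touching any code. The slow function is a contrived example; the point is that the profiler's report, not intuition, tells you where the time goes.

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Deliberately naive string building -- a classic hot spot.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Profile first, then optimize what the data says is hot --
# the anti-pattern is blindly guessing where the time goes.
profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()  # top functions by cumulative time
```

With the hot spot localized, an inspection question such as "is there a cheaper way to build this string?" has a measured target instead of a guess.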
Performance Deployment Inspections
Performance deployment inspections focus on tuning the configuration for your deployment scenario. To do this, you need measurements and runtime data to know where to look. This includes simulating your deployment environment and workload. You also need to know the knobs and switches that influence runtime behavior, and you need to be bounded by your quality-of-service requirements so you know when you're done. Scenarios help you prioritize.
Inspections are a white-box technique to proactively check against specific criteria. You can integrate inspections as part of your testing process at key stages, such as design, implementation and deployment.
Design Inspections
In a design inspection, you evaluate the key engineering decisions. This helps avoid expensive do-overs. Think of inspections as a dry run of the design assumptions. Here are some practices I’ve found to be effective for design inspections:
Code Inspections
In a code inspection, you focus on the implementation. Code inspections are particularly effective for finding lower-level issues, as well as balancing trade-offs. For example, a lot of security issues are implementation level, and they require trade-off decisions. Here are some practices I’ve found to be effective for code inspections:
Deployment Inspections
Deployment is where application meets infrastructure. Deployment inspections are particularly helpful for quality attributes such as performance, security, reliability, and manageability. Here are some practices I’ve found to be effective for deployment inspections:
In the future, I'll post some more specific techniques for security and performance.
When I review an approach, I find it helpful to distill it to a simple frame so I can get a bird's-eye view. For MSF Agile, I found the most useful frame to be the workstreams and key activities. According to MSF, workstreams are simply groups of activities that flow logically together and are usually associated with a particular role. I couldn't find this view in MSF Agile, so I created one:
I'm a fan of sharing lessons learned along the way. One lightweight technique I use with a distributed team is a simple mail of Do's and Don'ts. At the end of the week or as needed, I start the mail with a list of do's and don'ts I learned and then ask the team to reply all with their lessons learned.
Example of a Lessons Learned Mail
Guidelines Help Carry Lessons Forward
While this approach isn't perfect, I've found it makes it easier to carry lessons forward, since each lesson is a simple guideline. I prefer this technique to approaches where there's a lot of dialogue but no results. I also like it because it's a simple enough forum for everybody to share their ideas and focus on objective learnings versus finger-pointing and dwelling. I also find it easy to go back through my projects and quickly thumb through the lessons learned.
Do's and Don'ts Make Great Wiki Pages Too
Note that this approach actually works really well in Wikis too. That's where I actually started it. On one project, my team created a lot of lessons learned in a Wiki, where each page was dedicated to something we found useful. The problem was, it was hard to browse the lessons in a useful way. It was part rant, part diatribe, with some ideas on improvements scattered here or there. We then decided to name each page as a Do or Don't and suddenly we had a Wiki of valuable lessons we could act on.
If you're backlogged and you want to get out, here's a quick, low tech, brute force approach. On your whiteboard, first write your key backlog items. Next to it, write down To Do. Under To Do, write the three most valuable things you'll complete today. Not tomorrow or in the future, but what you'll actually get done today. Don't bite off more than you can chew. Bite off enough to feel good about what you accomplished when the day is done.
If you don't have a whiteboard, substitute a sheet of paper. The point is to keep it visible and simple. Each day for this week, grab a new set of three. When you nail the three, grab more. Again, only bite off as much as you can chew for the day. At the end of the week, you'll feel good about what you got done.
This is a technique I've seen work for many colleagues and it's stood the test of time. There are a few reasons why this tends to work:
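The "three most valuable things today" selection can be sketched as a tiny priority pull from the backlog. The items and value scores below are hypothetical; the real whiteboard version is just as effective.

```python
# Whiteboard "three a day" sketch: pull the three most valuable backlog
# items you can actually finish today. Items and scores are hypothetical.

backlog = [
    ("Fix login bug", 9),
    ("Refactor parser", 5),
    ("Write release notes", 7),
    ("Update wiki theme", 3),
    ("Answer support thread", 8),
]

def todays_three(items):
    """Top three by value -- bite off only what fits in a day."""
    ranked = sorted(items, key=lambda item: -item[1])
    return [name for name, value in ranked[:3]]

todo = todays_three(backlog)
```

Capping the list at three is the design choice that matters: it forces a real prioritization instead of a wish list.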
Here's a quick rundown of our patterns & practices VSTS-related guidance projects. It's a combination of online knowledge bases, guides, video-based guidance, and a community Wiki for public participation. We're using CodePlex for agile releases before baking the content into MSDN for the longer term.
Note that we're busy wrapping up the guides. Once the guides are complete, we'll do a refresh of the online knowledge bases. We'll also push some updated modules to Guidance Explorer.
If you want to tune your software engineering, take a look at Lean. Lean is a great discipline with a rich history and proven practices to draw from. James has a good post on applying Lean principles to software engineering. I think he summarizes a key concept very well:
"You let quality drive your speed by building in quality up front and with increased speed and quality comes lower cost and easier maintenance of the product moving forward."
7 Key Principles in LeanJames writes about 7 key principles in Lean:
Example of Deferring CommitmentI think the trick with any principles is knowing when to use them and how to apply them in context. James gives an example of how Toyota defers commitment until the last possible moment:
"Another key idea in Toyota's Product Development System is set-based design. If a new brake system is needed for a car, for example, three teams may design solutions to the same problem. Each team learns about the problem space and designs a potential solution. As a solution is deemed unreasonable, it is cut. At the end of a period, the surviving designs are compared and one is chosen, perhaps with some modifications based on learning from the others - a great example of deferring commitment until the last possible moment. Software decisions could also benefit from this practice to minimize the risk brought on by big up-front design."
Examples in Software EngineeringFrom a software perspective, what I've seen teams do is prototype multiple solutions to a problem and then pick the best fit. The anti-pattern that I've seen is committing to one path too early without putting other options on the table.
A Lean Way of LifeHow can you use Lean principles in your software development effort? ... your organization? ... your life?
Today I helped a colleague clear their inbox. I've kept a zero mail inbox for a few years. I forgot this wasn't common practice until a colleague said to me, "wow, your inbox doesn't scroll."
I didn't learn the zen of the zero mail inbox overnight. As pathetic as this sounds, I've actually compared email practices over the years with several people to find some of the best practices that work over time. The last thing I wanted to do was waste time in email if there were better ways. Some of my early managers also instilled in me that to be effective, I needed to master the basics. To put it another way, don't let administration get in the way of results.
Key Steps for a Clear Inbox
My overall approach is to turn actions into next steps, and keep stuff I've seen out of the way of my incoming mail. Here are the key steps:
Part of the key is acting on mail versus shuffling it. For a given mail, if I can act on it immediately, I do. If now's not the time, I add it to my list of actions. If it will take a bit of time, then I drag it to my calendar and schedule the time.
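The three dispositions above can be sketched as a simple triage rule. The minute thresholds are hypothetical; the point is that every mail gets exactly one of three outcomes instead of being shuffled.

```python
# Triage sketch for the zero-mail inbox: act now, queue an action, or
# schedule calendar time. The thresholds are hypothetical examples.

def triage(minutes_needed, can_act_now):
    """Map a mail to one of three dispositions instead of shuffling it."""
    if can_act_now and minutes_needed <= 2:
        return "act immediately"
    if minutes_needed <= 15:
        return "add to action list"
    return "schedule on calendar"

quick = triage(2, True)    # a two-minute reply gets handled on the spot
later = triage(10, False)  # a short task joins the action list
big = triage(60, True)     # a big task gets dedicated calendar time
```

Because every branch ends with the mail leaving the inbox, the inbox stays clear by construction.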
Anti-Patterns
I think it's important to note the anti-patterns:
Here's my short-list of techniques I use for improving efficiency on a given task:
While each technique is useful, I find I improve faster when I'm using them together. It's synergy in action, where the sum is better than the parts.
Grigori Melnik joined our team recently. He's new to Microsoft so I shared some tips for effectiveness. Potentially, the most important advice I gave him was to timebox his day. If you keep time a constant (by ending your day at a certain time), it helps with a lot of things:
To start, I think it helps to carve up your day into big buckets (e.g. administration, work time, think time, connect time), and then figure out how much time you're willing to give them. If you're not getting the throughput you want, you can ask yourself:
To make the point hit home, I pointed out that without a timebox, you can easily spend all day reading mails, blogs, aliases, doing self-training, ... etc. and then wonder where your day went. Microsoft is a technical playground with lots of potential distractions for curious minds that want to grow. Using timeboxes helps strike balance. Timeboxes also help with pacing. If I only have so many hours to produce results, I'm very careful to spend my high energy hours on the right things.
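The bucket-carving step can be sketched as a quick budget check: name the buckets, assign hours, and compare the total against a fixed end-of-day budget. The bucket names follow the post; the hours are hypothetical.

```python
# Timebox sketch: carve the day into big buckets and check the totals
# against a fixed end-of-day budget. Hours here are hypothetical.

day_hours = 9  # time is the constant; scope is what flexes

buckets = {
    "administration": 1,
    "work time": 5,
    "think time": 1,
    "connect time": 2,
}

allocated = sum(buckets.values())
over_budget = allocated > day_hours  # if True, cut scope, not the end time
```

Holding `day_hours` fixed is the whole technique: when the buckets don't fit, you trim what goes into them rather than stretching the day.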
Building guidance takes a lot of research. Over the years, I've learned how to do this faster and easier. One of the most important things I do is set up my folders (whether in the file system or in Groove).
I use this approach whether I'm doing personal learning or building 1200+ page guides. This approach helps me spend more time researching and less time figuring out where to put the information.
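As a sketch of the folder setup, here's a small script that creates a predictable tree per research topic. The post doesn't list its actual folder names, so the topic and subfolder names below are hypothetical stand-ins.

```python
import os
import tempfile

# Folder-setup sketch for a research project: a predictable tree per topic
# so filing new material takes no thought later. Names are hypothetical.

def create_research_folders(root, topics):
    """Create the same subfolder layout under every research topic."""
    for topic in topics:
        for sub in ("raw-notes", "sources", "drafts"):
            os.makedirs(os.path.join(root, topic, sub), exist_ok=True)

root = os.path.join(tempfile.gettempdir(), "perf-guide-research")
create_research_folders(root, ["caching", "threading"])
```

The payoff is exactly what the post describes: more time researching, less time deciding where each piece of information goes.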
Today we released Beta 1 of our Performance Testing Guidance for Web Applications guide. It shows you an end-to-end approach for implementing performance testing, based on lessons learned from applied use in customer scenarios. Whether you're new to performance testing or looking for ways to improve your current approach, you'll find insights you can use.
About Our Team
Today we released Beta 1 of our Team Development with Visual Studio Team Foundation Server guide. It's our Microsoft playbook for TFS, showing you how to make the most of Team Foundation Server. It's a distillation of many lessons learned and a collaborative effort among product team members, the field, industry experts, MVPs, and customers.
About Our Team
Contributors and Reviewers
Here are our contributors and reviewers so far: