Maybe it’s because I’ve been swamped this week while the sun’s been shining here in the beautiful Pacific Northwest, but I’ve been doing some thinking about buffer, downtime, and productivity. I don’t think it’s a secret that projects have a tendency to come in late sometimes. Things come up, bad stuff happens, tasks are delayed…not uncommon. So how do you deal with it? You schedule in some buffer time to help offset the impact of changes to your project schedule. This isn’t rocket science, but is it the right approach?
First let’s talk about productivity. I was reading through Twitter the other day, and David Allen, who you may know as the “Getting Things Done” guy (@gtdguy), had replied to someone else about letting go/relaxing as a prerequisite for productive intensity. This got my attention. A lot of times we don’t think about buffer in terms of enhancing productivity and encouraging teams to get things done on time. Instead, we focus on buffer as one part of a realistic approach to scheduling. Well of course we do, because that’s what it is. But I think it’s important to also remember that A) there are actual *people* working on your projects, B) people tend to be more productive when they feel relaxed, and C) if a project has buffer scheduled in, the people working on that project are bound to feel less stressed than if the project had no buffer. That’s all I’m saying…you do the math.
So there’s the productivity aspect, but what else? Well after I replied to @gtdguy’s tweet, another fellow Twitter-er replied to me, suggesting that maybe some PMs aren’t doing proper risk response planning. Instead, they’re including buffers, with risk as the justification. Interesting idea. On the one hand, hey, at least they’re including buffer, but on the other hand, it’s important to remember that risk management isn’t just some kind of lightweight throwaway work. I mean, PMI’s got an entire certification for Risk Management Professionals. This is serious business. It also could be the reverse…that some organizations have full-on risk management happening, but it’s happening outside of the project schedule, so buffer in the plan itself is being overlooked. And then we’re back to that productivity discussion again. It seems to me that the right answer is a combination of both. Risk management *includes* scheduling buffer. With both in place, you’ve really got a handle on those what-if scenarios, and your team feels supported because you’ve recognized the reality of schedules slipping for one reason or another.
I’m wondering what the reality is out there. Do you include buffer in your project plans? If so, where: as separate line items, or as padded work estimates? If you don’t include buffer, why not? How do you implement risk management in your project schedules?
Looking for some resources on this subject? Try these:
Use schedule buffers to manage change
Manage project change with Microsoft Office Project 2007
Security Risk Management Guide
View and edit project issues and risks
Goals: Identify and plan for risks, Identify new risks, and Control project risks
Risk management templates on Office Online
Know Your Enemy: Introduction to Risk Management
There are a number of different methods to manage "buffer" or contingency. Sometimes management owns the contingency and doles it out. Other approaches like CCPM include buffer for the critical chain and also insert buffers for feeding chains (chains subordinate to the main chain) and then produce indicators based on buffer consumption.
There are good reasons to include it. Whether you hide it or make it explicit boils down to the culture of your team/organization, but I think there is almost always some in there.
Buffer, however, is not equal to downtime. You sort of expect to use the buffer, because the unexpected happens (among other reasons). Making some downtime for your team is a different thing.
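The buffer-consumption indicators mentioned above can be sketched simply. This is a minimal illustration, not an implementation of any particular CCPM tool: the function name, the green/yellow/red zones, and the 1.0/1.5 thresholds are all assumptions for the sake of the example. The core CCPM idea it shows is real, though: compare how fast the buffer is being consumed against how far along the critical chain is.

```python
# Hypothetical sketch of a CCPM-style buffer indicator ("fever chart").
# Inputs (assumed): percent of the critical chain completed, and
# percent of the project buffer consumed so far.

def buffer_status(chain_complete_pct: float, buffer_consumed_pct: float) -> str:
    """Classify buffer health by comparing consumption to progress.

    If buffer is being consumed faster than the chain is completing,
    the project is trending into trouble. Thresholds here are
    illustrative, not standard.
    """
    if chain_complete_pct <= 0:
        return "green" if buffer_consumed_pct == 0 else "red"
    ratio = buffer_consumed_pct / chain_complete_pct
    if ratio < 1.0:
        return "green"   # consuming buffer slower than we make progress
    elif ratio < 1.5:
        return "yellow"  # watch closely, plan recovery actions
    else:
        return "red"     # act now: buffer burning faster than work

print(buffer_status(50, 30))  # → green
print(buffer_status(40, 50))  # → yellow
print(buffer_status(20, 60))  # → red
```

In a real CCPM setup the same check is run per feeding-chain buffer as well as for the project buffer, and the trend over time matters more than any single reading.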
As Jack mentioned, there are many methods and approaches. What we mustn't forget is Parkinson's Law:
"Work expands so as to fill the time available for its completion."
I would not in general include "buffer" time unless there was a good reason.
I prefer a more analytical take on the issue: run a Monte Carlo simulation of the project plan, based on reasonable assumptions about the probabilistic parts, e.g. durations, start dates, weather down periods, etc. These are things we don't know for certain, but about which we have judgments or data. Compute the probability distribution of the project's completion parameters (date, cost, etc.), and then set targets based on that. By definition this builds in the net effect of buffers, but in a more understandable way.
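The approach above can be sketched in a few lines. This is a toy model under stated assumptions: three sequential tasks with made-up (min, likely, max) duration estimates, each modeled with a triangular distribution. Real plans would have a task network, correlations, and better-fitted distributions, but the mechanics of "simulate many times, then read targets off the distribution" look like this:

```python
import random

# Assumed inputs: (min, most-likely, max) duration estimates in days
# for three sequential tasks. These numbers are illustrative only.
tasks = [(3, 5, 10), (8, 10, 20), (2, 4, 6)]

def simulate_once():
    # One trial: sample each task's duration from a triangular
    # distribution and sum them (sequential tasks, no overlap).
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(simulate_once() for _ in range(10_000))
p50 = trials[len(trials) // 2]          # median finish
p80 = trials[int(len(trials) * 0.8)]    # 80%-confidence finish

print(f"Sum of most-likely estimates: {sum(t[1] for t in tasks)} days")
print(f"P50 finish: {p50:.1f} days, P80 finish: {p80:.1f} days")
```

Committing to the P80 date rather than the sum of most-likely estimates is exactly the "net effect of buffers" the comment describes: the gap between the two numbers is your buffer, derived from the risk model instead of guessed.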
Buffers are not a good idea. Risk management is. Why? Because the buffer belongs to the sponsor. He or she should decide when to "spend" it. If you have properly alerted the sponsor to the key risks, and how much you and the team think they could delay the project (which you can cost out), then the sponsor owns the risk, not the team. If the risk is realized, i.e., becomes an issue, the PM has the right to submit a Change Request to change the end date and/or cost. The sponsor then decides if that is acceptable. Maybe they will descope the project, maybe they will go for it, but it is their show. The PM is trying to give the sponsor the information they need to make the right business decision. If the PM is sandbagging, he or she is effectively lying to the sponsor ... Oh what a tangled web we weave...when first we practice to deceive...