There are a great many products on the market these days that provide information about a set of projects. The idea is to let the stakeholders know how well their money is being spent. Information Technology departments often get criticized for "always asking for money" but never showing value, so Project Management Offices (PMOs) have been adopting these tools at an increasing rate.
Most tools capture basic statistics and then let the IT group add whatever project measures they want. Today, I want to examine those additional statistics: what measures should the project management office be tracking?
What logic leads to these measurements in the first place? Here's my take:
The key to understanding the metrics is to look at the outcome. We want to improve the success of IT projects. The measurements are there to encourage the practices that lead to project success.
Are we measuring the right practices? What are the practices that lead to project success?
We can guess, or we can go find projects that are successful and ask the project leaders what they did. We can do this for dozens of projects, and find common actions. We can look for the "critical behaviors" that led to success, and measure them.
Some of those things are in the typical scorecard.
But is this enough? Are these all of the behaviors that account for success?
If you ask a successful project manager about the things that led to project success, you will often hear answers that point to the business side, not just the IT side. That reveals a gap:
The project scorecard is measuring the success of IT team behavior, but not the success of business team behavior, and as a result, the scorecard cannot possibly predict the success or failure of the project.
If building a system requires a partnership, then we need to measure the customer's behavior as well. Assuming that we do, who will look at the numbers that show a customer who is not being responsive?
Customers are business people. They have managers too.
Think about it. The project scorecard can be used to demonstrate that the right behaviors are happening on both sides. After all, if a project fails because the business sponsor was unwilling to buy in to the approach, or wouldn't sign off on the interface design, or because the business users wouldn't participate in the test process, why should the IT team take the rap for missed dates or cost overruns?
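To make the two-sided scorecard idea concrete, here is a minimal sketch. The metric names, targets, and the simple pass/fail comparison are all illustrative assumptions, not a prescription; the point is only that business-owned behaviors sit in the same scorecard as IT-owned ones, so lagging commitments on either side are equally visible.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    owner: str      # which side owns the behavior: "IT" or "Business"
    target: float   # agreed-upon threshold
    actual: float   # measured value for the reporting period

    def on_track(self) -> bool:
        return self.actual >= self.target

@dataclass
class ProjectScorecard:
    project: str
    metrics: list[Metric] = field(default_factory=list)

    def behind(self, owner: str) -> list[str]:
        """Names of metrics owned by one side that are missing their targets."""
        return [m.name for m in self.metrics
                if m.owner == owner and not m.on_track()]

# Hypothetical reporting-period numbers for one project.
card = ProjectScorecard("Order Portal", [
    Metric("milestones hit on time (%)", "IT", 90, 95),
    Metric("defects resolved within SLA (%)", "IT", 85, 88),
    Metric("requirements signed off on schedule (%)", "Business", 90, 60),
    Metric("business users attending test sessions (%)", "Business", 80, 40),
])

print(card.behind("IT"))        # []
print(card.behind("Business"))  # both business-owned metrics are lagging
```

With data like this in hand, "the sponsor isn't signing off" stops being an anecdote and becomes a number the sponsor's own management can see.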
Here's another benefit: if your project team resents the PMO because it seems like the "project police," then adding the customer's behavior to the metrics can win the project team over. After all, a complete scorecard is a fair scorecard. If the project team can point to the scorecard to demonstrate that the business sponsor is being lazy or uncooperative, then they are far more likely to support the PMO.