In a previous post, I talked about how we tracked risks across multiple projects. In this post, I'll talk about how we tracked quality gates.

Let's ponder this for a moment. Suppose that at the beginning of the Orcas development cycle, someone very high up in the organization makes statements like the following:

  1. VS 2008 will have no performance regressions over VS 2005.
  2. We will have 70% code coverage via automated test runs.

Those are just two statements, but they are very big ones. How do you ensure that when a 3,000-person organization adds hundreds of features over two to three years, those statements will still be true when it's all done?

Our answer to that question was Quality Gates. In Orcas, we had 16 quality gates, ranging from simple ones like "You will have a written spec" to measurable ones like "70% code coverage via automation".

On our Feature work item, we had an entire tab dedicated to Quality Gates.

[Image: the Quality Gates tab on the Feature work item]

Let's look at this a bit.

The first four quality gates were document-based. That is, a document had to exist and be signed off to pass those quality gates.

[Image: the document-based quality gates]

The remaining quality gates were tracked by sign-off and a status indicator.

[Image: the remaining quality gates with their sign-off and status fields]

Before a feature crew could mark a feature as complete, they had to ensure all the quality gates were met. To indicate they were done, they set each Quality Gate field to met, not applicable, or exempted. The feature work item rules would not allow the State to be set to Complete unless this was done. If any quality gate was marked as exempted, an "Exception Authorization" field became required, where you had to specify the executive manager who approved that exception.
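The real enforcement was done declaratively through rules in the TFS work item type definition, but as a minimal sketch of the logic (the field and value names below are my assumptions for illustration, not the actual Orcas definitions), the check looked roughly like this:

```python
# Illustrative sketch only -- the real checks were declarative work item
# rules in TFS, not code. "Met", "Not Applicable", "Exempted" and the field
# names are assumed values for the example.

GATE_RESOLVED = {"Met", "Not Applicable", "Exempted"}

def can_mark_complete(feature):
    """Return (ok, reason) for whether a feature may move to Complete."""
    for gate, status in feature["quality_gates"].items():
        if status not in GATE_RESOLVED:
            return False, f"Quality gate '{gate}' is still '{status}'"
        if status == "Exempted" and not feature.get("exception_authorization"):
            return False, f"Gate '{gate}' is exempted but no approving executive is recorded"
    return True, "All quality gates resolved"

feature = {
    "quality_gates": {"Written Spec": "Met", "Code Coverage": "Exempted"},
    "exception_authorization": None,
}
print(can_mark_complete(feature))
# (False, "Gate 'Code Coverage' is exempted but no approving executive is recorded")
```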

All of this made the work item effectively an electronic sign-off document. When you marked a Quality Gate as met, that change was stored in the work item revision history under your name, effectively recording your signature.
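To make the "signature" point concrete, here is a stand-in for the kind of record the revision history keeps when a field changes. This is not the TFS object model, just an illustration of what gets captured:

```python
# Hypothetical stand-in for a revision-history entry; names are illustrative.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class FieldRevision:
    field: str            # e.g. "Quality Gate: Pseudo Loc"
    new_value: str        # e.g. "Met"
    changed_by: str       # who flipped the field -- the effective signature
    changed_at: datetime  # when they did it

history = [
    FieldRevision("Quality Gate: Pseudo Loc", "Met", "someone@example.com",
                  datetime(2007, 5, 1)),
]
```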

That raises the question: how did you keep people from cheating and just signing the gates off? The answer is, we really didn't. Do I think that every single completed feature actually met the full spirit of every quality gate when the gate was marked as met? No. Ensuring that would have required a great deal of effort and time to follow up in detail with every single feature crew. This was a trust-based system. We trusted people to do the right thing, and I think for the most part, they did.

Another safety net was that some of the quality gates were re-run on the main branch. For example, all localization testing (the "Pseudo Loc" quality gate above), performance testing, and automated testing were also run against the main branch on a regular basis. If someone checked in code that didn't meet those requirements, it showed up in the reports run against the main branch.
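A hedged sketch of that safety-net idea: periodically compare main-branch measurements against the gate thresholds and flag anything that slipped through. The gate names and numbers here are illustrative (the 70% coverage target comes from the goals above; the rest are made up):

```python
# Illustrative thresholds and field names -- not the actual Orcas checks.
MAIN_BRANCH_THRESHOLDS = {
    "code_coverage_pct": 70.0,    # minimum acceptable coverage from automation
    "perf_regression_pct": 0.0,   # maximum acceptable regression vs. the prior release
}

def check_main_branch(results):
    """Return the list of gate violations for a main-branch test run."""
    failures = []
    if results["code_coverage_pct"] < MAIN_BRANCH_THRESHOLDS["code_coverage_pct"]:
        failures.append("code coverage below 70%")
    if results["perf_regression_pct"] > MAIN_BRANCH_THRESHOLDS["perf_regression_pct"]:
        failures.append("performance regression over the baseline")
    return failures

print(check_main_branch({"code_coverage_pct": 68.4, "perf_regression_pct": 0.0}))
# ['code coverage below 70%']
```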

For tracking the progress of quality gates across several feature crews, we used this report:

[Image: the quality gate status report]

It was an Excel-based report that pulled in all the in-progress features and used Excel 2007's conditional formatting to display a yellow/green/red/black indicator for each quality gate (black meant not started).
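As a small model of the rollup behind that report: one row per in-progress feature, one color per quality gate. The real report queried TFS and let Excel's conditional formatting do the coloring; the status names and color mapping below are assumptions for illustration:

```python
# Assumed status names and color mapping, for illustration only.
STATUS_COLOR = {
    "Met": "green",
    "In Progress": "yellow",
    "Blocked": "red",
    "Not Started": "black",
}

def gate_rollup(features):
    """Map each feature's gate statuses onto the report's color indicators."""
    return {
        f["name"]: {gate: STATUS_COLOR.get(status, "black")
                    for gate, status in f["quality_gates"].items()}
        for f in features
    }

features = [
    {"name": "Feature A",
     "quality_gates": {"Written Spec": "Met", "Pseudo Loc": "Not Started"}},
]
print(gate_rollup(features))
# {'Feature A': {'Written Spec': 'green', 'Pseudo Loc': 'black'}}
```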

As with the other reports, each week our project manager would put the report up and ask questions like:

  • I noticed you are getting close to the end of your feature crew's schedule (the RI column in the report above), but you haven't started any of your quality gates. Care to explain?
  • You have marked some quality gates as Red. Why is that? Can we help?

So what we have here is not rocket science, nor is it mysterious: decide on the quality gates, make sure everyone is aware of them, and hold teams accountable for meeting them. However, without a system like TFS/Work Item Tracking to support the process, I do not see how we as an organization (or any organization, for that matter) could have succeeded at implementing something as broadly impacting and culture-changing as quality gates.

Wow, thanks for hanging in there to the end of this blog post! In the next post, I'll talk about some more reports we created to provide visibility from top to bottom.