I’ve always claimed, in my patented homespun kind of way, that most computer users adhere to the “all I want to do is push a button” model of complexity management for software application development, which is to say that complexity is dismissed as an accidental attribute of the problem at hand, and that this accidental complexity is easy to winnow out of existence. I have also been known, in my patented homespun kind of way, to be mighty handy at fishing for bullheads with a bent pin, but that’s another story altogether. This “automatic” idea conjures up images of Peter Boyle donning a colander chapeau replete with bristling coiled wires, on a dark and stormy night. Not only is that one helluva spooky image, but the Mel Brooks-style and real-world result is always the definition of unintended and suboptimal consequences.


BTW, anyone who claims that this complexity thing can be managed automatically is certainly itching for a fight with Fred Brooks. For the full story, check out No Silver Bullet at http://www-inst.eecs.berkeley.edu/~maratb/readings/NoSilverBullet.html . Otherwise, take my word for it that my anecdotal experiences in the field, as well as the overwhelming preponderance of evidence, indicate that the best way to hose down a development project is to start adding personnel. As a corollary, automating a goofed-up process simply means that you end up with an automated, expensive, goofed-up process.


So, Ken, you say, what the heck does that have to do with anything that I really give a hoot about anyway? Well, says I, I’m about to claim that automation, selectively and judiciously applied to a software development process, can now yield tremendous productivity gains, and I didn’t want you to think that I had left any intellectual stone unturned in my pursuit of higher truth. But even more to the point, the ultimate value of using a tool to capture SDLC metrics is in the research circumscribed by and built into the product, and the automatic decision making that it enables. Imagine, if you will, perfmon.exe performance counters at the SDLC process level!


Visual Studio Team Foundation, the new repository added to the Visual Studio 2005 product line-up, has loads of interesting features targeting this issue of driving out accidental complexity from the development process. The big idea is that by adhering to a low overhead SDLC discipline, key project metrics can be automatically captured into the VSTF repository for incorporation into a robust decision support engine.


Yes, Ken, this is standard fol-dee-rol, clap-trap, and flap-doodle that we have all heard before. So what the heck is so new here? Well, I claim that there is much more to VSTF than you might perceive at first blush, so let me help you walk down the path of enlightenment.


By way of example:


Part 1

  1. A software test engineer on your project discovers a bug in the most recent build of the software, so they create a new work item of type Bug in the VSTF repository.
  2. A developer queries the work item repository and fields the bug for triage and remediation.
  3. The developer fixes the bug, checks the related source code into VSTF source control, AND associates the bug work item with the checked-in changeset. (Changeset = all tangible artifacts and meta-artifacts scoped to this check-in transaction to the repository.)
  4. The hourly build process picks up the source code changes for the fixed bug, compiles the binaries, and makes them available on a UNC file share for re-testing.
  5. The original software test engineer tests the bug fix and closes out the work item.
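The five steps above amount to a little state machine, with VSTF timestamping every transition behind your back. Here is a minimal sketch of that lifecycle in plain Python — every class and method name here is hypothetical, purely illustrative, and not the actual VSTF object model:

```python
from datetime import datetime

class WorkItem:
    """Toy stand-in for a VSTF work item; logs each state change with a timestamp."""
    def __init__(self, item_type, title):
        self.item_type = item_type
        self.title = title
        self.state = "Active"
        self.changeset = None
        # the "silent metrics capture": every transition is recorded automatically
        self.history = [("Active", datetime.now())]

    def _transition(self, new_state):
        self.state = new_state
        self.history.append((new_state, datetime.now()))

    def resolve(self, changeset_id):
        # step 3: the developer associates the fix's changeset with the work item
        self.changeset = changeset_id
        self._transition("Resolved")

    def close(self):
        # step 5: the tester verifies the fix and closes the work item
        self._transition("Closed")

# steps 1 through 5 in miniature (changeset id 4711 is made up)
bug = WorkItem("Bug", "Crash on empty input")  # tester files the bug
bug.resolve(changeset_id=4711)                 # developer checks in a fix
bug.close()                                    # tester verifies and closes
print([state for state, _ in bug.history])     # ['Active', 'Resolved', 'Closed']
```

The point of the toy is the `history` list: nobody in the story ever asked to capture metrics, yet a complete, timestamped audit trail falls out of simply working the bug.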


Part 2

Later, a project lead noodles over to the reports section of the VSTF project through Team Explorer in VS 2005. Noodling project lead then double-clicks on the report titled “Remaining Work”. This is a cumulative flow diagram, and it tells several important stories. (See David J. Anderson, Agile Management for Software Engineering.) This chart graphically displays aspects of the project status like work item accrual, testing backlogs, and shortcomings in iteration planning. (See the online process guidance for any VSTF project created using the MSF for Agile process template for more details.)
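Under the hood, a cumulative flow diagram is nothing more exotic than counts-by-state over time. As a hedged sketch (plain Python with made-up sample data, not the actual VSTF reporting engine), the raw series behind a “Remaining Work” chart reduces to:

```python
from collections import Counter

# hypothetical daily snapshots: work item id -> state on that day
snapshots = {
    "2005-10-01": {1: "Active", 2: "Active", 3: "Active"},
    "2005-10-02": {1: "Resolved", 2: "Active", 3: "Active", 4: "Active"},
    "2005-10-03": {1: "Closed", 2: "Resolved", 3: "Active", 4: "Active"},
}

def cumulative_flow(snapshots):
    """Count work items per state for each day -- the data series a
    cumulative flow diagram stacks into its colored bands."""
    return {day: dict(Counter(states.values()))
            for day, states in sorted(snapshots.items())}

flow = cumulative_flow(snapshots)
print(flow["2005-10-03"])  # {'Closed': 1, 'Resolved': 1, 'Active': 2}
```

Stack those per-day counts as bands and you can read off testing backlogs (a swelling Resolved band) and work item accrual (a rising Active band) at a glance, which is exactly what the project lead gets for free.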


So the point is this: the step-wise and typical process of triaging and fixing the bug in part 1 of this example resulted in metrics automatically pouring into the VSTF repository. In part 2, the real value of that silent metrics capture is realized in the form of a cumulative flow diagram that is recognized by top researchers in the field of software project management as a good indicator of project health. And all you did was use the work item tracking feature of VSTF.


Finally, the key to closing the feedback loop in VSTF is the set of charts created from distilled repository data. These charts are simply the tip of an incredibly deep iceberg of wisdom and decades of project management research. Make sure that your end goals for using VSTF are the project health indicators surfaced through the bundled charts and reports, and you’ll be hailed as an automatic hero.


Ken Garove