Last week I took a trip to Texas to visit my girlfriend.  It was a pretty nice way to unwind after shipping Beta2, but I am not really capable of 'relaxing' without focusing on some sort of work.  So I started to think about some of the ways we could improve our development process, in particular reducing the number of defects in our software. 

What began as a simple list of process improvements, such as a website to track changes in a file, quickly became a philosophical journey into the nature of software.  (Seriously.)

You see, when I was going to school I firmly believed that, with quality devs, it was possible to create zero-defect software.  Zero bugs, nada, zilch, none.  (This is touched upon in The Mythical Man-Month.)  But after spending some time in 'the real world' my dreams of software perfection came to an abrupt and violent end.

Consider this maxim:
If you can write a function without any bugs you can write a class without any bugs.  If you can write a class without any bugs you can write a code-file without any bugs.  If you can write a code-file without any bugs you can write an entire project without any bugs. 

Sounds simple enough, but most developers quickly learn the fallacy in that statement: the word 'bug' is far too vague.  Does it refer to a deviation from what the programmer intended?  Is it a deviation of the code from the spec?  Is it a difference from what the user expected the program to do?  Not only are there stats on the average number of 'defects' per 1,000 lines of code, but there are also several ways to view and interpret each of those 'defects'.
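As an aside, that defects-per-1,000-lines figure (often called defect density) is just simple arithmetic.  A quick sketch, with made-up numbers for illustration:

```python
def defect_density(defects, lines_of_code):
    """Return defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical project: 45 reported defects in 30,000 lines of code.
print(defect_density(45, 30000))  # 1.5 defects per KLOC
```

Of course, the whole point of this post is that what you count as a 'defect' changes that number dramatically.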

Any process change intended to curb the number of software defects is limited by the perspective from which it defines a defect.

Unit testing, for example, catches bugs from the developer's perspective.  QA departments, on the other hand, traditionally catch bugs from the user's perspective.

Obviously fewer bugs lead to better software, regardless of the 'types' of bugs.  However, I would argue that there needs to be a balance. 

If software is useful and intuitive to the user but grotesque under the hood, then is it truly good software?  What about a piece of code that works perfectly, but whose UI is hopelessly obtuse?

I figured I would share this quandary with you and welcome any of your insights.  Once I get a better handle on this issue I'll try to write a more formal definition of software quality in relation to perspective.

Until then, enjoy Beta2!