For the past year-and-a-half, I have helped manage the development team responsible for the NxOpinion diagnostic software. Although the methodology we're using for the project isn't 100% Agile, we have borrowed and blended a number of Agile tenets that have afforded us many benefits (Boehm & Turner's Balancing Agility and Discipline is a good book about effectively balancing traditional and Agile methodologies). We are using two techniques that aren't normally talked about when discussing Agile software development: formal code review and code metrics. A recent event prompted me to write this article about how we relate these two techniques on the NxOpinion project.
One of the practices of eXtreme Programming (or "XP", an instance of Agile software development) is pair-programming, the concept that two people physically work side-by-side at a single computer. The idea is that by having two people work on the same logic, one can type the code while the other watches for errors and possible improvements. In a properly functioning XP pair, partners change frequently (although I've heard of many projects where "pair-programming" means two people are stuck together for the entire length of the project...definitely not XP's concept of pair-programming). Not only does this pairing directly influence code quality, but the constantly changing membership naturally has the effect of distributing project knowledge throughout the entire development team. The goal of pair-programming is not to make everyone an expert in all specialties, but the practice does teach everyone who the "go to" people are.
Advocates of XP will often argue that pair-programming eliminates the need for formal code review because the code is reviewed as it is being written. Although I believe there is some truth to this, I think it misses some key points. On the NxOpinion project, we have a set of documented coding standards (based on Microsoft's Design Guidelines for Class Library Developers) that we expect the development team to adhere to. Coding standards are part of the XP process, but in my experience, just because something is documented doesn't necessarily mean that it will be respected and followed. We use our formal code review process to help educate the team about our standards and help them gain a respect for why those standards exist. After a few meetings, much of this standards checking can be automated with tools, and requiring code to pass an automated standards check before a review is scheduled is a good practice. Of course, the primary reason we formally review code is to subjectively comment on other possible ways to accomplish the same functionality, simplify its logic, or identify candidates for refactoring.
Because we write comprehensive unit tests, much of the time that we would traditionally spend reviewing for proper functionality is no longer necessary. Instead, we focus on improving code that has already been shown to work. Unlike a more traditional approach, we do not require all code to be formally reviewed before it is integrated into the system (frankly, XP's notion of collective code ownership would make such a requirement unrealistic). So, if we believe there are benefits to a formal code review process, but we don't need to spend the time reviewing everything in the system, how do we decide what we formally review?
There are two key areas that we focus on when choosing code for review: code that provides core functionality required for correct system operation, and code with high complexity measurements.
As an example, for the NxOpinion applications, most of our data types inherit from a base type that provides a lot of common functionality. Because of its placement in the hierarchy, it is important that our base type functions in a consistent, reliable, and expected manner. Likewise, the inference algorithms that drive the medical diagnostics must work properly and without error. These are two good examples of core functionality that is required for correct system operation. For other code, we rely on code complexity measurements.
Every day at 5:00pm, an automated process checks out all current source code for the NxOpinion project and calculates its metrics. These metrics are stored as checkpoints that each represent a snapshot of the project at a given point in time. In addition to trending, we use the metrics to gauge our team productivity. They can also be used as a historical record to help improve future estimates. Related to the current discussion, we closely watch our maximum code complexity measurement.
In 1976, Tom McCabe published a paper arguing that code complexity is defined by its control flow. Since that time, others have identified different ways of measuring complexity (e.g. data complexity, module coupling, algorithmic complexity, calls-to and called-by, etc.). Although these other methods are effective in the right context, it seems to be generally accepted that control flow is one of the most useful measurements of complexity, and high complexity scores have been shown to be a strong indicator of low reliability and frequent errors.
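For reference, McCabe's measure is defined over a routine's control-flow graph:

```latex
v(G) = E - N + 2P
```

where E is the number of edges, N is the number of nodes, and P is the number of connected components (1 for a single routine). For a routine with a single entry and exit, this works out to the number of binary decisions plus one, which is why the simple keyword-counting shortcut described next produces the same answer.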
The Cyclomatic Complexity computation that we use on the NxOpinion project is based on Tom McCabe's work and is defined in Steve McConnell's book, Code Complete on page 395 (a second edition of Steve's excellent book has just become available):

1. Start with 1 for the straight path through the routine.
2. Add 1 for each of the following keywords, or their equivalents: if, while, repeat, for, and, or.
3. Add 1 for each case in a case statement.
So, if we have this C# example:
while (nextPage != true)
{
    if ((lineCount <= linesPerPage) && (status != Status.Cancelled) && (morePages == true))
    {
        // ...process the current line...
    }
}
In the code above, we start with 1 for the routine, add 1 for the while, add 1 for the if, and add 1 for each && for a total calculated complexity of 5. Anything with a complexity greater than about 10 is an excellent candidate for simplification and refactoring. Minimizing complexity is a great goal for writing high-quality, maintainable code.
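If you're curious how a tool might automate this counting, here is a deliberately naive sketch. The ComplexityEstimator name and the regular expressions are mine, invented for illustration; a real metrics tool parses the code rather than pattern-matching the raw text.

```csharp
using System;
using System.Text.RegularExpressions;

// NOTE: ComplexityEstimator is a made-up name for this sketch. A real
// tool builds a parse tree; this approximation simply counts decision
// keywords and short-circuit operators in the source text.
public static class ComplexityEstimator
{
    public static int Estimate(string source)
    {
        // Start with 1 for the straight path through the routine.
        int score = 1;

        // Add 1 for each branching keyword.
        score += Regex.Matches(source, @"\b(if|while|for|foreach|case)\b").Count;

        // Add 1 for each short-circuit operator (&& or ||).
        score += Regex.Matches(source, @"&&|\|\|").Count;

        return score;
    }
}
```

Feeding it the snippet above produces 5, the same as the hand count.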
Some advantages of McCabe's Cyclomatic Complexity include: it is easy to compute, even by hand; it is largely independent of source language and formatting; and it gives a good indication of how difficult a routine will be to test, since each point of complexity represents another path that a thorough test suite needs to cover.
It is important to note that a high complexity score does not automatically mean that code is bad. However, it does highlight areas of the code that have the potential for error. The more complex a method is, the more likely it is to contain errors, and the more difficult it is to completely test.
Recently, I was reviewing our NxOpinion code complexity measurements to determine what to include in an upcoming code review. Without divulging all of the details, the graph of our maximum complexity metric looked like this:
As you can plainly see, the "towering monolith" in the center of the graph represents a huge increase in complexity (it was this graph that inspired this article). Fortunately for our team, this is an abnormal occurrence, but it made it very easy for me to identify the code for our next formal review.
Upon closer inspection, the culprit of this high measurement was a method that we use to parse mathematical expressions. Similar to other parsing code I've seen in the past, it was cluttered with a lot of conditional logic (ifs and cases). After a very productive code review meeting that produced many good suggestions, the original author of this method was able to re-approach the problem, simplify the design, and refactor a good portion of the logic. As represented in the graph, the complexity measurement for the parsing code decreased considerably. As a result, it was easier to test the expression feature, and we are much more comfortable about the maintenance and stability of its code.
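A common move in that kind of cleanup is extracting compound conditions into well-named predicate methods. Because Cyclomatic Complexity is computed per routine, splitting one dense method into several small ones lowers the maximum score even though the total amount of logic is unchanged. Here is a hypothetical sketch (the names are mine, not the actual NxOpinion parser code):

```csharp
using System;

// Hypothetical before/after sketch. With the compound condition inlined,
// Print() would score 1 (routine) + 1 (while) + 2 (&&) = 4 in a single
// method; after extracting it into CanPrintLine(), no individual routine
// scores higher than 3.
public class ReportPrinter
{
    public int LinesPrinted;
    private readonly int linesPerPage;
    private readonly bool cancelled;

    public ReportPrinter(int linesPerPage, bool cancelled)
    {
        this.linesPerPage = linesPerPage;
        this.cancelled = cancelled;
    }

    // Complexity 2: 1 for the routine + 1 for the while.
    public void Print(int totalLines)
    {
        while (CanPrintLine(totalLines))
        {
            LinesPrinted++;
        }
    }

    // Complexity 3: 1 for the routine + 1 for each &&.
    private bool CanPrintLine(int totalLines)
    {
        return (LinesPrinted < totalLines)
            && (LinesPrinted < linesPerPage)
            && !cancelled;
    }
}
```

The extracted predicate also gives the condition a name, which helps the next reviewer as much as it helps the metric.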
Hopefully, I've been able to illustrate that formal code review coupled with complexity measurements provides a very compelling technique for quality improvement, and it is something that can easily be adopted by an Agile team. So, what can you do to implement this technique for your project?
Good luck, and don't forget to let me know if this works for you and your team!
Boehm, Barry and Turner, Richard. 2003. Balancing Agility and Discipline: A Guide for the Perplexed. Boston: Addison-Wesley.
Extreme Programming. 2003. <http://www.extremeprogramming.org/>
Fowler, Martin. 1999. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley.
Martin, Robert C. 2002. Agile Software Development: Principles, Patterns, and Practices. Upper Saddle River, New Jersey: Prentice Hall.
McCabe, Tom. 1976. "A Complexity Measure." IEEE Transactions on Software Engineering, SE-2, no. 4 (December): 308-20.
McConnell, Steve. 1993. Code Complete. Redmond: Microsoft Press.
I've had fun with photography for many years, and I especially enjoy macro photography. Whenever I'm on a trip, I keep my eye out for interesting subjects and textures. You should see some of the strange looks I get when I'm standing about 6 inches from a wall taking photographs of stucco, wood, or bricks. I get even stranger looks when I spend time taking photographs of the floor. Anyway, I keep a folder on my computer full of macro shots that make good desktop wallpaper. Here are four "natural" shots of leaves and flowers that I thought you might enjoy. All images have been resized to 1280 x 1024, and they're around 275KB each.
As a point of interest, the first photograph (palm leaf...oops, Ravages points out that this is most likely a Banana leaf) was taken in front of Ernest Hemingway's home in Key West, Florida. You'll notice that it's the same image I've used for the header graphic of my blog.
Let me know if you'd like me to post more of these. I have quite a few.
I'm currently helping the Robertson Research Institute convert their large C# Visual Studio .NET 2003 applications to Visual Studio 2005 ("Whidbey") Beta 1. For the past year-and-a-half, we've used CruiseControl.NET (CC.NET), NAnt, NUnit, and FxCop to automate our build, unit testing, and static code analysis. Now that we're running under the .NET Framework 2.0, we'd like to leverage the MSBuild tool in our processes (unfortunately, the most recent version of NAnt doesn't understand the new file formats). Fortunately, the new formats can be directly consumed by MSBuild. So, I spent some time today trying to figure out how to get the latest versions of these tools to work together. Although I wasn't able to achieve a perfect solution, I did find a configuration that is workable until CC.NET adds native MSBuild support after considering a few different ideas.
Because I wanted a simple solution, I decided to try the commandLineBuilder. Additionally, since the CC.NET team is already planning MSBuild support, it should be an easy transition when the new feature finally arrives. So, the setup I'll describe uses NUnit 2.2, FxCop 1.30 for .NET 2.0, and the latest nightly build of CruiseControl.NET.
Before you do anything else, you should modify your NUnit 2.2 configuration file to prefer the .NET Framework 2.0. You'll need to do this for CC.NET to successfully run your unit tests. Open the nunit-console.exe.config file and find the following section:
<startup>
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v2.0.40607" />
    <supportedRuntime version="v1.0.3705" />
    <requiredRuntime version="v1.0.3705" />
</startup>
Then, move the version 2.0.40607 element to the front of the list and save:
<startup>
    <supportedRuntime version="v2.0.40607" />
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v1.0.3705" />
    <requiredRuntime version="v1.0.3705" />
</startup>
As a side note, if you plan to use the NUnit 2.2 GUI for manual testing, you'll have to make the same change to the nunit-gui.exe.config file.
MSBuild scripts can range from the very simple to the very complex. For this example, we'll stick with something easy. Whether you know it or not, the solution and project files that Visual Studio 2005 produces can be directly fed into MSBuild. So, for a solution called Bank.sln, I can build all of its projects by typing MSBuild Bank.sln. Although this works with solution files, they are not actually in the official MSBuild XML format. But, project files are. And the best part is that you can manually modify the project/MSBuild file and Visual Studio 2005 will respect your changes. Okay...I'm getting off-topic...let's get back on track...
If you don't want to run FxCop analysis as part of your CC.NET build, you do not need to create your own build file. However, if you'd like to include FxCop analysis, you can create your own build file in the same directory as your solution file. For example, I created the following Bank.msbuild file:
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Target Name="Build">

        <!-- Clean, then rebuild entire solution -->
        <MSBuild Projects="Bank.sln" Targets="Clean;Rebuild" />

        <!-- Run FxCop analysis -->
        <Exec Command="FxCopCmd.exe /project:C:\depot\Bank\Bank.FxCop /out:C:\depot\Bank\Bank.FxCop.xml"
              WorkingDirectory="C:\Program Files\Microsoft FxCop 1.30" />

    </Target>
</Project>
This file has a single target called Build. The first thing it does is call the MSBuild task on the Bank.sln file. The Targets attribute says that we'd like to execute the Clean target first (which deletes all of the old build output) and then execute the Rebuild target. I run Clean so that no old files from the last build are left sitting in the output folders. Otherwise, it can be confusing if your build fails but the unit tests still pass; that usually happens because an old unit test assembly is still hanging around from a prior successful build. Using Clean eliminates that problem.
Next, the Exec command executes FxCop. I have it run against an FxCop project file, but you can point it directly at an assembly if that works better for you. I've directed the output to an XML file that will later be merged with the build and NUnit output so it can be reported by CC.NET.
Now we need to tell CC.NET to call our MSBuild file (or, if you don't need to run FxCop, you can directly call the solution file). Although there are many sections in the ccnet.config file, these are the elements that we're interested in:
<build type="commandLineBuilder">
    <executable>C:\WINDOWS\Microsoft.NET\Framework\v2.0.40607\MSBuild.exe</executable>
    <baseDirectory>C:\depot\Bank</baseDirectory>
    <buildArgs>Bank.msbuild /p:Configuration=Debug</buildArgs>
    <buildTimeoutSeconds>60</buildTimeoutSeconds>
</build>

Notice that instead of using a build type of NAnt, we've specified the commandLineBuilder. This allows us to call anything we'd like at the command line. The baseDirectory points to the directory containing the Bank.msbuild file that we created earlier. The buildArgs specifies the name of the build file and any other command-line options we'd like to send to MSBuild. If you don't plan to run FxCop, you can simply call MSBuild on the solution file by replacing the buildArgs with the following line:
<buildArgs>Bank.sln /t:Clean;Rebuild /p:Configuration=Debug</buildArgs>
Since our MSBuild script doesn't get the latest source from the source repository, you'll want to let CC.NET do it by setting autoGetSource to true in your sourcecontrol tag.
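For example, with the Visual SourceSafe provider, autoGetSource is a child element of the sourcecontrol block. The paths and credentials below are placeholders, and element names vary from provider to provider, so check the CC.NET documentation for the one you use:

```xml
<sourcecontrol type="vss">
    <project>$/Bank</project>
    <username>builduser</username>
    <password>buildpassword</password>
    <autoGetSource>true</autoGetSource>
    <workingDirectory>C:\depot\Bank</workingDirectory>
</sourcecontrol>
```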
To run the NUnit tests, use the nunit task by specifying the path to the NUnit console application, and list the assemblies that you'd like to test. Because we're using CC.NET's nunit task, we don't need to worry about merging the test output (like we might normally have to do if we were using NAnt).
<tasks>
    <!-- No file merging necessary if using CC.NET's NUnit task -->
    <nunit>
        <path>C:\Program Files\NUnit 2.2\bin\nunit-console.exe</path>
        <assemblies>
            <assembly>C:\depot\Bank\BankTest\bin\debug\BankTest.dll</assembly>
        </assemblies>
    </nunit>

    <!-- Merge FxCop output -->
    <merge>
        <files>
            <file>c:\depot\Bank\*.FxCop.xml</file>
        </files>
    </merge>
</tasks>
Last, we do need to tell CC.NET to merge our FxCop output. This lets us see the FxCop detail on the CC.NET web site.
Although this isn't a perfect solution, it's a great temporary solution that doesn't take a lot of effort. By using this configuration, you'll be able to use MSBuild, run your NUnit 2.2 tests, and leverage the latest version of FxCop all under the control of CruiseControl.NET. The only downside is that—because CC.NET doesn’t know how to capture useful build output from the commandLineBuilder—you won't see your build warnings and errors in the CC.NET e-mail or on the web site. Although this sounds like a show-stopper, you can use the original log link on the web site, and it is trivial to quickly find the last failure.
If you're curious about MSBuild, you can find a three-part MSDN article here, here, and here. If you'd like to know more about CruiseControl.NET, continuous integration, and the Ambient Orb, check out my other article.
A few posts back, I mentioned that I installed Media Center 2005 on my old P4 1.8GHz computer. Since I've now fallen in love with it, I figured it was time to assimilate its remote control functionality into my 2-year-old Marantz RC9200 universal remote. As a side note, my wife was gracious enough to give me the RC9200 as a gift after I had been salivating over it for months. Although it's a couple years old, it has been a fantastic remote control. It is truly universal in that it can "learn" by recording other remote infra-red (IR) signals, it can broadcast radio frequency (RF) signals (to control your X10 room lighting, for example), and it has custom software that allows you to completely configure everything, including the look and feel of the screen. It's not for the faint-of-heart, but if you like programming, you have a bit of creativity, and you are a patient person, you can work wonders. Here are a few of my screens:
As usual, I plugged the RC9200 into my computer and started recording IR commands from the Media Center remote. The process went smoothly, and I was able to successfully record the codes for each key. I configured the macros I planned on using and hooked up my virtual touch-screen buttons to their appropriate IR counterparts. After downloading the new configuration to my universal remote, I carried it downstairs for its first test. Initially, it seemed as if everything was working just fine. However, after a few short moments, I quickly realized that something was amiss. The first press of my down arrow button worked properly, but it wouldn't accept a second press...that is, until I pressed something else first. And the other buttons all behaved in a similar fashion. Very strange.
So, I turned to the remote control experts at Remote Central and The Green Button. A casual search of their forums turned up a couple of posts (here and here) about similar behavior with other learning remotes. Turns out that the Media Center remote has two sets of codes that alternate with each button press (apparently using a bit flipping technique). This method is used so that a single key press isn't accidentally received twice by the computer and is referred to as debounce. From what I've been able to find on the internet, it seems that IR codes can inadvertently be received more than once by reflecting off surfaces or being interfered with by displays, lamps, etc. How interesting. To avoid this effect, the Media Center remote sends the first IR command for down arrow, and when the user presses the button again, it sends a second IR command for down arrow. If a different button is pressed in between these two presses, it doesn't matter, because clearly, it's not a key "bounce" in that scenario. It's interesting to note that this is exactly the behavior I was noticing.
There are a few suggested options to deal with this. First, you can follow every normal command with a "do nothing" command. Unfortunately, it's often difficult to identify a do nothing command on the remote. For example, if the clear command did nothing useful on the remote, you could conceptually program the down arrow functionality as: down arrow + clear. By doing this, you've sent a second real command to the receiver, and your next down arrow command will be considered a second press. Not pretty, but a functional hack. The second option is to literally duplicate the user interface panels and switch between them with each press of a key. Of course, you'd have two panels, each with their own set of IR codes. Although it sounds doable, it's definitely more work, and it sounds like even more of a hack. And I'm no fan of hacks.
The third option is to simply disable the debounce feature of Media Center and use a single set of IR codes. I don't know why this isn't exposed in the settings screens in Media Center, because it's something that anyone with a learning remote will run into. To disable the debounce feature, you need to modify a single registry key. Standard registry editing rules apply...make sure you create a backup, know what you're doing, etc., etc. The key is called EnableDebounce, and from what I've read in various posts, it's found in the following locations:
For Media Center 2004: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HidIr\Remotes\745a17a0-74d3-11d0-b6fe-00a0c90f57da
For Media Center 2005: HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\HidIr\Remotes\745a17a0-74d3-11d0-b6fe-00a0c90f57da
So, fire up RegEdit, navigate to the appropriate key, and change the EnableDebounce value from 1 (its default setting) to 0. Note that you'll have to reboot your system for this change to take effect. After this modification, my universal remote now works like a charm, and I've added one more remote to the remote control graveyard behind my big screen TV.
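If you'd rather not edit by hand, the same change can be captured in a .reg file that you merge by double-clicking it. This example uses the Media Center 2005 path from above; verify that the key exists on your machine (and back it up) before merging:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\HidIr\Remotes\745a17a0-74d3-11d0-b6fe-00a0c90f57da]
"EnableDebounce"=dword:00000000
```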
In response to my XAML By Hand? post a while back, I became curious about what it would take to export Avalon-friendly XAML from a tool like Adobe® Illustrator®. So, I downloaded the publicly available Illustrator SDK, and I’ve been spending some spare time in the evenings working on a plug-in.
Well, the plug-in is far enough along at this point to be relatively useful, so I’m releasing it to the public as a free download. The current version works with Adobe Illustrator CS and CS2 running on Windows. Note that this plug-in is not endorsed, warranted, or supported by Microsoft. It was created by me after hours, so use it at your own risk.
For most of the common scenarios, I think you’ll find that the plug-in works very well. However, there are limitations, and you can see some of them illustrated on the Eye Candy page.
Version 0.11 of the plug-in exports XAML that is compatible with Avalon Beta 1 RC, so you should be able to start producing some pretty cool stuff right away. If you do create something that others should see, please let me know, and upload it to the Channel 9 Sandbox.
Speaking of Channel 9, Robert Scoble talked to me about the development of the plug-in, and if you have 25 minutes to spare, you can watch the complete video interview. We talk about raster/vector artwork, how Avalon enables smooth workflow between a graphic designer and an application developer, and I show some demos of the exporter in action.
I’d certainly be very interested in any feedback, comments, or questions you may have.