Mike Swanson

  • Mike Swanson's Blog

    SyncToy for Windows XP

    • 143 Comments

    If you're like me, you probably have thousands of digital photos and documents that you want to back up or copy to external media. In my case, I copy everything to an external 160GB XIMETA NetDisk for safekeeping. I have used the free version of Allway Sync in the past, and I've had very good results. However, we recently released a handy tool for Windows XP called SyncToy, and based on my few days of experience, it appears to do everything I need. Here are a few of its features:

    • Provides easy and flexible copying, moving, and synchronization of files in different directories
    • Manages multiple sets of directories at the same time
    • Can combine files from two folders in one case, and mimic renames and deletes in another
    • Keeps track of renames to files and will make sure those changes get carried over to the synchronized folder

    Configuring SyncToy is as easy as setting up one or more folder pairs and corresponding actions for each pair. For example, I might set up one pair to synchronize changes between two folders (which works both ways) and set up another pair to simply echo changes from one folder to another (echo is the action I use for backup purposes). If you want to get more specific, there are additional options that can be configured.

    If you'd like to know what operations SyncToy would perform on your folder pairs, you can run the convenient preview feature. The preview feature analyzes the folders, then tells you what it would do if it ran, but—most importantly—it doesn't actually make any of the changes. This is a great way to get comfortable with the tool before letting it loose on your precious files. And if you want to automatically process your folder pairs, there's even a topic in the help file (look up Schedule in the index) that explains how to schedule SyncToy to run on a periodic basis.

    Download SyncToy v1 Beta for Windows XP, or to learn more, grab the whitepaper titled Synchronizing Images and Files in Windows XP Using Microsoft SyncToy.

  • Mike Swanson's Blog

    MIX09 Keynote and Session Videos

    • 16 Comments

    Whew…what a show! Thanks to everyone who joined us in Las Vegas last week for our fourth MIX conference, MIX09. It was great to meet many of you in person and to associate Twitter aliases with real names. It’s awesome that someone can walk up and say, “Hi, I’m WoogyChuck,” and I actually know what that means!

    As we’ve done in prior years at both PDC and MIX, all keynotes and sessions were recorded and published within 24 hours by the talented Brian Keller. The first video format we publish is WMV, and the other formats show up as they’re encoded. By now, almost all of the videos in all of the formats have been published. The few that remain will be added over the coming days.

    Special thanks to Greg Duncan for taking our session data and publishing a simple list of links within days of MIX09. Based on Greg’s work, feedback from the #MIX09 tweets, direct e-mail, and many blog comments, our online team quickly implemented a dynamic list of all MIX09 session and keynote recordings. We’ve learned that you like this straightforward format, and we’ll make sure we add this to our list of features for PDC09 and MIX10. If you prefer to browse the videos by image, check out the thumbnail view.

    To download videos for offline viewing, you have a few options:

    • Visit the list of all MIX09 sessions, and download them individually.
    • Use your favorite RSS tool to download all of the videos in your format of choice (All, WMV, WMV High, MP4). Note that there are also many non-session videos and audio recordings that are available in WMA and MP3 format.
    • If you’re like me and prefer a command-line tool, download a recent build of cURL (1.2MB), and extract it to your folder-of-choice. Then, download MIX09Downloader.zip (1.27KB) and extract the MIX09Downloader.bat file to the same folder. From a command prompt, start MIX09Downloader by passing it one of the following parameters: WMV-HQ, WMV, MP4, Zune, PPTX. Then wait. :-) For files that aren’t available, cURL will download a file that is around 220 bytes in size (if you change the extension to .htm and open it, you’ll see that the file is simply an HTML “not found” error page). See the sketch after this list for one way to clean up those placeholder files.
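
    Those stray "not found" pages are easy to sweep up with a few lines of code. Here's a minimal C# sketch; the 1 KB size threshold is my own assumption (the error pages are only ~220 bytes), and it deliberately skips .bat files so it doesn't delete the downloader itself:

        using System;
        using System.IO;

        class CleanupPlaceholders
        {
            static void Main(string[] args)
            {
                // Folder that contains the downloaded MIX09 files (defaults to the current folder).
                string folder = args.Length > 0 ? args[0] : ".";

                // Assumption: anything smaller than ~1 KB is one of the ~220-byte "not found" pages.
                const long threshold = 1024;

                foreach (string path in Directory.GetFiles(folder))
                {
                    // Don't touch the batch files that live in the same folder.
                    if (Path.GetExtension(path).Equals(".bat", StringComparison.OrdinalIgnoreCase))
                        continue;

                    FileInfo file = new FileInfo(path);
                    if (file.Length < threshold)
                    {
                        Console.WriteLine("Deleting placeholder: " + file.Name);
                        file.Delete();
                    }
                }
            }
        }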

    Here’s how much disk space you need to plan for (~45.5GB in total):

    • WMV-HQ = ~22GB
    • WMV = ~10GB
    • MP4 = ~7GB
    • Zune = ~6GB
    • PPTX = ~530MB

    If you'd like to rename your downloaded files, I've created a MIX09 Renamer batch file (4.19KB) that will do it for you. Extract the MIX09Renamer.bat file to the folder that contains your downloaded files, and from a command prompt, type MIX09Renamer WMV to rename all of the .WMV files to their full session titles. By changing the parameter, you can also rename your PPTX and MP4 files. For example:

    B01M.wmv is renamed to B01M - Scaling a Rich Client to Half a Billion Users.wmv

    Last, but not least, we did record some of the workshop sessions. However, because we don’t always record them for publishing (often for contractual reasons), we’re working to determine which ones can be posted. Also, we’ve noticed some audio/video quality issues with some of them that we’re trying to fix. It’ll likely be a few days to a week before we know more, and I’d encourage you to keep your eyes on the MIX09 session list.

    Is there anything else that we’ve missed? I’d love to hear your feedback!

  • Mike Swanson's Blog

    Interviewing at Microsoft

    • 33 Comments

    I am frequently asked about the interview process at Microsoft, and although I’m usually more than happy to relate my individual story and provide some general tips, I can’t provide nearly the insight that two of our recruiters, Gretchen Ledgard and Zoë Goldring (both responsible for the JobsBlog), provide in this 20-minute Channel 9 video. You’ll hear them talk about dress code, pre-interview tips, the actual interview day, logic questions, coding questions, what we’re looking for in people, whether or not you need a degree, etc. This is the first of at least two video segments to be published, so expect a second part sometime soon. On a related note, Chris Sells maintains a page about Interviewing at Microsoft that is worth reading. And for those who haven’t heard my Microsoft interview story, read on…

    I always thought that I would either work for myself or for Microsoft. After being an independent consultant for many years, creating the industry’s first uninstall application and the first nationwide movie showtime web site, working at Donnelly Corporation for exactly a year (to the minute), and doing lots of other mildly interesting things, I decided that it was time to send in a resume. I knew that Microsoft received thousands of resumes each day (according to Gretchen, we now receive around 6,000 per day), so I had to do something that would show my passion for the company, demonstrate my “out of the box” thinking, and grab their attention.

    You know those life-sized celebrity cardboard cutouts that adorn the occasional geek office? I decided to build a cardboard cutout of me and send it along with my resume as the “model Microsoft employee.” To figure out how large it could be, I visited the FedEx office and asked for the maximum dimensions for something shipped next-day air. Although I don’t recall the exact numbers, it was something like 170 inches for combined length and girth. Not only did I want it to be as close to actual size as possible, but I wanted it to make a splash when it was delivered to the HR department in Redmond. After all, how many next-day air packages have you received that were much bigger than a standard letter?

    So, I had some professional photographs taken of me holding a mouse and keyboard and had one of them professionally printed and mounted on foam core. Using a cardboard celebrity cutout that I purchased as a template, I proceeded to remove the extra foam core and create the folding flaps that would allow it to stand on its own. Then, I created an advertising slick sheet to accompany my package, explaining the “model Microsoft employee” and how I was obviously a perfect fit.

    Of course, I still spent a lot of time polishing my resume, and it served as the “meat” of my job application. The cardboard cutout was simply a way to get noticed among the thousands of resumes that are received by the company each day. After sandwiching everything between two sides of a cardboard refrigerator box and carefully taping around the edges, I managed to squeeze it in my car and take it down to the local FedEx office. The FedEx employee that helped me was fascinated by the story behind the contents of my package and proceeded to measure the length and girth with a small chain he kept behind the counter. Boy, was it close. I had neglected to consider the thickness that all of that foam core and cardboard would add to the measurement, and I barely squeezed by. Whew!

    About two weeks of stomach churning passed before I finally received a letter from Microsoft in my mailbox. It was a personal note from the Vice President of Human Resources, and he was writing to say that in all of his years at Microsoft, he had never seen a resume quite like mine. He was impressed that I was able to make my job application stand out (no pun intended), and apparently, it was the talk of the whole department (he had it standing outside of his office). He went on to say that my resume would be added to the database and considered for all open positions (I hadn’t applied for a specific job code). He wished me the best of luck and hoped to see me as a future employee.

    I never heard anything else from Microsoft about that resume, and it didn’t end up getting me a job (surprise, surprise). However, I was proud of the fact that I had tried, and I cherished the letter that I received as a response. Unfortunately, I can’t seem to find that letter, or I’d post a copy of it right here. If I do manage to dig it up, I’ll be sure to update this post.

    So, there must be another interview story, right? I mean, I do work for Microsoft, so there has to be more! Of course there is, but it’s not quite as interesting as this one, and it has a much better outcome. The funny thing is, a lot of people have confused this story with the fact that I work for the company, so I often hear that “you’re the guy who sent the cardboard cutout to get the job, right?” That’s when I smile and proceed to tell my story.

    Update: I was able to find the photo I used to produce the cutout...I look pretty silly. I haven't found the letter yet, but I think I know where it is.

    Update #2: I found the letter! I'll scan it and add it to this post later tonight.

    Update #3: I've added the scanned letter below.

  • Mike Swanson's Blog

    Automated Continuous Integration and the Ambient Orb™

    • 43 Comments

    How often have you or one of your fellow developers checked code into your source control system that causes a build failure for someone else? And how many times have you heard that “it builds just fine on my machine?” More importantly, how much time does it take to backtrack through all of the changes to uncover the root cause of the failure? If the last good check-in was two days ago, it can be difficult to figure out not only what caused the failure, but whose check-in did it.

    Integration problems like this happen every day on software development projects. Back in 1996, Steve McConnell published an article in IEEE Software magazine about the benefits of performing a Daily Build and Smoke Test that addressed this specific issue. Since then, the concept of a daily build has become more accepted in the development community, but it still amazes me how many projects don’t take advantage of this simple, yet powerful technique.

    Well, if we agree that integrating and building a project once a day is good, are there any benefits to continually integrating and building many times a day? How about every 15 minutes?

    Continuous Integration

    Continuous integration is a technique that encourages developers to check in their code more frequently, have it compiled by an automated process, run a suite of unit tests, and report on the status. The idea is to tighten the feedback loop so that the effect of an integrated change is communicated back to the developer as soon as possible. By reducing the time between check-in and build status, developers find it much easier to identify faulty code.

    A continuous integration server is responsible for monitoring the source code repository for changes that are made during check-in. When a change is detected, the server automatically:

    • Performs a full check-out
    • Cleans the build output folder
    • Forces a complete rebuild of the entire project
    • Executes all unit tests (optional)
    • Reports on the build and unit testing status
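
    To make the cycle concrete, here's a rough C# sketch of the polling loop at the heart of a continuous integration server. The Repository, Build, TestRunner, and Notifier types are hypothetical placeholders (a real server like CruiseControl.NET provides all of this); the point is simply to show the check-for-changes, rebuild, test, and report rhythm:

        using System;
        using System.Threading;

        class ContinuousIntegrationLoop
        {
            static void Main()
            {
                TimeSpan pollInterval = TimeSpan.FromMinutes(15);
                DateTime lastBuild = DateTime.MinValue;

                while (true)
                {
                    // Has anything been checked in since the last build?
                    if (Repository.HasChangesSince(lastBuild))
                    {
                        // Full check-out, clean output folder, and complete rebuild.
                        Repository.CheckOutAll();
                        Build.Clean();
                        bool buildOk = Build.RebuildAll();

                        // Run the unit test suite (optional, but highly recommended).
                        bool testsOk = buildOk && TestRunner.RunAll();

                        // Report status back to the team as soon as possible.
                        Notifier.Report(buildOk && testsOk);
                        lastBuild = DateTime.Now;
                    }

                    Thread.Sleep(pollInterval);
                }
            }
        }

        // Hypothetical stubs so the sketch compiles; a real CI server supplies these pieces.
        static class Repository
        {
            public static bool HasChangesSince(DateTime since) { return false; }
            public static void CheckOutAll() { }
        }

        static class Build
        {
            public static void Clean() { }
            public static bool RebuildAll() { return true; }
        }

        static class TestRunner
        {
            public static bool RunAll() { return true; }
        }

        static class Notifier
        {
            public static void Report(bool success)
            {
                Console.WriteLine(success ? "Build and tests passed" : "Build FAILED");
            }
        }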

    Of course, for this to work, all source code needs to be stored in a central location that is shared by the development team. Fortunately, source control tools like Visual Studio 2005 Team System, Visual SourceSafe, CVS, Perforce, Subversion, Rational ClearCase, SourceGear Vault, etc. are found on many projects and provide a single repository.

    For the continuous integration server to build the project, a build script needs to be created. NAnt is a popular and free Open Source project that is supported by both CruiseControl.NET and Draco.NET, two popular continuous integration systems for .NET development. Before configuring the continuous integration server, it is worth spending the time to create a functional build script. Although NAnt is not difficult to use, plan to invest a little time reading through the documentation before generating your first script.

    Lastly, although unit tests are optional, they are highly recommended. On the NxOpinion project, we have over 600 individual NUnit tests that run during every build. The success of this suite of tests tells us that a majority of the basic functionality that is provided by our system is working as expected. We consider the failure of a single unit test to be equal to a full build failure, and we work to immediately resolve any issues.
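
    If you haven't worked with NUnit before, a test is just a class and method marked with attributes. Here's a minimal sketch; the Calculator type is hypothetical, but the [TestFixture]/[Test]/Assert pattern is standard NUnit:

        using NUnit.Framework;

        // Hypothetical class under test; substitute one of your own types.
        public class Calculator
        {
            public int Add(int a, int b)
            {
                return a + b;
            }
        }

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_ReturnsSumOfTwoNumbers()
            {
                Calculator calculator = new Calculator();
                Assert.AreEqual(5, calculator.Add(2, 3));
            }
        }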

    With CruiseControl.NET, when a build or test failure is detected, an e-mail message is sent to the entire development team notifying them of recent check-ins along with the specific failure. When a check-in is successful, an e-mail is sent only to those developers who checked in code for that build. This way, everyone is always kept in the loop on build status. CruiseControl.NET also provides a web site and system tray application to monitor the status of the continuous integration server.

    For our current project, we have configured the server to poll for changes every 15 minutes. If there have been no check-ins since the last build, the system sleeps for another 15 minutes. This cycle continues until new changes are detected. When this happens, the entire system is completely rebuilt. Because of the relatively short polling period, developers only have to wait a few minutes before learning the status of their most recent check-in.

    CruiseControl.NET can also automate FxCop. FxCop is a code analysis tool that uses reflection and IL parsing to check managed assemblies for conformance to the Design Guidelines for Class Library Developers. Think of it as an "after-the-fact code review" that doesn't depend on the original source code. The current version of FxCop evaluates over 200 individual rules, and custom rules can be created. FxCop is very comprehensive, and it is very rare to receive a completely clean report. You may need to ignore rules that don't apply to your project. Although we run FxCop as part of our build, we do not currently have it configured to cause a build failure if any violations are found.

    For a very thorough look at continuous integration, I recommend reading Continuous Integration with CruiseControl.NET and Draco.NET by Justin Gehtland.

    The Ambient Orb™

    Webster defines ambient as an adjective that means “existing or present on all sides.” When used as a noun, it means “an encompassing atmosphere.” Ambient science is an evolving field that attempts to convey low-bandwidth information by embedding it into our surrounding environment. The idea is that certain information isn’t worthy of interruption, so instead, it should be available at a glance.

    Ambient Devices is a company that manufactures products that convey ambient information. The Ambient Orb is a $150 sphere of frosted glass that glows a nearly unlimited variety of colors. The Orb is capable of some simple color “animations” like pulsing and an effect called “crescendo” that slowly brightens a color, then immediately dims. Ambient Devices also makes a Beacon that is similar in concept to the Orb, and a recently-available Dashboard.

    The Orb is commonly sold as a device to monitor the stock market, local weather, or any of a number of other free channels of information. Fortunately, for the true geek, a premium account is available for around $20 a quarter that allows you to send your own custom information to the Orb. By calling a specific URL at Ambient’s web site, you can control both the color and animation of your own Orb.

    You might ask: “why do I need to call a URL to change the color of something that’s sitting on my desk?” Good question…this is where the geek magic steps in. You see, each Ambient device has wireless pager electronics built-in that allow it to receive signals over the wireless network (I’m not talking about 802.11x wireless…I’m talking about a wireless pager like the kind you’d wear on your belt to monitor those servers that like to go down in the middle of the night). This means that you only have to plug the Orb into a power outlet, register its device ID with Ambient Devices, and from that point on, the Orb will change colors and animations when its information changes. Pretty cool.

    The Orb is eminently touchable, and visitors always stop to feel its warm glow. It’s definitely a conversation piece. After they ask if you bought it at Target for $19.95, you can proceed to twist their brain by explaining how you’ve configured your system to send information over the Internet that the Orb wirelessly receives through the pager network. The look on their face is similar to the look you’d get if you said: “I’m an alien from Alpha Centauri and that’s my invisible spaceship parked out there on the lawn.” Priceless.

    Raising Build Visibility

    So I had this idea that we could configure an Ambient Orb to reflect the current status of our NxOpinion continuous integration build. A slowly pulsing green would mean that the build is currently okay, and a quickly pulsing red would indicate a build failure. I planned to put the Orb in the middle of our project team so that everyone would be aware of the build status. I hoped that by raising its visibility, everyone on the project team (including the customer) would be more aware of the project “health.”

    Now, when the build breaks and the Orb pulses red, it’s like a fire alarm around here. The first question out of everyone’s mouth is “who broke it?” After appropriate developer guilt has been piled on by the development team (all in good fun, of course), it’s usually a relatively trivial matter to discover and fix the problem. Because we continuously integrate our code and the automated build potentially runs every 15 minutes, determining what caused the failure is as simple as looking at what has been checked in since the last successful build. Fortunately, CruiseControl.NET includes this information (along with check-in comments) in its e-mail and web page summaries.

    To date, our solutions contain approximately 175,000 lines of C# code and over 600 unit tests. Since we consider the failure of a single unit test to be a failure of the entire build, if one test fails, the Orb pulses red. As you’d guess, CruiseControl.NET also includes unit test results in its e-mail and web page summaries, which makes it easy to identify the problem.

    Although we haven't done it on the NxOpinion project, it would be trivial to configure multiple Orbs to reflect the build health at any location within wireless pager range. You could have an Orb in every development office. And even one at home. Okay, maybe I'm going overboard with that last suggestion, but you know what I mean.

    Configuration

    To configure the automated process to send build status information to the Ambient Orb, we need to add some properties and targets to the NAnt build script that CruiseControl.NET uses. NAnt has two built-in properties that can be leveraged to execute a task on build success and failure. The properties are called nant.onsuccess and nant.onfailure, and they need to be set to point to valid target elements in the build file. In our case, we define targets called OnSuccess and OnFailure, although any valid names will work just fine.

    To send information to the Ambient Orb, query string parameters are passed to an Ambient JavaServer Page. The format of the request is as follows:

        http://myambient.com:8080/java/my_devices/submitdata.jsp?
          devID=###-###-###&anim=#&color=#&comment=Comment+here

    Where:

    • devID: The device ID (serial number) of your Ambient Orb
    • color: A number representing the color (0-36)
    • anim: A number representing the animation style (0-9)
    • comment: A short comment that is logged at the Ambient web site


    For more information on the available colors, animation styles, and formatting requirements, see the Ambient Orb WDK documentation.
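
    If you'd like to test your Orb outside of the build process first, the same URL can be exercised from a few lines of C#. This is just a sketch; the device ID, color, and animation values are placeholders that you'd replace with your own:

        using System;
        using System.Net;

        class OrbTest
        {
            static void Main()
            {
                // Placeholder values; substitute your own device ID and the color/animation
                // codes from the Ambient Orb WDK documentation.
                string deviceId = "###-###-###";
                int color = 12;
                int animation = 4;

                string url = "http://myambient.com:8080/java/my_devices/submitdata.jsp" +
                    "?devID=" + deviceId +
                    "&anim=" + animation +
                    "&color=" + color +
                    "&comment=Orb+test";

                // Send the request; the response body isn't important.
                using (WebClient client = new WebClient())
                {
                    Console.WriteLine(client.DownloadString(url));
                }
            }
        }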

    We use the NAnt <get> task to send our build status. The <get> task queries a URL and copies the response to a specified file. In our case, we copy the response to a file in the temporary folder, then immediately delete it with the next task. This isn’t our entire NAnt script, but it does contain enough for you to figure out how to incorporate this into your own process:

    <?xml version="1.0" ?>
    <project name="Example">

        <!-- Load environment variables -->
        <sysinfo />

        <!-- Define targets for build status -->
        <property name="nant.onsuccess" value="OnSuccess" />
        <property name="nant.onfailure" value="OnFailure" />

        <!-- Set Ambient Orb for successful build -->
        <target name="OnSuccess" description="Build success">
            <get src="http://myambient.com:8080/java/my_devices/submitdata.jsp?devID=###-###-###&amp;anim=4&amp;color=12&amp;comment=Build+success"
                 dest="${sys.os.folder.temp}\delete.me" failonerror="false" />
            <delete file="${sys.os.folder.temp}\delete.me" failonerror="false" />
        </target>

        <!-- Set Ambient Orb for failed build -->
        <target name="OnFailure" description="Build failure">
            <get src="http://myambient.com:8080/java/my_devices/submitdata.jsp?devID=###-###-###&amp;anim=6&amp;color=0&amp;comment=Build+failure"
                 dest="${sys.os.folder.temp}\delete.me" failonerror="false" />
            <delete file="${sys.os.folder.temp}\delete.me" failonerror="false" />
        </target>
    </project>

    Conclusion

    We’ve been using CruiseControl.NET for automated continuous integration for the past year-and-a-half, and it has been a fantastic addition to the project. Although continuous integration is typically associated with the Agile development community, it is a technique that can provide major benefits to teams using any project methodology (even if the developers aren’t writing unit tests).

    It is rare that our build is in a failed status for more than 30 minutes, because CruiseControl.NET makes it so easy to determine what has changed since the last successful build. Plus, because our Ambient Orb is in a highly visible location within the project team, it is easy to see the health of our source code with just a quick glance. We can go home in the evening confident that our offshore team is working with a healthy build, and we have that same expectation when we come in to work the next morning.

    I firmly believe that once you’ve worked on a project with automated continuous integration, you won’t want to work on a project without it. To get started, download either CruiseControl.NET or Draco.NET. It might take a bit of effort to create your first build script, but once you have it up and running, you'll find that it requires relatively little care and feeding to maintain.

    Oh, and don’t forget to let me know about your experience!

    Resources

    Daily Build and Smoke Test by Steve McConnell: http://www.stevemcconnell.com/bp04.htm
    Martin Fowler's introduction to Continuous Integration: http://www.martinfowler.com/articles/continuousIntegration.html
    Continuous Integration with CruiseControl.NET and Draco.NET by Justin Gehtland: http://www.theserverside.net/articles/showarticle.tss?id=ContinuousIntegration
    CruiseControl.NET: http://ccnet.thoughtworks.com/
    Draco.NET: http://draconet.sourceforge.net/
    NAnt: http://nant.sourceforge.net/
    NUnit: http://www.nunit.org/
    FxCop: http://www.gotdotnet.com/team/fxcop/
    Ambient Orb: http://www.ambientdevices.com/cat/orb/
    Ambient Orb WDK: http://www.ambientdevices.com/developer/OrbWDK.pdf

  • Mike Swanson's Blog

    Code Review and Complexity

    • 35 Comments

    For the past year-and-a-half, I have helped manage the development team responsible for the NxOpinion diagnostic software. Although the methodology we're using for the project isn't 100% Agile, we have borrowed and blended a number of Agile tenets that have afforded us many benefits (Boehm & Turner's Balancing Agility and Discipline is a good book about effectively balancing traditional and Agile methodologies). We are using two techniques that aren't normally talked about when discussing Agile software development: formal code review and code metrics. A recent event prompted me to write this article about how we relate these two techniques on the NxOpinion project.

    Code Review

    One of the practices of eXtreme Programming (or "XP", an instance of Agile software development) is pair-programming, the concept that two people physically work side-by-side at a single computer. The idea is that by having two people work on the same logic, one can type the code while the other watches for errors and possible improvements. In a properly functioning XP pair, partners change frequently (although I've heard of many projects where "pair-programming" means two people are stuck together for the entire length of the project...definitely not XP's concept of pair-programming). Not only does this pairing directly influence code quality, but the constantly changing membership naturally has the effect of distributing project knowledge throughout the entire development team. The goal of pair-programming is not to make everyone an expert in all specialties, but the practice does teach everyone who the "go to" people are.

    Advocates of XP will often argue that pair-programming eliminates the need for formal code review because the code is reviewed as it is being written. Although I do believe that there is some truth to this, I think it also misses out on some key points. On the NxOpinion project, we have a set of documented coding standards (based on Microsoft's Design Guidelines for Class Library Developers) that we expect the development team to adhere to. Coding standards are part of the XP process, but in my experience, just because something is documented doesn't necessarily mean that it will be respected and followed. We use our formal code review process to help educate the team about our standards and help them gain a respect for why those standards exist. After a few meetings, this is something that can usually be automated through the use of tools, and having code pass a standards check before a review is scheduled is a good requirement. Of course, the primary reason we formally review code is to subjectively comment on other possible ways to accomplish the same functionality, simplify its logic, or identify candidates for refactoring.

    Because we write comprehensive unit tests, a lot of the time that we would traditionally spend reviewing proper functionality is no longer necessary. Instead, we focus on improving the functionality of code that has already been shown to work. Compared to a more traditional approach, we do not require all code to be formally reviewed before it is integrated into the system (frankly, XP's notion of collective code ownership would make such a requirement unrealistic). So, since we believe that there are benefits to a formal code review process, but we don't need to spend the time reviewing everything in the system, how do we decide what we formally review?

    There are two key areas that we focus on when choosing code for review:

    • Functionality that is important to the proper operation of the system (e.g. core frameworks, unique algorithms, performance-critical code, etc.).
    • Code that has a high complexity

    As an example, for the NxOpinion applications, most of our data types inherit from a base type that provides a lot of common functionality. Because of its placement in the hierarchy, it is important that our base type functions in a consistent, reliable, and expected manner. Likewise, the inference algorithms that drive the medical diagnostics must work properly and without error. These are two good examples of core functionality that is required for correct system operation. For other code, we rely on code complexity measurements.

    Code Complexity

    Every day at 5:00pm, an automated process checks out all current source code for the NxOpinion project and calculates its metrics. These metrics are stored as checkpoints that each represent a snapshot of the project at a given point in time. In addition to trending, we use the metrics to gauge our team productivity. They can also be used as a historical record to help improve future estimates. Related to the current discussion, we closely watch our maximum code complexity measurement.

    In 1976, Tom McCabe published a paper arguing that code complexity is defined by its control flow. Since that time, others have identified different ways of measuring complexity (e.g. data complexity, module coupling, algorithmic complexity, calls-to and called-by, etc.). Although these other methods are effective in the right context, it seems to be generally accepted that control flow is one of the most useful measurements of complexity, and high complexity scores have been shown to be a strong indicator of low reliability and frequent errors.

    The Cyclomatic Complexity computation that we use on the NxOpinion project is based on Tom McCabe's work and is defined in Steve McConnell's book, Code Complete on page 395 (a second edition of Steve's excellent book has just become available):

    • Start with 1 for the straight path through the routine
    • Add 1 for each of the following keywords or their equivalents: if, while, repeat, for, and, or
    • Add 1 for each case in a case statement

    So, if we have this C# example:

        while (nextPage != true)
        {
            if ((lineCount <= linesPerPage) && (status != Status.Cancelled) && (morePages == true))
            {
                // ...
            }
        }

    In the code above, we start with 1 for the routine, add 1 for the while, add 1 for the if, and add 1 for each && for a total calculated complexity of 5. Anything with a greater complexity than 10 or so is an excellent candidate for simplification and refactoring. Minimizing complexity is a great goal for writing high-quality, maintainable code.
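
    If you'd like to experiment with the calculation, here's a rough C# sketch that approximates the keyword-counting rules above by scanning a source file with regular expressions. It's deliberately simplistic (it ignores comments, string literals, and per-method boundaries), so treat it as an illustration rather than a real metrics tool:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class ComplexityEstimate
        {
            static void Main(string[] args)
            {
                // Read the C# source file passed on the command line.
                string source = File.ReadAllText(args[0]);

                // Start with 1 for the straight path through the routine.
                int complexity = 1;

                // Add 1 for each decision keyword...
                complexity += Regex.Matches(source, @"\b(if|while|for|foreach|case)\b").Count;

                // ...and 1 for each && or || (the "and" and "or" equivalents in C#).
                complexity += Regex.Matches(source, @"&&|\|\|").Count;

                Console.WriteLine("Approximate cyclomatic complexity: " + complexity);
            }
        }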

    Some advantages of McCabe's Cyclomatic Complexity include:

    • It is very easy to compute, as illustrated in the example
    • Unlike other complexity measurements, it can be computed immediately in the development lifecycle (which makes it Agile-friendly)
    • It provides a good indicator of the ease of code maintenance
    • It can help focus testing efforts
    • It makes it easy to find complex code for formal review

    It is important to note that a high complexity score does not automatically mean that code is bad. However, it does highlight areas of the code that have the potential for error. The more complex a method is, the more likely it is to contain errors, and the more difficult it is to completely test.

    A Practical Example

    Recently, I was reviewing our NxOpinion code complexity measurements to determine what to include in an upcoming code review. Without divulging all of the details, the graph of our maximum complexity metric looked like this:

    As you can plainly see, the "towering monolith" in the center of the graph represents a huge increase in complexity (it was this graph that inspired this article). Fortunately for our team, this is an abnormal occurrence, but it made it very easy for me to identify the code for our next formal review.

    Upon closer inspection, the culprit of this high measurement was a method that we use to parse mathematical expressions. Similar to other parsing code I've seen in the past, it was cluttered with a lot of conditional logic (ifs and cases). After a very productive code review meeting that produced many good suggestions, the original author of this method was able to re-approach the problem, simplify the design, and refactor a good portion of the logic. As represented in the graph, the complexity measurement for the parsing code decreased considerably. As a result, it was easier to test the expression feature, and we are much more comfortable about the maintenance and stability of its code.

    Conclusion

    Hopefully, I've been able to illustrate that formal code review coupled with complexity measurements provides a very compelling technique for quality improvement, and it is something that can easily be adopted by an Agile team. So, what can you do to implement this technique for your project?

    1. Find a tool that computes code metrics (specifically complexity) for your language and toolset
    2. Schedule the tool so that it automatically runs and captures metrics every day
    3. Use the code complexity measurement to help identify candidates for formal code review
    4. Capture the results of the code review and monitor their follow-up (too many teams forget about the follow-up)

    Good luck, and don't forget to let me know if this works for you and your team!

    References

    Boehm, Barry and Turner, Richard. 2003. Balancing Agility and Discipline: A Guide for the Perplexed. Boston: Addison-Wesley.
    Extreme Programming. 2003 <http://www.extremeprogramming.org/>
    Fowler, Martin. 1999. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley.
    McCabe, Tom. 1976. "A Complexity Measure." IEEE Transactions on Software Engineering, SE-2, no. 4 (December): 308-20.
    McConnell, Steve. 1993. Code Complete. Redmond: Microsoft Press.
    Martin, Robert C. 2002. Agile Software Development: Principles, Patterns, and Practices. Upper Saddle River, New Jersey: Prentice Hall.
