Mike Swanson

June, 2004

  • Mike Swanson's Blog

    Automated Continuous Integration and the Ambient Orb™


    How often have you or one of your fellow developers checked code into your source control system that causes a build failure for someone else? And how many times have you heard that “it builds just fine on my machine?” More importantly, how much time does it take to backtrack through all of the changes to uncover the root cause of the failure? If the last good check-in was two days ago, it can be difficult to figure out not only what caused the failure, but whose check-in did it.

    Integration problems like this happen every day on software development projects. Back in 1996, Steve McConnell published an article in IEEE Software magazine about the benefits of performing a Daily Build and Smoke Test that addressed this specific issue. Since then, the concept of a daily build has become more accepted in the development community, but it still amazes me how many projects don’t take advantage of this simple, yet powerful technique.

    Well, if we agree that integrating and building a project once a day is good, are there any benefits to continually integrating and building many times a day? How about every 15 minutes?

    Continuous Integration

    Continuous integration is a technique that encourages developers to check in their code more frequently, have it compiled by an automated process, run a suite of unit tests, and report on the status. The idea is to tighten the feedback loop so that the effect of an integrated change is communicated back to the developer as soon as possible. By reducing the time between check-in and build status, developers find it much easier to identify faulty code.

    A continuous integration server is responsible for monitoring the source code repository for changes that are made during check-in. When a change is detected, the server automatically:

    • Performs a full check-out
    • Cleans the build output folder
    • Forces a complete rebuild of the entire project
    • Executes all unit tests (optional)
    • Reports on the build and unit testing status

    Of course, for this to work, all source code needs to be stored in a central location that is shared by the development team. Fortunately, source control tools like Visual Studio 2005 Team System, Visual SourceSafe, CVS, Perforce, Subversion, Rational ClearCase, SourceGear Vault, etc. are found on many projects and provide a single repository.

    For the continuous integration server to build the project, a build script needs to be created. NAnt is a popular and free Open Source project that is supported by both CruiseControl.NET and Draco.NET, two popular continuous integration systems for .NET development. Before configuring the continuous integration server, it is worth spending the time to create a functional build script. Although NAnt is not difficult to use, plan to invest a little time reading through the documentation before generating your first script.

    Last, although unit tests are optional, they are highly recommended. On the NxOpinion project, we have over 600 individual NUnit tests that run during every build. The success of this suite of tests tells us that a majority of the basic functionality provided by our system is working as expected. We consider the failure of a single unit test to be equal to a full build failure, and we work to immediately resolve any issues.

    With CruiseControl.NET, when a build or test failure is detected, an e-mail message is sent to the entire development team notifying them of recent check-ins along with the specific failure. When a check-in is successful, an e-mail is sent only to those developers who checked in code for that build. This way, everyone is always kept in the loop on build status. CruiseControl.NET also provides a web site and system tray application to monitor the status of the continuous integration server.
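    The notification rule described above is simple to express in code. Here's a hypothetical sketch of the logic (not CruiseControl.NET's actual implementation; the function and variable names are mine):

    ```python
    def notification_recipients(build_succeeded, committers, whole_team):
        """Select who receives the build-status e-mail.

        On a failure, the entire team hears about it; on a success,
        only the developers whose check-ins were part of that build.
        """
        if build_succeeded:
            return sorted(set(committers))
        return sorted(set(whole_team))

    # A failed build alerts everyone; a good build only its committers.
    team = ["alice", "bob", "carol", "dave"]
    checked_in = ["bob", "carol"]
    print(notification_recipients(False, checked_in, team))
    print(notification_recipients(True, checked_in, team))
    ```

    The asymmetry is deliberate: failures are everyone's problem, while successes only need to reassure the people who just checked in.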

    For our current project, we have configured the server to poll for changes every 15 minutes. If there have been no check-ins since the last build, the system sleeps for another 15 minutes. This cycle continues until new changes are detected. When this happens, the entire system is completely rebuilt. Because of the relatively short polling period, developers only have to wait a few minutes before learning the status of their most recent check-in.
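    The polling cycle described above amounts to a simple sleep/check/build loop. A minimal sketch (the callback names are stand-ins for whatever your integration server and repository actually provide):

    ```python
    import time

    def run_integration_loop(has_new_checkins, rebuild_everything, report_status,
                             poll_interval=15 * 60, max_cycles=None):
        """Poll the repository; when new check-ins are detected, force a
        complete rebuild and report the result, then sleep until the
        next poll. max_cycles is only for bounded test runs."""
        cycles = 0
        while max_cycles is None or cycles < max_cycles:
            if has_new_checkins():
                report_status(rebuild_everything())
            time.sleep(poll_interval)
            cycles += 1
    ```

    A real server like CruiseControl.NET wraps this loop with source-control integration, build-script execution, and e-mail reporting, but the 15-minute rhythm is exactly this.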

    CruiseControl.NET can also automate FxCop. FxCop is a code analysis tool that uses reflection and IL parsing to check managed assemblies for conformance to the Design Guidelines for Class Library Developers. Think of it as an "after-the-fact code review" that doesn't depend on the original source code. The current version of FxCop evaluates over 200 individual rules, and custom rules can be created. FxCop is very comprehensive, and it is very rare to receive a completely clean report. You may need to ignore rules that don't apply to your project. Although we run FxCop as part of our build, we do not currently have it configured to cause a build failure if any violations are found.

    For a very thorough look at continuous integration, I recommend reading Continuous Integration with CruiseControl.NET and Draco.NET by Justin Gehtland.

    The Ambient Orb™

    Webster defines ambient as an adjective that means “existing or present on all sides.” When used as a noun, it means “an encompassing atmosphere.” Ambient science is an evolving field that attempts to convey low-bandwidth information by embedding it into our surrounding environment. The idea is that certain information isn’t worthy of interruption, so instead, it should be available at a glance.

    Ambient Devices is a company that manufactures products that convey ambient information. The Ambient Orb is a $150 sphere of frosted glass that glows in a nearly unlimited variety of colors. The Orb is capable of some simple color “animations” like pulsing and an effect called “crescendo” that slowly brightens a color, then immediately dims. Ambient Devices also makes a Beacon that is similar in concept to the Orb, and a recently available Dashboard.

    The Orb is commonly sold as a device to monitor the stock market, local weather, or any of a number of other free channels of information. Fortunately, for the true geek, a premium account is available for around $20 a quarter that allows you to send your own custom information to the Orb. By calling a specific URL at Ambient’s web site, you can control both the color and animation of your own Orb.

    You might ask: “why do I need to call a URL to change the color of something that’s sitting on my desk?” Good question…this is where the geek magic steps in. You see, each Ambient device has wireless pager electronics built-in that allow it to receive signals over the wireless network (I’m not talking about 802.11x wireless…I’m talking about a wireless pager like the kind you’d wear on your belt to monitor those servers that like to go down in the middle of the night). This means that you only have to plug the Orb into a power outlet, register its device ID with Ambient Devices, and from that point on, the Orb will change colors and animations when its information changes. Pretty cool.

    The Orb is eminently touchable, and visitors always stop to feel its warm glow. It’s definitely a conversation piece. After they ask if you bought it at Target for $19.95, you can proceed to twist their brain by explaining how you’ve configured your system to send information over the Internet that the Orb wirelessly receives through the pager network. The look on their face is similar to the look you’d get if you said: “I’m an alien from Alpha Centauri and that’s my invisible spaceship parked out there on the lawn.” Priceless.

    Raising Build Visibility

    So I had this idea that we could configure an Ambient Orb to reflect the current status of our NxOpinion continuous integration build. A slowly pulsing green would mean that the build is currently okay, and a quickly pulsing red would indicate a build failure. I planned to put the Orb in the middle of our project team so that everyone would be aware of the build status. I hoped that by raising its visibility, everyone on the project team (including the customer) would be more aware of the project “health.”

    Now, when the build breaks and the Orb pulses red, it’s like a fire alarm around here. The first question out of everyone’s mouth is “who broke it?” After appropriate developer guilt has been piled on by the development team (all in good fun, of course), it’s usually a relatively trivial matter to discover and fix the problem. Because we continuously integrate our code and the automated build potentially runs every 15 minutes, determining what caused the failure is as simple as looking at what has been checked in since the last successful build. Fortunately, CruiseControl.NET includes this information (along with check-in comments) in its e-mail and web page summaries.

    To date, our solutions contain approximately 175,000 lines of C# code and over 600 unit tests. Since we consider the failure of a single unit test to be a failure of the entire build, if one test fails, the Orb pulses red. As you’d guess, CruiseControl.NET also includes unit test results in its e-mail and web page summaries, which makes it easy to identify the problem.

    Although we haven't done it on the NxOpinion project, it would be trivial to configure multiple Orbs to reflect the build health at any location within wireless pager range. You could have an Orb in every development office. And even one at home. Okay, maybe I'm going overboard with that last suggestion, but you know what I mean.


    To configure the automated process to send build status information to the Ambient Orb, we need to add some properties and targets to the NAnt build script that CruiseControl.NET uses. NAnt has two built-in properties that can be leveraged to execute a task on build success and failure. The properties are called nant.onsuccess and nant.onfailure, and they need to be set to point to valid target elements in the build file. In our case, we define targets called OnSuccess and OnFailure, although any valid names will work just fine.

    To send information to the Ambient Orb, query string parameters are passed to an Ambient JavaServer Page. The format of the request is as follows:

    http://myambient.com:8080/java/my_devices/submitdata.jsp?devID=...&color=...&anim=...&comment=...

    devID    The device ID (serial number) of your Ambient Orb
    color    A number representing the color (0-36)
    anim     A number representing the animation style (0-9)
    comment  A short comment that is logged at the Ambient web site

    For more information on the available colors, animation styles, and formatting requirements, see the Ambient Orb WDK documentation.
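    For illustration, here is how the request could be assembled from any script or program. This is a sketch in Python; the device ID is a placeholder, and the color/animation numbers are example values that should be taken from the Orb WDK tables:

    ```python
    from urllib.parse import urlencode

    # Base URL from the Ambient Orb documentation discussed in this article.
    ORB_URL = "http://myambient.com:8080/java/my_devices/submitdata.jsp"

    def orb_request(dev_id, color, anim, comment):
        """Build the query-string request that sets an Orb's color and animation."""
        if not 0 <= color <= 36:
            raise ValueError("color must be 0-36")
        if not 0 <= anim <= 9:
            raise ValueError("anim must be 0-9")
        return ORB_URL + "?" + urlencode(
            {"devID": dev_id, "color": color, "anim": anim, "comment": comment})

    # Fetching this URL (e.g. with urllib.request.urlopen) would update the Orb.
    print(orb_request("YOUR-DEVICE-ID", 0, 5, "Build failed"))
    ```

    The same request can of course be issued from a build script, which is exactly what the NAnt `<get>` task below does.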

    We use the NAnt <get> task to send our build status. The <get> task queries a URL and copies the response to a specified file. In our case, we copy the response to a file in the temporary folder, then immediately delete it with the next task. This isn’t our entire NAnt script, but it does contain enough for you to figure out how to incorporate this into your own process:

    <?xml version="1.0" ?>
    <project name="Example">

        <!-- Load environment variables -->
        <sysinfo />

        <!-- Define targets for build status -->
        <property name="nant.onsuccess" value="OnSuccess" />
        <property name="nant.onfailure" value="OnFailure" />

        <!-- Set Ambient Orb for successful build. Substitute your own
             device ID; the color/anim numbers here are example values
             taken from the WDK tables. -->
        <target name="OnSuccess" description="Build success">
            <get src="http://myambient.com:8080/java/my_devices/submitdata.jsp?devID=YOUR-DEVICE-ID&amp;color=12&amp;anim=2&amp;comment=Build+OK"
              dest="${sys.os.folder.temp}\delete.me" failonerror="false"/>
            <delete file="${sys.os.folder.temp}\delete.me" failonerror="false"/>
        </target>

        <!-- Set Ambient Orb for failed build -->
        <target name="OnFailure" description="Build failure">
            <get src="http://myambient.com:8080/java/my_devices/submitdata.jsp?devID=YOUR-DEVICE-ID&amp;color=0&amp;anim=5&amp;comment=Build+failed"
              dest="${sys.os.folder.temp}\delete.me" failonerror="false"/>
            <delete file="${sys.os.folder.temp}\delete.me" failonerror="false"/>
        </target>

    </project>

    We’ve been using CruiseControl.NET for automated continuous integration for the past year-and-a-half, and it has been a fantastic addition to the project. Although continuous integration is typically associated with the Agile development community, it is a technique that can provide major benefits to teams using any project methodology (even if the developers aren’t writing unit tests).

    It is rare that our build is in a failed status for more than 30 minutes, because CruiseControl.NET makes it so easy to determine what has changed since the last successful build. Plus, because our Ambient Orb is in a highly visible location within the project team, it is easy to see the health of our source code with just a quick glance. We can go home in the evening confident that our offshore team is working with a healthy build, and we have that same expectation when we come in to work the next morning.

    I firmly believe that once you’ve worked on a project with automated continuous integration, you won’t want to work on a project without it. To get started, download either CruiseControl.NET or Draco.NET. It might take a bit of effort to create your first build script, but once you have it up and running, you'll find that it requires relatively little care and feeding to maintain.

    Oh, and don’t forget to let me know about your experience!


    Daily Build and Smoke Test by Steve McConnell: http://www.stevemcconnell.com/bp04.htm
    Martin Fowler's introduction to Continuous Integration: http://www.martinfowler.com/articles/continuousIntegration.html
    Continuous Integration with CruiseControl.NET and Draco.NET by Justin Gehtland: http://www.theserverside.net/articles/showarticle.tss?id=ContinuousIntegration
    CruiseControl.NET: http://ccnet.thoughtworks.com/
    Draco.NET: http://draconet.sourceforge.net/
    NAnt: http://nant.sourceforge.net/
    NUnit: http://www.nunit.org/
    FxCop: http://www.gotdotnet.com/team/fxcop/
    Ambient Orb: http://www.ambientdevices.com/cat/orb/
    Ambient Orb WDK: http://www.ambientdevices.com/developer/OrbWDK.pdf


    Code Review and Complexity


    For the past year-and-a-half, I have helped manage the development team responsible for the NxOpinion diagnostic software. Although the methodology we're using for the project isn't 100% Agile, we have borrowed and blended a number of Agile tenets that have afforded us many benefits (Boehm & Turner's Balancing Agility and Discipline is a good book about effectively balancing traditional and Agile methodologies). We are using two techniques that aren't normally talked about when discussing Agile software development: formal code review and code metrics. A recent event prompted me to write this article about how we relate these two techniques on the NxOpinion project.

    Code Review

    One of the practices of eXtreme Programming (or "XP", an instance of Agile software development) is pair-programming, the concept that two people physically work side-by-side at a single computer. The idea is that by having two people work on the same logic, one can type the code while the other watches for errors and possible improvements. In a properly functioning XP pair, partners change frequently (although I've heard of many projects where "pair-programming" means two people are stuck together for the entire length of the project...definitely not XP's concept of pair-programming). Not only does this pairing directly influence code quality, but the constantly changing membership naturally has the effect of distributing project knowledge throughout the entire development team. The goal of pair-programming is not to make everyone an expert in all specialties, but the practice does teach everyone who the "go to" people are.

    Advocates of XP will often argue that pair-programming eliminates the need for formal code review because the code is reviewed as it is being written. Although I do believe that there is some truth to this, I think it also misses out on some key points. On the NxOpinion project, we have a set of documented coding standards (based on Microsoft's Design Guidelines for Class Library Developers) that we expect the development team to adhere to. Coding standards are part of the XP process, but in my experience, just because something is documented doesn't necessarily mean that it will be respected and followed. We use our formal code review process to help educate the team about our standards and help them gain a respect for why those standards exist. After a few meetings, this is something that can usually be automated through the use of tools, and having code pass a standards check before a review is scheduled is a good requirement. Of course, the primary reason we formally review code is to subjectively comment on other possible ways to accomplish the same functionality, simplify its logic, or identify candidates for refactoring.

    Because we write comprehensive unit tests, a lot of the time that we would traditionally spend reviewing proper functionality is no longer necessary. Instead, we focus on improving the functionality of code that has already been shown to work. Compared to a more traditional approach, we do not require all code to be formally reviewed before it is integrated into the system (frankly, XP's notion of collective code ownership would make this notion unrealistic). So, since we believe that there are benefits of a formal code review process, but we don't need to spend the time reviewing everything in the system, how do we decide what we formally review?

    There are two key areas that we focus on when choosing code for review:

    • Functionality that is important to the proper operation of the system (e.g. core frameworks, unique algorithms, performance-critical code, etc.).
    • Code that has a high complexity

    As an example, for the NxOpinion applications, most of our data types inherit from a base type that provides a lot of common functionality. Because of its placement in the hierarchy, it is important that our base type functions in a consistent, reliable, and expected manner. Likewise, the inference algorithms that drive the medical diagnostics must work properly and without error. These are two good examples of core functionality that is required for correct system operation. For other code, we rely on code complexity measurements.

    Code Complexity

    Every day at 5:00pm, an automated process checks out all current source code for the NxOpinion project and calculates its metrics. These metrics are stored as checkpoints that each represent a snapshot of the project at a given point in time. In addition to trending, we use the metrics to gauge our team productivity. They can also be used as a historical record to help improve future estimates. Related to the current discussion, we closely watch our maximum code complexity measurement.

    In 1976, Tom McCabe published a paper arguing that code complexity is defined by its control flow. Since that time, others have identified different ways of measuring complexity (e.g. data complexity, module coupling, algorithmic complexity, calls-to and called-by, etc.). Although these other methods are effective in the right context, it seems to be generally accepted that control flow is one of the most useful measurements of complexity, and high complexity scores have been shown to be a strong indicator of low reliability and frequent errors.

    The Cyclomatic Complexity computation that we use on the NxOpinion project is based on Tom McCabe's work and is defined in Steve McConnell's book, Code Complete on page 395 (a second edition of Steve's excellent book has just become available):

    • Start with 1 for the straight path through the routine
    • Add 1 for each of the following keywords or their equivalents: if, while, repeat, for, and, or
    • Add 1 for each case in a case statement

    So, if we have this C# example:

        while (nextPage != true)
        {
            if ((lineCount <= linesPerPage) && (status != Status.Cancelled) && (morePages == true))
            {
                // ...
            }
        }

    In the code above, we start with 1 for the routine, add 1 for the while, add 1 for the if, and add 1 for each && for a total calculated complexity of 5. Anything with a greater complexity than 10 or so is an excellent candidate for simplification and refactoring. Minimizing complexity is a great goal for writing high-quality, maintainable code.
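    The three counting rules above are easy to automate. Here's a rough sketch that applies them to C-family source code (a plain keyword count like this ignores comments and string literals, so treat it as an approximation, not a real metrics tool):

    ```python
    import re

    # Decision points from the Code Complete rules: if, while, for,
    # each case, and the boolean operators && and ||.
    DECISION_PATTERN = re.compile(r"\b(if|while|for|case)\b|&&|\|\|")

    def cyclomatic_complexity(source):
        """Start with 1 for the straight path, add 1 per decision point."""
        return 1 + len(DECISION_PATTERN.findall(source))

    sample = """
    while (nextPage != true)
    {
        if ((lineCount <= linesPerPage) && (status != Status.Cancelled) && (morePages == true))
        {
            // ...
        }
    }
    """
    print(cyclomatic_complexity(sample))  # 5: routine + while + if + two &&
    ```

    The result matches the hand count in the text: 1 for the routine, 1 for the while, 1 for the if, and 1 for each &&.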

    Some advantages of McCabe's Cyclomatic Complexity include:

    • It is very easy to compute, as illustrated in the example
    • Unlike other complexity measurements, it can be computed early in the development lifecycle (which makes it Agile-friendly)
    • It provides a good indicator of the ease of code maintenance
    • It can help focus testing efforts
    • It makes it easy to find complex code for formal review

    It is important to note that a high complexity score does not automatically mean that code is bad. However, it does highlight areas of the code that have the potential for error. The more complex a method is, the more likely it is to contain errors, and the more difficult it is to completely test.

    A Practical Example

    Recently, I was reviewing our NxOpinion code complexity measurements to determine what to include in an upcoming code review. Without divulging all of the details, the graph of our maximum complexity metric looked like this:

    [Graph: maximum cyclomatic complexity per daily checkpoint, with a large spike in the center]

    As you can plainly see, the "towering monolith" in the center of the graph represents a huge increase in complexity (it was this graph that inspired this article). Fortunately for our team, this is an abnormal occurrence, but it made it very easy for me to identify the code for our next formal review.

    Upon closer inspection, the culprit of this high measurement was a method that we use to parse mathematical expressions. Similar to other parsing code I've seen in the past, it was cluttered with a lot of conditional logic (ifs and cases). After a very productive code review meeting that produced many good suggestions, the original author of this method was able to re-approach the problem, simplify the design, and refactor a good portion of the logic. As represented in the graph, the complexity measurement for the parsing code decreased considerably. As a result, it was easier to test the expression feature, and we are much more comfortable about the maintenance and stability of its code.


    Hopefully, I've been able to illustrate that formal code review coupled with complexity measurements provides a very compelling technique for quality improvement, and it is something that can easily be adopted by an Agile team. So, what can you do to implement this technique for your project?

    1. Find a tool that computes code metrics (specifically complexity) for your language and toolset
    2. Schedule the tool so that it automatically runs and captures metrics every day
    3. Use the code complexity measurement to help identify candidates for formal code review
    4. Capture the results of the code review and monitor their follow-up (too many teams forget about the follow-up)
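    Steps 1 through 3 can be wired together with very little code. Here's a sketch of a daily capture job using the simple complexity count from this article; the file layout, threshold, and CSV format are my own assumptions, and a real metrics tool would parse the code per-method rather than per-file:

    ```python
    import csv
    import datetime
    import pathlib
    import re

    DECISION = re.compile(r"\b(if|while|for|case)\b|&&|\|\|")
    REVIEW_THRESHOLD = 10  # flag anything above ~10 for formal review

    def method_complexity(source):
        # Crude whole-file count; a stand-in for a per-method metrics tool.
        return 1 + len(DECISION.findall(source))

    def capture_checkpoint(source_dir, log_path):
        """Scan all C# files, append today's maximum complexity to a CSV
        checkpoint log, and return the files that exceed the threshold."""
        scores = {p: method_complexity(p.read_text())
                  for p in pathlib.Path(source_dir).rglob("*.cs")}
        worst = max(scores.values(), default=1)
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow([datetime.date.today().isoformat(), worst])
        return [str(p) for p, s in scores.items() if s > REVIEW_THRESHOLD]
    ```

    Scheduled once a day (Windows Task Scheduler, cron, or your build server), this produces exactly the kind of trend data and review candidates described above.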

    Good luck, and don't forget to let me know if this works for you and your team!


    Boehm, Barry and Turner, Richard. 2003. Balancing Agility and Discipline: A Guide for the Perplexed. Boston: Addison-Wesley.
    Extreme Programming. 2003 <http://www.extremeprogramming.org/>
    Fowler, Martin. 1999. Refactoring: Improving the Design of Existing Code. Boston: Addison-Wesley.
    McCabe, Tom. 1976. "A Complexity Measure." IEEE Transactions on Software Engineering, SE-2, no. 4 (December): 308-20.
    McConnell, Steve. 1993. Code Complete. Redmond: Microsoft Press.
    Martin, Robert C. 2002. Agile Software Development: Principles, Patterns, and Practices. Upper Saddle River, New Jersey: Prentice Hall.


    Mini Pac-Man with Sound


    As you may have read in my earlier post about mini-arcade machines, I did end up building the Pac-Man model. But, not satisfied with just a paper model, I decided I’d add some official Pac-Man sounds. Do you remember those greeting cards that record a few moments of sound and play it back when the card is opened? Well, I looked all over town for one of those, intending to tear it apart and use the audio chip. I couldn’t find one anywhere. Fortunately for me, Radio Shack came through again with their 9-Volt, 20-Second Recording Module (for only $10.49). I wasn’t thrilled about the idea of a relatively heavy 9-volt battery, but it actually makes the machine feel more authentic.

    As you’ll see in the photos, I had to use a heavier card-stock to carry the weight of the battery and endure the button presses on the front panel. I mounted the speaker behind the coin drawer on the front of the machine and the playback button where the joystick would have been in the real game. I thought about exposing the record button on the back panel, but I didn’t want people to accidentally record over the game sounds, so I left it dangling inside, just in case I need to re-record. I captured the coin “gulp” noise and the famous Pac-Man startup theme on the chip (thank you MAME). The speaker produces just enough volume to fit the size of the model.

    It took me more than a few hours to plan everything out and assemble the Pac-Man model, but it was an enjoyable experience that cements my geek status (as if I wasn’t there already). Fortunately, my wife thinks the little machine is “cute,” and she plans on staying married to me. What more can a guy ask for?

    Update: Quite a few people have asked for a video of its performance. Not wanting to disappoint, here you go.


    My Current Bookshelf


    People always ask me what I'm reading, so I figured I'd post a photo of the books that are currently on or above my desk. Some of these are older, but many of them are books I've either recently read or am currently referring to. This is a small subset of my book library. It would take more than a few blog postings to cover everything on my shelves. So, we'll start with this:

    The books above the shelf are interesting, but let's focus on the titles down below...they're the ones I reach for most often. Here are my quick impressions:

    The Pragmatic Programmer: An excellent book that discusses many core development practices including prototypes, domain languages, source control, debugging, exceptions, refactoring, requirements and specifications, testing, etc. Very easy to read, and one of my personal top 10.

    Professional Software Development: From a favorite author, Steve McConnell, a book about professionalizing software development. I originally thought this would be a book of practices, but it is more a discussion about the need for the software industry to mature and evolve to become a more rigorous profession (similar to doctors, engineers, lawyers, etc.). I like Steve's ideas, and I think the industry will eventually need to mature along these lines, but it wasn't what I expected...my own fault.

    Code Complete: Second Edition: Another McConnell title that is a follow-up to his classic original. This book just showed up at my doorstep, and if it's anywhere near as good as the first edition, this will remain near the top of my must-read list for any developer. Code Complete is an excellent and detailed look at practical software construction. It even has its own site.

    Extreme Programming Pocket Guide: A short, sweet, and concise guide to the core tenets of Extreme Programming. It's not detailed enough if you're new to Agile techniques, but it's a very handy reference if you are. It's by O'Reilly, so we expect nothing but the best.

    Agile Software Development: Principles, Patterns, and Practices: I'm about half-way through this book, and I would highly recommend it to anyone who wants a very good introduction to Agile development practices. Robert Martin (of Object Mentor, Inc.) is an excellent writer, and the text is a very easy read. He fills the book with a number of informative, real-world scenarios and lots of code for illustration. He also covers design patterns.

    Refactoring: Improving the Design of Existing Code: This classic book by Martin Fowler is the best introduction to the concept of refactoring. After the introduction, the book becomes a very useful reference that helps guide the reader through many, many useful refactorings. Definitely one to have on your own shelf.

    Design Patterns: Another classic book that should be on every developer's bookshelf. Written by the “Gang of Four,” this book introduced the concept of a design pattern and contains 23 real-world, common patterns that any developer at any level can benefit from. You'll probably recognize many of them, but you'll also appreciate the thought they've been given and reach for it as a reference as you're architecting your software. Not to be confused with Dating Design Patterns.

    Pattern-Oriented Software Architecture, Volume 1: A System of Patterns: Commonly referred to as POSA, this is another book of design patterns. The patterns aren't as common as those found in the Gang of Four book, but it is a great addition and handy to have on your shelf. Model-View-Controller and Blackboard are only two of the patterns that might be used more frequently.

    Patterns of Enterprise Application Architecture: Another Fowler book, this is a comprehensive look at design patterns that are useful for enterprise development. If you're working in an organization that deals with specialized applications, transactions, databases, etc., and you're interested in things like queries or object-relational mapping, this book is for you. If you write shrink-wrapped commercial software, many of the concepts will still apply, but that is not the book's primary focus.

    Design Patterns in C#: A recent purchase that I've only started to read. From a quick scan, it looks like a good C# version of the 23 patterns presented in the original Design Patterns by the Gang of Four. The jury is still out, but it looks promising.

    .NET Framework Standard Library, Annotated Reference: Volume 1: Brad Abrams' blog is a favorite of mine, and I'm always learning something new. He's mentioned this book a number of times, so I finally picked it up. I've only scanned it, but I really like the annotations that describe thoughts, facts, insight, and tips about the classes in the standard library. Not for the casual 9-to-5 developer, in my opinion.

    Domain-Driven Design: Tackling Complexity in the Heart of Software: This book rocks. It jumped immediately to my top 10 list, and I expect it to remain there for a while. Eric Evans is eminently readable and extremely practical. It is obvious that he has had many years of real, large-project experience, and the book does an excellent job describing how to design an object-oriented system by leveraging and successfully interacting with domain experts. It will probably appear mundane to an average developer, but this is a must-have resource for developers and architects who work with domain experts and want to realize their vision with an extensible and maintainable object model. Go buy it now. It also has its own site.

    Object Design: Roles, Responsibilities, and Collaborations: If you've never done real object-oriented design (and there are many people who think they're doing OO, but in fact, they're just putting procedural code inside of objects), this book is a great introduction. It covers candidate selection, stereotypes, responsibilities, collaboration, and development of a flexible model. Not a “top 10” book, but a good one nonetheless.

    Object Thinking: Based on its description, I was really looking forward to this book. And after reading a couple of chapters, I decided to give it a little more time. And I kept reading. And yes, there is some interesting history, and there are mildly interesting tangents into philosophy, but overall, Mr. West seems to want to listen to himself talk and drop names. He makes some good points about what it means to truly be an object in an object-oriented system, and there are kernels of good information sprinkled throughout the text. But, in the end, this book wasn't written for its audience. It was written for Mr. West. Read Domain-Driven Design instead (or Object Design for that matter).

    Test-Driven Development in Microsoft .NET: I haven't read this one yet, but it's coming up on my schedule (yes, I have to schedule these things). James Newkirk (of NUnit fame) is one of the authors, and I peeked at the chapter on testing with databases which looked pretty good. Otherwise, I can't say much until I've read it.

    Balancing Agility and Discipline: A Guide for the Perplexed: This book discusses both Agile software development techniques and more traditional methodologies. It then talks about successfully balancing different aspects of each for different project types and sizes. If you're in an organization that is interested in Agile development, but management isn't quite yet sold on the idea, this is a good book to read. It can either be used as evidence to support your case, or it can help you blend some Agile techniques into a traditional style which might make the transition easier for those who might be uncomfortable.

    Coder to Developer: Tools and Strategies for Delivering Your Software: A great book by Mike Gunderloy that I have reviewed elsewhere on my blog. It has its own site.

    Writing Secure Code: Second Edition: I'm sure you've heard about the Microsoft security push, and the second edition of Michael Howard's book contains a lot of lessons learned from our internal experiences. Many very good techniques are discussed, including security principles, threat modeling, buffer overruns, running with least privilege, defending against bad input, reviewing code for security, and many more. If you write code and you care about security (which should be every developer), I can almost guarantee you that this book will be very insightful. Highly recommended.

    I guess my “quick overviews” weren't so quick, but hopefully they were helpful. If anyone would like more insight into any of these books, leave a comment, and I'll try to fill in some details. If enough people ask, I'll write a short book review as a posting.



    Make Your Own Mini-Galaga


    Inspired by the Paper Arcade that I've posted about twice now, I decided to try my hand at creating one of my favorites: Galaga. I owe thanks to Namco, Arcade Art, and everyone who helped create the original Paper Arcade. This is my first attempt, and I hope it can be refined. Comments and suggestions are welcome.
