Tom Hollander's blog

patterns, practices and pontification

The Joy of Code Reviews

As I mentioned in my recent post about how my team does agile, one of the core ingredients of our process is that nobody is allowed to check in without having gone through a code review and a test review. No other team that I’ve worked on previously has had such a rigorous process around code reviews – while we’ve had ad-hoc pair programming and occasional code walkthroughs, there were no rules about all code being reviewed before check-in. So when I first joined my new team at the SDC, I was unsure what to expect or if I’d like it. But as you might guess from the title of the post, I’ve become a convert.

First, let me go into a bit more detail about how the process works. A developer prepares a change set which normally consists of one completed requirement, or one or more bug fixes. Once they believe they are ready to check in, they will shout out “Code Review!”, at which time any other developer who can spare some time will volunteer to be the reviewer. In some cases the “reviewee” will seek out a specific “reviewer” if they know them to be best qualified in the relevant technology or component.

A code review will typically take around 15 minutes, though it may be considerably longer or shorter. It takes place at the reviewee’s computer (we normally have our entire team working in the same room; for a while we did have one developer in another location, in which case we mimicked the “same computer” approach by using desktop sharing and speakerphones). Normally there isn’t any need to walk through the functional requirements or the high-level design in any detail, since the entire team is involved in the planning sessions and generally knows the critical details. However, some code reviews may involve the whiteboard to communicate any design details needed to provide context for the code.

The review is performed by looking at the list of changed files in Visual Studio’s “Pending Changes” window, going through them one-by-one (sometimes from top to bottom, sometimes in whatever order the reviewee thinks is most logical), and performing a “Compare with Latest” operation on each file. Most of us have Beyond Compare or something similar installed to make this easier, but the Visual Studio text comparer works OK as well. We don’t have a specific checklist of things that need to be reviewed, but some typical areas for discussion include:

  • Quantity and quality of unit test coverage
  • Code readability, method and line length
  • Opportunities for reusing existing functionality, or merging new code with existing functionality
  • Naming conventions
  • Consistent application of patterns
  • Globalisation (appropriate use of culture classes, resource files etc.)
  • Hacks, shortcuts or other “smelly” code
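
To make one of these checklist items concrete, here is the kind of thing a reviewer might flag under the globalisation heading. This is an illustrative sketch in Java rather than the team’s actual .NET code, and the class and method names are hypothetical:

```java
import java.text.NumberFormat;
import java.util.Locale;

public class PriceFormatting {

    // The "smelly" version a reviewer would flag: it hard-codes the
    // currency symbol and the decimal separator, so it is wrong for
    // most locales.
    static String formatPriceHack(double price) {
        return "$" + price;
    }

    // Culture-aware version: the locale's own formatting rules supply
    // the currency symbol, grouping and decimal separators.
    static String formatPrice(double price, Locale locale) {
        return NumberFormat.getCurrencyInstance(locale).format(price);
    }

    public static void main(String[] args) {
        System.out.println(formatPriceHack(1234.5));        // $1234.5
        System.out.println(formatPrice(1234.5, Locale.US)); // $1,234.50
        // German output uses different separators and symbol placement.
        System.out.println(formatPrice(1234.5, Locale.GERMANY));
    }
}
```

The .NET equivalent would use CultureInfo and the “C” format specifier; the review point is the same either way: don’t build culture-sensitive strings by hand.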

If the reviewer is happy with everything in the change set, it’s ready for a test review (or if that happened first, it’s ready to be checked in). Alternatively, the reviewer can veto the check-in and insist that any uncovered issues are fixed first. In rare cases the reviewer may allow the check-in even if there are known issues, with TFS bugs or tasks created for tracking purposes. This option is most commonly used when the issues are minor and there are other developers waiting for the check-in before they can complete their own tasks.

So why did we choose to impose this additional workload across the development team? Well, it’s certainly not because the developers make a lot of mistakes or need close supervision – the team is as experienced and capable as any I’ve worked with. And in fact it is quite rare for a code reviewer to veto a check-in – I don’t have hard statistics, but it probably only happens 1 time out of 10. Nevertheless I think the process is extremely valuable for the reviewer, the reviewee and the quality of the codebase. First, each developer writes code with full knowledge that it will be scrutinised, so they take extra care to follow established patterns and avoid ugly hacks. Second, it helps “share the wealth” around who understands different parts of the solution. And finally it provides a very personal way for developers to learn from one another, whether it be a new Visual Studio shortcut key, a .NET API they didn’t know existed, or a new architecture, design or testing technique.

One more interesting observation about how this process works in my team: at our “retrospective” meetings that we run at the completion of each iteration, there have been a number of occasions where people have called out that it takes too long to check in code. However I’m not aware of any situations where anyone in the team has suggested abolishing (or even streamlining) the code review or test review processes to achieve this outcome. And having the support and confidence of the team is the ultimate measure of the success of any process.

  • I am a big fan of "code review". I have been pushing this along within my team as well. However, without having tried it myself: does your team feel that yelling out "code review" is much of an interruption to the whole team? For instance, it could break everyone's zone.

    My team uses VSTS shelvesets. The reviewee emails the shelveset name to a reviewer; the reviewer does the review when they take a break, and grabs the reviewee if any clarification is needed. But my team's approach has one big issue: the reviewee may be waiting for quite a while until the review is done, which is quite annoying sometimes.

    What do you think?

  • What about continuous integration? I would think that the efficiency of the process you describe could be improved if devs first checked into a CI branch where automated unit and functional tests were run. On success devs could then peer review merges to the main line of development and trigger a new round of automated testing.

  • Hal, you make an excellent point. What he is describing could definitely be automated at least partially through a continuous integration server.

  • Daniel - noise is not uncommon in our team room, and the odd shout is probably a lot less distracting than other things that happen (my singing, for example). Having someone review a shelveset on their own computer would have some value, but it doesn't address the goal of our code reviews. The developer-to-developer interaction is the most significant part of the way we do this.

    Hal/Bob - as per my earlier post on how we do agile, we do run a CI build, but this happens after the reviews are complete. The CI build is important but picks up a different class of issues to code reviews. Developers will only ask for a code review if they have high confidence that the CI build will succeed (i.e. all the code builds and tests pass on their own machine).


  • Code review is awesome (done in the right spirit anyway). Have you considered an online code review tool like Reviewboard, Codestriker, Atlassian Crucible or Smartbear Codecollaborator?

  • Alby

    I agree, the right spirit is key to the success of code reviews.  Being collocated is IMHO another key ingredient for optimizing the cost/benefit factor of doing the review.  I am currently working in a distributed team and this makes the review process a lot more painful and cumbersome.

  • Did you consider having everyone pair program all the time (sort of an ongoing code review)?  

    Have you noticed a difference in quality (fewer bugs) since you started doing the code reviews?  Have you noticed if it's taking longer to get things done?

    Just curious because I've been thinking about this kind of thing a lot lately and I think I'm ready to take the plunge.

  • Jon - The code review process I've described here was entrenched in the team's culture before I joined, so I can't offer a "before and after" comparison within the same team. I think the major benefit of the approach is to improve code quality and maintainability more than to reduce bugs, but there have definitely been cases where potential bugs have been picked up by code reviewers.

    We do use pair programming at times (primarily for complex tasks), but in my experience it isn't an effective use of development resources for more routine requirements.

  • I don't think code reviews can be automated using continuous integration, since the purpose of the code review is to have another set of eyes take a look at the code and see if the design can be improved, and/or to spread knowledge of the code. The only thing that does this "better" is pair programming IMHO.

    And regarding the shout-outs vs. having people review code using tools/shelvesets by themselves: it takes an extremely competent reviewer to actually ask those questions at a later time, but if the reviewer is sitting with the author they can more easily ask questions. In my experience face-to-face reviews result in more comments and discussion than if the reviewer sits alone. And more comments and more discussion are generally better, I think.
