In part I of this series, we looked at some of the benefits of the SDET discipline being fully plugged into the code inspection pipeline.  These benefits include outcomes such as pushing quality upstream and raising SDET awareness of the code being tested.  Plus, it’s fun!  :)

 

Now we will share the general process and approach that we take in one of our SDET Code Inspection virtual teams.  Please note that the format and style of the code inspection process described below may differ from what you are familiar with (especially if you have used a formal code review process in the past).  We participate in a type of lightweight group review, which takes bits and pieces from several formal inspection processes.  Also, please note that the approach being outlined below is for post-check-in inspections in which we are able to review code on a larger scale across multiple components (as opposed to pre-check-in inspections which tend to be narrower and focused more around particular isolated changes).

 

Ownership 

As often as possible, we push to have the SDET who owns the feature area also own and drive the end-to-end code inspection process for that area.  This helps in several ways: it reinforces ownership of the area, introduces more SDETs to the process, and gets the SDET owner thinking about questions like: 

  • What materials/specs/information should I provide to the participants of the review in order for them to gain an understanding of the feature that is being reviewed?   Am I able to distill the essence of the feature and describe this to the participants?
  • What are the most critical pieces of code to review?  Oftentimes, we must scale back the review in order to meet our maximum lines-of-code guidelines.  It is a worthwhile exercise for SDETs to make qualitative judgments about which code areas are riskier and which code requires more attention

 

Setting Up the Meeting

In general, the feature test owner will do the following prior to the review meeting:

1.       Decide what code will be reviewed

  • Usually 1500-2000 lines-of-code is optimal for a one-hour review meeting.  Anything more than that is counter-productive and is unlikely to be finished in a single meeting.  Depending on the size/complexity of the feature, smaller features can sometimes be reviewed completely in an hour, while larger ones may require multiple meetings (3, 4, 5+) to cover all of the critical code
  • Areas of code to review are decided upon at least 3-4 days in advance and are clearly stated (including file(s), line numbers, etc.) in the meeting invite.  To allow SDETs to work at their own pace, it is important to give them several days to complete the review
  • It is oftentimes useful for the code selection process to be a collaborative effort between the disciplines to figure out which are the most critical areas to review 

2.       Decide upon supporting materials

  • Additional materials can/should be provided in order to inform the reviewers about the feature(s) under review.  This can include functional/design/test specifications or a non-technical synopsis of what the code is trying to do.  It is amazing how useful this can be during the inspection process, as we have seen numerous bugs come out of reviews where the only thing wrong with the code was that it wasn’t doing what the spec said it should be!  :)
  • It is important that all reviewers provide feedback using the same process so that folks are on the same page (both literally and figuratively) during the actual review meeting.  In some cases, this means that physical printouts using a common tool are made for all members who will participate in the review.  In other cases, this means that an online tool will be leveraged to provide feedback.  Whatever the process is, it should be consistent for all reviewers and designated by the test owner.  Otherwise, you run into confusion over exactly what lines-of-code are being discussed at particular times during the review meeting. 

3.       Send out the meeting request

  • The meeting request goes out to all members of the code inspection virtual team as well as other members of the feature area’s crew
  • The meeting request includes the code to be reviewed as well as any supporting materials

 

Reviewing the Code 

In this post, I won’t dive into the mechanics of reviewing the code (this will be covered in its own future post).  In our inspection process, the actual heads-down inspection happens offline at the reviewer’s pace, and this happens before the code review meeting occurs.  Reasons for this approach include: 

  • This allows for different styles/paces of review to take place
  • This provides more time during the actual meeting to discuss the defects and other potential problems

 

The Meeting 

As mentioned above, in our process the meeting itself is not where the inspection takes place, but rather where the defects are tabulated and discussed.  In general this is accomplished during the meeting by moving through the code, page by page, and recording the defects and which participants found them.  Since one of our main goals in this process is education, in many cases we try to take the time to talk about particular patterns of defects found in the code and how these might be avoided altogether with a holistic approach.  We also spend some time talking about the design decisions that were made as well as interdependencies between features, as this is a common area where defects tend to crop up.

 

During the review meeting, the feature test owner is responsible for ensuring that: 

1.       The meeting keeps moving and does not stall over incidental discussions

2.       All defects/comments are recorded

3.       Any open issues or unclear areas are noted

 

The Aftermath 

After the meeting is completed, the feature test owner owns: 

1.       Opening bugs and/or work items to ensure that the issues are tracked to completion 

2.       Following up on any open issues uncovered during the review.  Sometimes particular areas of the code need more discussion and/or research before the issue is considered completely understood.  All issues marked for follow-up during the review should be driven to closure by the feature test owner

3.       Providing a summary of the review to the Code Inspection virtual team, including data about the area reviewed and the issues found

 

Tracking Statistics 

We keep running totals of various types of data from the review sessions.  In general, we look at the following statistics/metrics across multiple reviews and multiple areas:

  • Areas of code reviewed
  • Total lines-of-code reviewed per area
  • Number of issues found (total yield)
  • Issue efficiency: (# bugs resolved as fixed) / (total yield)
  • Special “High Impact” bucket bugs, such as security bug yield

 

At the end of particular milestones, we look at the overall statistics and may alter the approach based on the data.  In addition, we will frequently provide “post mortem” feedback to the feature team of particular areas if there are coding/design patterns that the code review team deems to be pervasive enough to warrant special attention.  For example, if memory leaks due to a particular coding style/convention are running rampant in a given area, it is helpful to provide feedback to the feature team about the ways in which these types of leaks could be addressed via a slight coding practice adjustment.  These suggestions can then be rolled up into development coding guidelines, and in some cases incorporated into static analysis tools. 

 

In a future post in this series, we will discuss some of the guidelines that we use during the review of the code in order to make for an efficient and “high-yield” inspection.  Thanks for reading!

 

-Liam Price, Microsoft