In part I of this series, we looked at some of the benefits of having the SDET discipline fully plugged into the code inspection pipeline. These benefits include outcomes such as pushing quality upstream and raising SDET awareness of the code being tested. Plus, it's fun!
Now we will share the general process and approach that we take in one of our SDET Code Inspection virtual teams. Please note that the format and style of the code inspection process described below may differ from what you are familiar with (especially if you have used a formal code review process in the past): we participate in a lightweight group review that borrows bits and pieces from several formal inspection processes. Note also that the approach outlined below is for post-check-in inspections, in which we are able to review code on a larger scale across multiple components (as opposed to pre-check-in inspections, which tend to be narrower and focused on particular isolated changes).
As often as possible, we push to have the SDET who owns the feature area also own and drive the end-to-end code inspection process for that area. This helps in several ways: it reinforces ownership of the area, introduces more SDETs to the process, and gets the SDET owner thinking about questions like:
Setting Up the Meeting
In general, the feature test owner will do the following prior to the review meeting:
1. Decide what code will be reviewed
2. Decide upon supporting materials
3. Send out the meeting request
Reviewing the Code
In this post, I won’t dive into the mechanics of reviewing the code (this will be covered in its own future post). In our inspection process, the actual heads-down inspection happens offline at the reviewer’s pace, and this happens before the code review meeting occurs. Reasons for this approach include:
As mentioned above, in our process the meeting itself is not where the inspection takes place, but rather where the defects are tabulated and discussed. In general this is accomplished during the meeting by moving through the code, page by page, and recording the defects and which participants found them. Since one of our main goals in this process is education, in many cases we try to take the time to talk about particular patterns of defects found in the code and how these might be avoided altogether with a holistic approach. We also spend some time talking about the design decisions that were made as well as interdependencies between features, as this is a common area where defects tend to crop up.
During the review meeting, the feature test owner is responsible for ensuring that:
1. The meeting keeps moving and does not stall over incidental discussions
2. All defects/comments are recorded
3. Any open issues or unclear areas are noted
After the meeting is completed, the feature test owner is responsible for:
1. Opening bugs and/or work items to ensure that the issues are tracked to completion
2. Following up on any open issues uncovered during the review. Sometimes particular areas of the code need more discussion and/or research before the issue is considered completely understood. All issues marked for follow-up during the review should be driven to closure by the feature test owner.
3. Providing a summary of the review to the Code Inspection virtual team, including data about the area reviewed and the issues found
We keep running totals of various types of data from the review sessions. In general, we track a set of statistics/metrics across multiple reviews and multiple areas.
At the end of particular milestones, we look at the overall statistics and may alter the approach based on the data. In addition, we frequently provide post-mortem feedback to the feature team for particular areas when the code review team deems a coding/design pattern pervasive enough to warrant special attention. For example, if memory leaks due to a particular coding style/convention are running rampant in a given area, it is helpful to give the feature team feedback on how a slight adjustment to coding practice could address those leaks. Such suggestions can then be rolled up into development coding guidelines and, in some cases, incorporated into static analysis tools.
In a future post in this series, we will discuss some of the guidelines that we use during the review of the code in order to make for an efficient and “high-yield” inspection. Thanks for reading!
-Liam Price, Microsoft