When working with customers who are struggling to improve the quality of their code, it almost always comes to light that they conduct code reviews irregularly, if at all. Code reviews are an important and often overlooked step towards meeting the needs of medium to large development teams working on complex applications. This fairly simple step in the development process can help improve the readability, maintainability, and scalability of software, as well as help detect defects and problems much earlier in the software life-cycle.
So how do you conduct a code review? Well, the answer can differ widely based upon internal practices and the nature of the code that is being reviewed, but the fundamental idea is that you should have more than one pair of eyes look at your code before submitting a change or adding new functionality. That sounds simple enough, but I'll do my best to highlight some common practices that can help make your code reviews more effective and productive.
1. Use SCM Tools to Track Changes
Effective code reviews are typically coupled with the use of some form of software configuration management (SCM) to identify and track changes to the source code and track the status of the project. Source code control tools such as Subversion, Perforce, and ClearCase (just to name a few) make it possible to store previous revisions and perform side-by-side comparisons to identify exactly what has been modified. In addition to preserving and protecting prior revisions, this makes it easy to isolate reviews to the new sections of code. When new bugs are detected (possibly during regression testing), it also makes it easy to compare with the last known working version to see what changes may have introduced a behavioral change. For tracking changes between VI revisions, LabVIEW provides a graphical differencing feature that serves the same purpose.
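LabVIEW VIs are compared with graphical differencing, but the underlying idea, isolating a review to exactly what changed between two revisions, is easy to sketch for text-based source. The following is a minimal illustration using Python's standard-library `difflib`; the file name, revision labels, and code content are all hypothetical, standing in for what an SCM tool would retrieve from its repository.

```python
import difflib

# Two hypothetical revisions of the same source file, as an SCM tool
# might retrieve them from its repository.
old_revision = """def scale(values, factor):
    return [v * factor for v in values]
""".splitlines(keepends=True)

new_revision = """def scale(values, factor):
    # Guard added in a later revision (hypothetical change under review)
    if factor == 0:
        raise ValueError("factor must be nonzero")
    return [v * factor for v in values]
""".splitlines(keepends=True)

# unified_diff isolates exactly what changed between the two revisions,
# so the review can focus on the new lines only.
diff = list(difflib.unified_diff(old_revision, new_revision,
                                 fromfile="scale.py@r41",
                                 tofile="scale.py@r42"))
print("".join(diff))
```

Running this prints only the three added lines (plus context), which is precisely the scope a reviewer needs to examine.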
2. Perform Reviews Often
Most structured development environments have well-defined milestones and different phases of the project prior to release of a finished product. There is typically a checklist of activities and reviews that must be completed before moving between phases (e.g., from Alpha to Beta). However, the required minimum number of code reviews (if you have one) should not be your only reason for conducting a review. Any major change or addition to an existing application is reason enough to get a colleague to look at your code. Some groups even require that someone sign off on a change before it can be checked into source code control. While the timing of a review is largely a matter of preference, frequent reviews help ensure that reviewers are familiar with the code and can help minimize the amount of time required to explain complex modifications.
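A sign-off-before-check-in policy can be enforced mechanically. As a rough sketch (not an NI-prescribed workflow), suppose a team adopts the convention that every commit message must carry a `Reviewed-by:` trailer; a pre-check-in hook could then reject changes that lack one. The names and message format below are purely illustrative.

```python
import re

def has_reviewer_signoff(commit_message: str) -> bool:
    """Return True if the commit message carries at least one
    'Reviewed-by: Name <email>' trailer (hypothetical team convention)."""
    pattern = re.compile(r"^Reviewed-by:\s*\S.*$", re.MULTILINE)
    return bool(pattern.search(commit_message))

# A pre-check-in hook could call this and reject the change when
# no sign-off is present.
message = (
    "Add retry logic to the acquisition loop\n"
    "\n"
    "Reviewed-by: Jane Doe <jane.doe@example.com>\n"
)
print(has_reviewer_signoff(message))  # prints True for this message
```

The check is deliberately lightweight: it cannot verify that a meaningful review happened, only that someone took responsibility for it, which is usually the point of the policy.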
3. Have the Developer 'Walk-Through' the Application
One mistake to be avoided is throwing code 'over-the-wall' to a peer to get their approval, as this does not encourage a deep level of exploration of the code and does not guarantee that everyone has the same level of familiarity with it. The most effective reviews are led by the primary developer, who is responsible for walking through the logic of the application and explaining the design decisions behind the implementation. The role of a peer is to probe the design and make sure questions like the following can be answered to their satisfaction:
Is the code easy to maintain?
What happens if the code returns an error?
Is too much functionality located in a single VI?
Are there any race conditions?
Is the memory usage within acceptable limits?
What is the test plan for this code?
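The race-condition question deserves special attention: in LabVIEW, races typically arise when parallel loops write to the same shared data. The same hazard, and the kind of fix a reviewer should expect to see, can be sketched in Python with two threads incrementing a shared counter. This is an illustrative analogy, not LabVIEW-specific guidance.

```python
import threading

# Shared state written by two parallel workers -- the textual analogue
# of two LabVIEW loops writing the same global variable.
counter = 0
lock = threading.Lock()

def increment(times: int) -> None:
    global counter
    for _ in range(times):
        # Without the lock, this read-modify-write could interleave with
        # the other thread and lose updates -- exactly the race condition
        # a reviewer should ask about.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- deterministic only because of the lock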
4. Select Qualified Reviewers
Instead of treating reviews as red tape, see them as an opportunity for developers to showcase their skill in front of peers who can appreciate and understand the craft of engineering software. Even if it isn't the most exciting event in your day, you can make the most of the review by selecting well-qualified and experienced peers who (ideally) are already familiar with the project. The worst mistake is selecting under-qualified developers who do not know what to look for or what questions to ask. Generally, for large-scale development, National Instruments recommends having at least one Certified LabVIEW Developer or Certified LabVIEW Architect participate in a code review.
5. Define and Enforce Style Guidelines
There are a lot of LabVIEW style guides and recommendations available throughout the community, but ultimately, many development teams choose their own standards and guidelines. The most important practice for ensuring long-term readability and maintainability (in other words, when you find a bug in two years, can a new developer step in and fix it?) is to pick a style and stick to it. One of the most helpful tools for enforcing these practices is discussed next.
6. Perform Static Code Analysis (Automated Code Reviews) Before a Peer Review
The NI LabVIEW VI Analyzer Toolkit was designed to automate the review of coding practices and coding styles. This static code analysis tool lets you configure over 70 tests to enforce custom style guidelines, and the reports it generates can serve as a valuable starting point for code reviews, especially on large, complex applications. Examples of the tests include:
VI Documentation—Checks for text in the VI description, control description, and/or tip strip fields on all controls.
Clipped Text—Checks that any visible text on the front panel is not cut off. This includes text in control labels, control captions, free labels, and text controls such as strings and paths. The test cannot check the text inside listboxes, tables, tree controls, and tab controls.
Error Style—Checks that error connections appear in the lower-left and lower-right corners of the connector pane. This part of the test runs only if the connector pane wires a single error in and a single error out terminal. The test also checks whether an error case appears around the contents of the block diagram.
Spell Check—Spell checks VIs, front panels, and block diagrams.
One of my colleagues once made the statement, "the best code reviews are the ones that actually get done." This is absolutely true, but the more regularly and rigorously you conduct structured reviews, the more you mitigate the risk of bugs and help ensure the longevity of the software.
Please share your thoughts, suggestions and questions on the topic of code reviews below. What are your best practices and what are the lessons you've learned over the years?