LabVIEW Development Best Practices Blog


Improving Code Quality and Performing Code Reviews

Active Participant

When working with customers who are struggling to improve the quality of their code, it almost always comes to light that they've been failing to conduct code reviews on a regular basis, if at all. Code reviews are an important and often overlooked step towards meeting the needs of medium to large development teams working on complex applications.  This fairly simple step in the development process can help improve readability, maintainability and scalability of software, as well as help detect defects and problems much earlier in the software life-cycle.

So how do you conduct a code review?  Well, the answer can differ widely based upon internal practices and the nature of the code that is being reviewed, but the fundamental idea is that you should have more than one pair of eyes look at your code before submitting a change or adding new functionality.  That sounds simple enough, but I'll do my best to highlight some common practices that can help make your code reviews more effective and productive.

1. Use SCM Tools to Track Changes

Effective code reviews are typically coupled with the use of some form of software configuration management (SCM) to identify and track changes to the source code and track the status of the project. Source code control tools such as Subversion, Perforce and ClearCase (just to name a few) make it possible to store previous revisions and perform side-by-side comparisons to identify exactly what has been modified.  In addition to preserving and protecting prior revisions, this makes it easy to isolate reviews to the new sections of code.  When new bugs are detected (possibly during regression testing), it also makes it easy to compare with the last known working version to see what changes may have introduced a behavioral change.  For information on how to use the graphical differencing feature in LabVIEW to help track changes, click here.
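LabVIEW VIs are compared with the graphical differencing tool linked above, but the underlying idea, isolating the review to exactly what changed between revisions, can be sketched for text-based project assets (build specs, configuration files) using Python's standard difflib. The file names and contents below are hypothetical:

```python
# Sketch: compare two revisions of a text asset so the review covers only
# what changed, analogous to the graphical diff used for VIs.
# Revision names and file contents are hypothetical.
import difflib

old_revision = [
    "timeout_ms = 1000",
    "retries = 3",
]
new_revision = [
    "timeout_ms = 5000",   # changed since the last reviewed revision
    "retries = 3",
]

# A unified diff isolates the review to exactly what was modified.
diff = list(difflib.unified_diff(old_revision, new_revision,
                                 fromfile="r41/config.ini",
                                 tofile="r42/config.ini",
                                 lineterm=""))
for line in diff:
    print(line)
```

Only the changed `timeout_ms` line appears with `-`/`+` markers; unchanged lines carry no marker, which is what lets a reviewer skip them.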

2. Perform Reviews Often

Most structured development environments have well-defined milestones and phases prior to the release of a finished product.  There is typically a checklist of activities and reviews that must be completed before moving between phases (e.g., from Alpha to Beta).  However, the required minimum number of code reviews (if you have one) should not be your only reason for conducting a review.  Any major change or addition to an existing application is reason enough to get a colleague to look at your code.  Some groups even require that someone sign off on a change before it can be checked into source code control. While the timing of a review is largely a matter of preference, frequent reviews help ensure that reviewers are familiar with the code and can help minimize the amount of time required to explain complex modifications.

3. Have the Developer 'Walk-Through' the Application

One mistake to avoid is throwing code 'over the wall' to a peer for approval, as this does not encourage deep exploration of the code and does not guarantee that everyone has the same level of familiarity with it.  The most effective reviews are led by the primary developer, who is responsible for walking through the logic of the application and explaining the design decisions behind the implementation.  The role of a peer is to probe the design and ensure that questions like the following can be answered to their satisfaction:

  • Is the code easy to maintain?
  • What happens if the code returns an error?
  • Is too much functionality located in a single VI?
  • Are there any race conditions?
  • Is the memory usage within acceptable limits?
  • What is the test plan for this code?

4. Select Qualified Reviewers

Rather than treating them as red tape, developers should see reviews as an opportunity to showcase their skill in front of peers who can appreciate and understand the craft of engineering software. Even if it isn't the most exciting event in your day, you can make the most of the review by selecting well-qualified and experienced peers who (ideally) are already familiar with the project.  The worst mistake is selecting under-qualified reviewers who do not know what to look for or what questions to ask.  Generally, for large-scale development, National Instruments recommends having at least one Certified LabVIEW Developer or Certified LabVIEW Architect participate in a code review.

5. Define and Enforce Style Guidelines

There are a lot of LabVIEW style guides and recommendations throughout the community, but ultimately many development teams choose their own standards and guidelines.  The most important practice for ensuring long-term readability and maintainability (in other words, when you find a bug in two years, can a new developer step in and fix it?) is to pick a style and stick to it.  One of the most helpful tools for enforcing these practices is discussed next.

6. Perform Static Code Analysis (Automated Code Reviews) Before a Peer Review

The NI LabVIEW VI Analyzer Toolkit was designed to automate the review of coding practices and styles.  This static code analysis tool includes over 70 configurable tests that you can use to enforce custom style guidelines, and the reports it generates can serve as a valuable starting point for code reviews, especially on large, complex applications.  Examples of the tests include:

  • VI Documentation—Checks for text in the VI description, control description, and/or tip strip fields on all controls.
  • Clipped Text—Checks that any visible text on the front panel is not cut off. This includes text in control labels, control captions, free labels, and text controls such as strings and paths. The test cannot check the text inside listboxes, tables, tree controls, and tab controls.
  • Error Style—Checks that error connections appear in the lower-left and lower-right corners of the connector pane. This part of the test runs only if the connector pane wires a single error in and a single error out terminal. The test also checks whether an error case appears around the contents of the block diagram.
  • Spell Check—Spell checks VIs, front panels, and block diagrams.

A full list of tests can be found here.
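VI Analyzer tests themselves are written in LabVIEW, so they cannot be shown as text here, but the flavor of a simple automated style check can be illustrated in Python. The naming rule and file names below are hypothetical and are not actual VI Analyzer behavior:

```python
# Illustrative sketch only: a file-name style check in the spirit of an
# automated code review test. The rule (no unsaved default names, .vi
# extension required) and the example project listing are hypothetical.
import re

def check_vi_names(filenames):
    """Flag files whose names look like unsaved defaults or lack a .vi extension."""
    failures = []
    default_name = re.compile(r"^Untitled \d+\.vi$", re.IGNORECASE)
    for name in filenames:
        if not name.lower().endswith(".vi"):
            failures.append((name, "missing .vi extension"))
        elif default_name.match(name):
            failures.append((name, "default (unsaved) VI name"))
    return failures

project = ["Acquire Data.vi", "Untitled 3.vi", "notes.txt"]
for name, reason in check_vi_names(project):
    print(f"FAIL: {name} - {reason}")
```

The value of a check like this is the same as the real toolkit's: it runs the same way every time, so the human reviewers can spend their attention on design and logic instead of style policing.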

One of my colleagues once remarked, "The best code reviews are the ones that actually get done."  This is absolutely true, but the more you can perform regular, structured reviews, the more you can mitigate the risk of bugs and help ensure the longevity of the software.

Please share your thoughts, suggestions and questions on the topic of code reviews below.  What are your best practices and what are the lessons you've learned over the years?

Elijah Kerry
Chief Product Manager, Software Platform
Follow my Software Engineering for LabVIEW Blog
Knight of NI

Thanks for that blog, Elijah.

This raises a bit of a conundrum for some LV developers in this sense: LV is such a powerful development language that good LV developers can do what it takes a team of C developers to do. As I have witnessed with many of my customers (who are LV developers as well), many of them are "one-person shops".

So if you are a "one-person shop", how do you do a code review?


Active Participant

Hi Ben,

Thank you for all you do to support our community!  I'm not replying to you directly, but I like what you bring up, so I'll branch the train of thought with some generalities about knowing my own weaknesses.

I can't tell you how many times I've proofread my own email before hitting Send, only to spot an obvious grammar error when someone replies to it the next day.  So the first thing I know works for me is, "Take a step back, and take some time away." With contract deadlines what they are, how do I take time away?

I think the root cause of my inept proofreading is that I know what I intended to say.  One time I was debugging my LV2 Action Global for over 15 minutes.  I swallowed my pride and asked a coworker to take a look.  Two seconds later he pointed out that I was initializing the shift register.  I was unable to objectively read the code because I was aware of my intention when I wrote it.  The second thing that works for me is, "Remove the intent."  The more reference components I incorporate into the application, the less of my own intentions go into the modules.  The overall application is very specific, but the fundamental blocks were written in the abstract.  This calls for a type of testing that's more abstract as well.

It's also implied in NI Labs that VI Analyzer functionality is not static.  The extra set of eyes a single developer needs might someday be his own VI analyzer code.

Regards and thanks,

Steve K

Active Participant


I would add two items to your list.  The first would be to hold architecture brainstorming meetings.  In our process, the developer(s) of the project hold a meeting with others independent of the project to solicit advice on architectures, data structures, file saving, etc.  This allows people who will be in the code review to be involved in the process from the beginning and to have a better understanding of how the code was created, and it allows the best-quality design to be used.  Brainstorming as a group has really improved the code quality in my group.

The second item I would add is to only hold code reviews after the verification and validation procedures have been completed.  I can't count how many times the final code is changed after the test procedure is developed.  Most of the time these changes result from UI bugs that were found when testing limits or simply doing things that weren't anticipated.  In our process, the test plan must be executed and a formal demo held before the final code review.




Lead Test Tools Development Engineer

Philips Respironics

Certified LV Architect / Instructor
Active Participant

Steve and Ben,

You both bring up good points.  If you absolutely cannot find someone to review your code, I would still recommend allotting regular time to revisit/audit code that was modified recently (ideally not immediately afterwards - as Steve points out, you do tend to spot problems after allowing your brain to focus on other things for a while).  If you're the only developer, you obviously have a much better shot at being able to make sense of your own code, but I know that I'm much more likely to take a shortcut or leave documentation for later if I know that no one will be looking at my code.  I would also still strongly encourage that you enforce a uniform style across all your VIs. 

Finally, make use of tools like VI Analyzer to help screen for issues.  Steve, you're right - the tests are not static (we tend to add a few with every version of LabVIEW - it's over 70 as of 2009), and we may one day unlock the possibility for users to create their own.  If this is something you're interested in garnering support for, I'd strongly recommend submitting a request to

Elijah Kerry
Chief Product Manager, Software Platform
Follow my Software Engineering for LabVIEW Blog
Knight of NI

To further help the one-person shops...

If all else fails, try turning your screen upside down. I don't know how many times I was considering my next move when walking around to the other side of the chess board gave me a fresh outlook.


Active Participant


I agree with your first point: we have formal spec reviews before beginning development, which include people from a variety of groups to weigh in on topics like architecture and UI design.

I would say that having a code review after the test plan is completed is important, but I wouldn't exclude regular reviews prior to that.  Ideally, some form of a V&V plan is available very early in development - obviously this can't help but change (i.e., 'evolve') throughout work on a project.

Elijah Kerry
Chief Product Manager, Software Platform
Follow my Software Engineering for LabVIEW Blog
LuI
Active Participant


you definitely need a teddy bear ;-)

Seriously, grab anyone from your team and try to explain a bit of your code. If you can manage to stay serious, it can even be a teddy. It's not important that the person you explain your code to understands it or sees those small glitches. YOU will see them while you explain the code.

At least that's what worked for me here at getemed ...

Knight of NI

LuI wrote: "It's not important that the person you explain your code to understands it or sees those small glitches. YOU will see them while you explain the code."

How can I argue with that? I acted as a sounding board for people who were talking way over my head for years.

I agree with that point, but I was thinking about dragging this subject in a new direction >>> independent code review organizations.

There is a pile of challenges in making that happen, starting with my own situation: I can do the code review but am restricted to only doing them for money. Other complications get into the intellectual property area and the risk of your competition seeing and slamming your code.

I don't know what a good solution looks like.

I do see a need, and historically speaking, where there is a need, someone steps forward and fills it.


LuI
Active Participant

Ben wrote: "I agree with that point, but I was thinking about dragging this subject in a new direction >>> independent code review organizations."

I myself think of this as a very good but unworkable idea. As a member of a QM and test team I have experienced a lot of formal audits. My impression was that almost all of the auditors seem to have no clue about software development and shy away.

They usually simply ask formal questions a la "Show me your process definitions. And the risk mitigation plan. And the requirements specification doc. And its revision history file. And the test plan. Now show me some of the test specs. Show me unit test procedures (TPs) and test reports (TRs). And the same for integration tests," and so on.

This all depends, of course, on the risks involved with the (SW) product.

AND there are so many different development environments, human languages and style guides, both for SW and the respective documentation, that an independent code review organization simply could not do what you expect from it. Or it would cost so much money that almost nobody could afford their services.

We have some small SW teams acting mostly independently. I am the only wireworker here. We usually ask members of the other teams to act as a teddy bear for a code review.

I am, for example, the scrum master for a 3-man team and am invited to do so for another 2.5-man team. Although it is not really formally correct scrum, we use it to plan and verify our tactical development actions in 10-workday sprints.


I am now a LabVIEW programmer, but for many years I was a Software Configuration Control and Build Release Manager, and I have been involved in code reviews for over 20 years. When I start at a new company in the Configuration Manager role, one of my first acts is to push to introduce a code review process if one does not exist.

Code reviews (we call them "peer reviews") need to be flexible and tailored to the programming language, though a lot of the checks are the same whether the code is C/C++ or LabVIEW. Often at the start people complain about the extra work and extra time involved, but in every case there have been clear examples of bugs found before deployment and an improvement over time in the style and consistency of the code that gets into the code base. Though I have often had to fight to get a code review process in place, it has never been removed as a waste of time.

I am now working in a team of 4/5 LabVIEW developers, and we run "peer reviews" on ALL LabVIEW code before it gets merged into our main project.

We run the VI Analyzer over ALL new and changed LabVIEW files, we check that top-level VIs open with unbroken run arrows, and they must close with no save being required. We split the various Analyzer tests into categories:

  • "do not care if test fails" - these tests are not run
  • "must pass" - some tests MUST pass
  • "case by case" - tests that should pass under normal circumstances, but we accept there are cases where leeway can be given
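As a sketch, the three-category triage above can be expressed as a simple merge gate. The category assignments below are hypothetical; the test names are taken from the VI Analyzer examples earlier in the post:

```python
# Sketch of the review gate described above: "must pass" failures block the
# merge, "case by case" failures need a reviewer's waiver, and "do not care"
# tests are ignored. The category mapping here is hypothetical.

CATEGORIES = {
    "Error Style":  "must pass",
    "Spell Check":  "case by case",
    "Clipped Text": "do not care",
}

def review_gate(results, waived=()):
    """results maps test name -> True (passed) / False (failed)."""
    blockers, needs_signoff = [], []
    for test, passed in results.items():
        category = CATEGORIES.get(test, "must pass")  # unknown tests block by default
        if category == "do not care" or passed:
            continue
        if category == "must pass":
            blockers.append(test)
        elif test not in waived:
            needs_signoff.append(test)
    return not blockers and not needs_signoff, blockers, needs_signoff

ok, blockers, signoff = review_gate(
    {"Error Style": True, "Spell Check": False, "Clipped Text": False})
print(ok, blockers, signoff)
```

Treating unknown tests as "must pass" is a deliberately conservative default: a new Analyzer test added in a later LabVIEW version blocks the merge until someone consciously categorizes it.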

A developer then does a detailed check of the code, using the LabVIEW diff tool for changed files. The developer who wrote the code will be on hand to answer questions and explain things if needed. A peer review must pass to the satisfaction of the reviewer before the code is merged into the main code base; if it fails, the developer must rework the code to fix the review's fail points, and then it is peer reviewed again.

There was a really good NI webcast on quality code reviews that appears to have been removed.  At the bottom of this page there is a link to the Performing Technical Code Reviews to Improve LabVIEW Code Quality webcast.

Could this webcast be made available again, please?

Danny Thomson