
Unit Testing Group


Unit Test Tools in LabVIEW

jon_mcbee wrote:

With regard to CI, would you prefer to have an API that gives you the hooks you need to integrate into whatever CI tool you like, or a complete Jenkins integration that you can just use, or both?

Since Jenkins is currently the dominant automation server for CI and CD, this is probably where the initial focus should be. There really needs to be a start-to-finish example that teams can use to get going. For example, no one has yet published a "GUnit" plugin to push results back into Jenkins.
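To make that concrete - and this is only a sketch, not tied to any particular LabVIEW test tool - the usual low-friction route is to have the test runner emit JUnit-style XML, which Jenkins' standard JUnit publisher already understands. Something like this in Python (the file name and test names are invented for illustration):

import xml.etree.ElementTree as ET

# Hypothetical results collected from a LabVIEW test run.
results = [
    {"name": "Add.vi returns sum", "passed": True, "message": ""},
    {"name": "Divide.vi flags divide-by-zero", "passed": False,
     "message": "expected error 10001, got no error"},
]

# Build a minimal JUnit-style report: <testsuite> with one <testcase> per test
# and a <failure> child for each failing case.
suite = ET.Element("testsuite", name="labview-unit-tests",
                   tests=str(len(results)),
                   failures=str(sum(not r["passed"] for r in results)))
for r in results:
    case = ET.SubElement(suite, "testcase", classname="labview", name=r["name"])
    if not r["passed"]:
        ET.SubElement(case, "failure", message=r["message"])

ET.ElementTree(suite).write("results.xml", encoding="utf-8", xml_declaration=True)

A Jenkins job then only needs the stock JUnit report step (for example junit 'results.xml' in a pipeline) to chart pass/fail history - no bespoke plugin is required just to get started.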

jon_mcbee wrote:

Would you (or anyone else reading this thread) be interested in coded UI test at the system test level (moving out of scope of a unit test tool, but why not)?

I know this is a highly debated topic, but I tend to agree with those who think end-to-end tests are brittle, time-consuming, and generally a waste of resources. UI testing should be for just that - the interface - and not the underlying functionality. Posts I regularly refer to are TestPyramid and Just Say No to More End-to-End Tests.

I'm really interested in your thoughts on test doubles. That's an area where I haven't really seen much aside from the LabSpy library from James McNally. As a whole, the community needs more examples, libraries, and frameworks to really push unit testing and software engineering.
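For anyone following along who hasn't met the terminology: a stub just returns canned answers, while a spy additionally records how it was called. A minimal sketch in Python (the DAQ names are invented purely to illustrate the idea):

class DaqSpy:
    """Stands in for a DAQ driver: canned readings (stub) plus a call log (spy)."""
    def __init__(self, canned_readings):
        self.canned_readings = list(canned_readings)
        self.calls = []                       # spy: every channel we were asked for

    def read(self, channel):
        self.calls.append(channel)            # spy behaviour: record the interaction
        return self.canned_readings.pop(0)    # stub behaviour: canned answer

def average_of_two_reads(daq, channel):
    """Unit under test: depends only on the read() interface, not on hardware."""
    return (daq.read(channel) + daq.read(channel)) / 2

daq = DaqSpy([1.0, 3.0])
assert average_of_two_reads(daq, channel=0) == 2.0   # check the result (stub side)
assert daq.calls == [0, 0]                           # check the interaction (spy side)

The LabVIEW equivalent is a VI or class override with the same connector pane as the real dependency, which is exactly where the tooling gap shows up.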

Message 21 of 31

Adding on to what Fab has said:

I see unit tests as another client of your codebase. Just as your caller (be it the application built around your code or someone else's code via an API) has requirements for how they expect your code to function (i.e. contracts), so do unit tests. Making code "testable" is another way of saying that you have a new client with a defined, and highly detailed, contract - and that you value that client enough to make changes. How much you value that client is really down to you.

Of course, "testable" code does not mean "correct" or "maintainable" code. Being testable does not imply correctness and vice versa. If often implies that it esposes those lofty goals of low coupling, high cohesion etc.etc. All the attributes of supposedly "good" code.

But as always we have to look along the curve of pragmatism: Goldilocks abstraction - how much abstraction is just right (bang for buck)? When refactoring code purely for the sake of improving our feedback to this new client, we have to balance the business cost against the gain in confidence in quality. Only experience can really provide those answers.

Message 22 of 31

I'm going to take a different tangent here.

I think one of the biggest obstacles to wide-scale adoption of unit testing (and the architecture paradigms that tend to follow) is the lack of sophisticated refactoring and restructuring tools in LabVIEW. Beyond "Create SubVI", any code restructuring to improve testability is an expensive exercise. Perhaps this is a consequence of the visual paradigm, or simply of LabVIEW's origins. In any event, designing up front for testability is a lot more effective than refactoring afterwards. Here are some common examples I run up against all the time:

  • Extracting a cluster out of a connector pane deep in a VI hierarchy (in order to remove the dependency on details) requires me to manually modify the connector panes of every caller (for a rough text-based analogue, see the sketch after this list)
  • Extracting out a common connector pane to allow reference-based or asynchronous execution requires manual modification of the affected VIs
  • Refactoring into components (e.g. project libraries, or even referencing VIPM packages) is time-consuming and error-prone with the current search-and-replace system.
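For readers more at home in text, the nearest analogue to the first bullet is the "introduce parameter object" refactoring; a minimal Python sketch with invented names:

from dataclasses import dataclass

@dataclass
class AcquisitionConfig:          # the "cluster" extracted from the connector pane
    sample_rate_hz: float
    channel: int
    samples: int

# Before: def acquire(sample_rate_hz, channel, samples): ...
def acquire(cfg: AcquisitionConfig) -> list:
    """After the refactoring, callers depend on one type rather than three details."""
    return [0.0] * cfg.samples    # placeholder body for illustration

print(acquire(AcquisitionConfig(sample_rate_hz=1000.0, channel=0, samples=8)))

Even done by hand, the text version is mostly copy and paste; in LabVIEW every caller's connector pane has to be rewired individually.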

Any thoughts?

Message 23 of 31

The best solution I have seen is doubles generated dynamically via scripting - they expose the same connector pane as the target VI but contain the internal logic to perform the necessary stub and spy behaviour. This works best with classes that have an "interface" base, of course. Even in this scenario, changes to the VI or class method being "mocked" require the test double to be regenerated. The ideal solution would be creation of such a test double at run time.
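For readers outside LabVIEW, the closest text-based analogue I can point to is an auto-specced double - for example Python's unittest.mock.create_autospec - which builds the double from the target's call signature, so the "same connector pane" guarantee comes for free. A rough sketch (read_temperature is an invented stand-in for the VI being doubled):

from unittest import mock

def read_temperature(channel: int, timeout_s: float) -> float:
    """The 'real' dependency - imagine it needs actual hardware."""
    raise RuntimeError("needs real hardware")

# The double copies read_temperature's signature, so a mismatched call fails loudly.
fake = mock.create_autospec(read_temperature, return_value=21.5)

assert fake(0, timeout_s=1.0) == 21.5            # stub behaviour: canned answer
fake.assert_called_once_with(0, timeout_s=1.0)   # spy behaviour: interaction check
# fake(0)  # would raise TypeError because timeout_s is missing

Because the spec is read from the target when the tests run, the "regenerate the double when the mocked VI changes" step effectively happens for free - which is the part that is hard to reproduce with scripted VIs short of run-time generation.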

Message 24 of 31

tyk007 wrote:

The best solution I have seen is doubles generated dynamically via scripting - they expose the same connector pane as the target VI but contain the internal logic to perform the necessary stub and spy behaviour.

Why is a dynamically generated VI for a stub better than a dedicated VI? Maybe I don't see the whole picture, but in my universe, any test code that is not written as code you can view and debug directly is doomed to be misunderstood, and when someone else has to maintain it, things can go wrong.

Message 25 of 31

kjeld wrote:

tyk007 wrote:

The best solution I have seen is doubles generated dynamically via scripting - they expose the same connector pane as the target VI but contain the internal logic to perform the necessary stub and spy behaviour.

Why is a dynamically generated VI for a stub better than a dedicated VI? Maybe I don't see the whole picture, but in my universe, any test code that is not written as code you can view and debug directly is doomed to be misunderstood, and when someone else has to maintain it, things can go wrong.

I would say that if a unit test framework could generate the spy for me, so that all I needed to do was add the logic I wanted, I might be more inclined to use it. This should be code that you can still debug and maintain as the code you are testing changes, but the creation and usage of the spy should be transparent.

Message 26 of 31

tyk007 wrote:

I'm going to take a different tangent here.

I think one of the biggest obstacles to wide-scale adoption of unit testing (and the architecture paradigms that tend to follow) is the lack of sophisticated refactoring and restructuring tools in LabVIEW. Beyond "Create SubVI", any code restructuring to improve testability is an expensive exercise. Perhaps this is a consequence of the visual paradigm, or simply of LabVIEW's origins. In any event, designing up front for testability is a lot more effective than refactoring afterwards. Here are some common examples I run up against all the time:

  • Extracting a cluster out of a connector pane deep in a VI hierarchy (in order to remove the dependency on details) requires me to manually modify the connector panes of every caller
  • Extracting out a common connector pane to allow reference-based or asynchronous execution requires manual modification of the affected VIs
  • Refactoring into components (e.g. project libraries, or even referencing VIPM packages) is time-consuming and error-prone with the current search-and-replace system.

Any thoughts?

This is interesting. I can see a common reason not to unit test being that there is an existing pile of untestable code. On one hand, the recommendation would be that every time you touch the codebase to fix a bug or add a new feature, you refactor that unit of work so that it can be tested. But this is testing out of convenience as opposed to strategic testing, which raises the question: should every unit have unit tests?

I've wandered off topic a bit. I too would like to get people's thoughts on whether it is the graphical nature of LabVIEW that makes this kind of refactoring difficult, and if so, what changes to the environment could address those difficulties. I am having a hard time thinking of a clean way to address this; I'm hoping someone else can find a thread to tug on.

Message 27 of 31

Wow great discussion! Just as we have a bank holiday in the UK!

As a few of you know, I have lots of opinions on this!

In general my approach is what I wrote up at https://devs.wiresmithtech.com/blog/how-we-unit-test-labview/. My requirements for a tool are really:

  1. Fast to run.
  2. Flexible - One of the problems I find with UTF is that it is easier to test according to NI's definition of a unit test than to your own. If that is what is needed then it's great not to have to write code, but as my definition of a unit test leans more towards what Fab describes, I have more need for extra setup, or for test results that can't be checked on a single VI output (a rough text-based sketch of what I mean follows after this list).
  3. Reliable and Low Friction - I don't need any more reasons not to test or to leave tests broken.
  4. Certification and Code Coverage - This is interesting to me; it is not something I have had a specific requirement for yet, since I am normally testing on my own initiative, but what I have learnt from this thread is that it is a lot more important than I had appreciated.
  5. I intend to be putting most code through CI by this time next year, so APIs are important.
  6. Target support is very interesting. The ideal tool would allow you to select whether to run on the desktop (for CI) or on the actual target. I wouldn't really ever expect to run tests on the FPGA itself for those targets, but would use NI's simulated mode instead.
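To illustrate point 2 in text-based terms (a rough sketch only; this is plain Python unittest with invented names, not any LabVIEW tool):

import unittest

class FakeLogger:
    """Hand-rolled double so the test needs no real log file."""
    def __init__(self):
        self.lines = []
    def write(self, line):
        self.lines.append(line)

def process(samples, logger):
    """Hypothetical unit under test: returns a mean and logs an event."""
    mean = sum(samples) / len(samples)
    logger.write(f"processed {len(samples)} samples")
    return mean

class ProcessTests(unittest.TestCase):
    def setUp(self):                          # the "more setup" part
        self.logger = FakeLogger()
        self.samples = [1.0, 2.0, 3.0]

    def test_mean_and_logging(self):          # several outcomes checked, not one output
        mean = process(self.samples, self.logger)
        self.assertAlmostEqual(mean, 2.0)
        self.assertEqual(self.logger.lines, ["processed 3 samples"])

if __name__ == "__main__":
    unittest.main()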

jon_mcbee wrote:

This is the thread of thinking that got me writing the unit test blog post: does the ability to write good unit tests require code that was written with testability in mind?

Yes and no, in my opinion. As you alluded to elsewhere, code that has low coupling is much easier to test. Writing with testing in mind forces you to decouple your code well, but if you have code that already has low coupling, testing should be easy.

The case where I can see changing a design for testing is adding an "interface" class rather than going straight to a concrete class. There are probably different schools of thought over whether - in this case - the interface should have been there already or not. It may also force more of a dependency-injection pattern - again, it can be debated whether this is a good thing!

In general this ties in with the discussion on test doubles. I do use them, but (specifically with spies) I have seen, with myself and others, that they can easily be overused as a way to break the encapsulation of the module. Ultimately what this means is that the coupling between your test code and your implementation is too high, and the tests are much more likely to break due to innocent refactoring.

jon_mcbee wrote:

I've wandered off topic a bit.  I too would like to get people's thoughts on whether it is the graphical nature of LabVIEW that makes this kind of refactoring difficult

I've often looked for and considered where the differences lie - I'm currently starting Martin Fowler's Refactoring book, so perhaps I will understand more after that.

I think the answer here is that it may be marginally harder, but we probably view the text-based world through rose-tinted glasses.

  • Fundamentally, because LabVIEW is graphical, refactoring any VI becomes a 2D problem instead of a 1D problem. I can't just hit enter a couple of times and copy and paste in some code.
  • The fact that we use variables less probably makes it harder, since we have more complex connector panes than your average text-based code - though in my opinion this pays off doubly in readability and parallel programming.
  • I bet text-based programmers would love the "create subVI" shortcut though!
  • In the example of extracting out a cluster through multiple layers of hierarchy - I think this should be hard, since you are changing a lot of code and coupling that code to your type def, so it should be a deliberate act that you are thinking about. Even so, in a text-based language it is probably easier, just because copy and paste is easier - though I'd bet every layer should still be considered.

jon_mcbee wrote:

But this is testing out of convenience as opposed to strategic testing, which raises the question: should every unit have unit tests?

Unit testing is difficult and takes time. When you are writing code you get immediate benefits from using it - it helps you with the immediate problem as well as sticking around for regression testing.

For this reason - if you have untested code - the approach you describe is what I would recommend, since it breaks the work down into bite-size chunks, making the task much less daunting, as well as giving the immediate benefit.

My goal with this approach over the long term is to increase testing everywhere: as you touch other bits of code for refactoring, you start writing tests, and your coverage grows quite quickly. So I would argue it can still be strategic - it just addresses the impracticality of taking days out to get testing done in one go (which is a very boring job!).

Since I test to make my job easier, though, testing out of convenience is a perfectly valid approach IMHO. I prioritise tests by their value to me or to the customer. If the value of both is low and it is difficult or time-consuming to test, I will skip it. It is simply a case of remembering why you are testing in the first place and whether a given test gets you to that goal.

Wow that was longer than planned! Sorry for the big read but I hope it helps. I look forward to reading your post!

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 28 of 31

Text-based programmers already have the "Create SubVI" option in most popular development environments (Extract Method), for much the same reasons you stated - text-based refactoring is a lot easier than graphical, no matter how it is done. If something can be easily repeated or altered logically by following the language's rule set, then you can bet there is a refactoring somewhere for it. Recent Visual Studio releases come with many refactoring tools built into the IDE, and that's ignoring the many refactoring tools that can be plugged in. Many of them exist because of that dependency on variables you mentioned, but a lot of them are more architectural in nature.

I suspect the reason for the lack of significant refactoring tools in LabVIEW is simply a case of priorities from an R&D perspective.

Message 29 of 31

I have been thinking a lot more about Unit Testing and the current tools we have.

Focusing more on this part of Roy Osherove's definition: "A unit test can be written easily and runs quickly. It's trustworthy, readable, and maintainable."

The Unit Test Framework makes it easy to create tests, and since the *.lvtest format is human-readable, one can edit or create .lvtest files from Excel or any other text editor. However, this is only true for very simple inputs and outputs. As I have mentioned in another post on this forum, the format has changed between LabVIEW versions, and some of the inputs/outputs are formatted as gibberish (at least that is how it looks to the human eye). This makes the tests hard to maintain.

UTF tests also do not cope well with changes to the connector pane of the VI under test, and if there are lots of changes to the unit under test, the unit test definition might be lost. One can retrieve the original definition by opening the .lvtest file in a text editor, but opening it in the LabVIEW UTF might show the old test vectors as empty.

JKI VI Tester tests take more steps to create (as you can see in the videos here). However, once they are set up, creating new tests from the same TestCase class is straightforward. And since the tests are LabVIEW code as well, they get upgraded to new LabVIEW versions with the rest of the code. When there are changes to the connector pane, the developer deals with them in the same way they would with any other piece of code, without the risk of losing the unit test definition.

Both UTF tests and JKI VI Tester tests can be executed by pressing one button.

I have not had a chance to try Caraya, AST Unit Tester, or Peter Horn's assertions on a large project. What I have gathered from all three is that unit tests are easy and fast to create with these tools. My concerns (which might be unfounded; I will know when I get to play with them on a real project) are:

* Caraya: It is easy and fast to create new unit tests, but the developer still has to remember to create the code that runs all the unit tests in a project. There is no out-of-the-box "button" to run every unit test present in the project. There is an API for creating the VI to run them all, but it requires extra work. I have enough trouble convincing developers that they need to create unit tests; any extra step will be left undone.

* Peter Horn's assertions: It is easy to create assertions, and they live with the code. I know that the tool comments out the assertions when the project is built into an exe (a rough text-based analogue of this is sketched after this list). It still makes me uncomfortable to add extra code that might affect the performance of the existing code. We also have to run the entire application to exercise the assertions, which is great for checking that the code works well. However, I like to use unit tests more as insurance: if I make a change in A, I want to run all the unit tests and be notified that B is no longer working. B could be an obscure feature that doesn't execute all the time within the application.

* AST Unit Tester: I like that it is fast and simple; however, it is a young tool and is rough around the edges.
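As a point of comparison for the Peter Horn's assertions item above: several text-based languages strip assertions out of optimised builds in much the same way the tool comments them out of an exe. A minimal Python sketch (names invented):

def scale_reading(raw: float, gain: float) -> float:
    assert gain > 0, "gain must be positive"   # checked in normal runs only
    return raw * gain

# Running with "python -O" removes assert statements entirely, so the shipped
# build pays no performance cost for them.
if __debug__:
    print("debug run: assertions are active")  # not printed under python -O
print(scale_reading(1.5, gain=2.0))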

For an opportunity to learn from experienced developers / entrepreneurs (Steve, Joerg, and Brian amongst them):
Check out DSH Pragmatic Software Development Workshop!

DQMH Lead Architect * DQMH Trusted Advisor * Certified LabVIEW Architect * Certified LabVIEW Embedded Developer * Certified Professional Instructor * LabVIEW Champion * Code Janitor

Have you been nice to future you?
Message 30 of 31