Unit Testing Group

Using JKI VI Tester and an array of child classes for multiple (named) test cases

With the aim of finally getting a better understanding of unit testing, I'm now trying James McNally's "Test First" development style, as detailed in How We Unit Test LabVIEW.

 

I've decided that I want to write a new data logging class, which will implement storage and retrieval of (numeric) data behind a simple(ish) interface. Additionally, I have public functions to read and write a collection of parameters, and to open, close and iterate the data file (iteration is used when the file becomes too large, if the max size parameter is positive).

 

As a starting point, I've created a parent "DiskLogger" class, which defines this interface but leaves most of the methods largely unimplemented. Then I have a "TDMSLogger" child class, which should in future implement all of these functions for a TDMS file. Future children will use different logging formats as needed.
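Since I can't paste LabVIEW class diagrams as text, here is roughly what I mean, sketched in Python with placeholder names and parameters (the real classes are LabVIEW classes with dynamic-dispatch methods, so treat this only as an illustration of the shape of the hierarchy):

    class DiskLogger:
        """Parent class: defines the public interface; the methods are mostly
        placeholders (dynamic dispatch in LabVIEW terms)."""

        def __init__(self, max_size_bytes=0):
            # A positive max size triggers iteration (rollover) of the data file.
            self.max_size_bytes = max_size_bytes

        def open(self, directory):
            raise NotImplementedError      # child supplies the real behaviour

        def write(self, values):
            raise NotImplementedError

        def read(self):
            raise NotImplementedError

        def close(self):
            raise NotImplementedError


    class TDMSLogger(DiskLogger):
        """Child class: would implement each method for a TDMS file."""
        # Format-specific implementations would go here; other children
        # (e.g. an XML logger) would provide their own.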

 

I'm a little confused about the best way to set up the hierarchy for JKI's VI Tester so that I can test all of the children without creating mountains of duplicate code.

 

It seems like I should test only the public interface (perhaps), and with the goal of writing the test code only once, I defined a TestCase class with a DiskLogger object as a private data item and a Write DiskLogger accessor method.

 

In DiskLoggerTestSuite:New.vi, I can place the DiskLoggerTestCase constant on the left, along with an array of the different child classes, then auto-index over the child classes (with indexing disabled on the TestCase tunnel, so it is no longer an array), calling the Write method and generating one test case for each child class. This is nice, and in my (uneducated) opinion it is exactly the sort of coding that would be ideal here: to test a new child implementation against my nicely (not yet, but...) written testThisFeature.vi and testThatFeature.vi from DiskLoggerTestCase, I need only drop a new child class constant, add it to the Bundle Array, and reload the VI Tester window. Hurray!
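In text-based xUnit terms (Python's unittest here, since I can't paste a block diagram), what I've set up is roughly the sketch below; the class and method names are placeholders for my actual VIs:

    import unittest


    class DiskLoggerTestCase(unittest.TestCase):
        """Shared test code; the logger under test sits in 'private data'."""
        logger = None  # written by the suite builder (the Write accessor)

        def test_this_feature(self):
            self.assertIsNotNone(self.logger)
            # ...exercise the public API of self.logger here...


    def build_suite(child_loggers):
        """Analogue of DiskLoggerTestSuite:New.vi: loop over the child
        objects and generate one test case per child."""
        suite = unittest.TestSuite()
        for child in child_loggers:
            case = DiskLoggerTestCase('test_this_feature')
            case.logger = child            # the Write DiskLogger accessor
            suite.addTest(case)
        return suite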

 

The problem is that all of the test cases then have the same name (this is taken from the class name, seemingly via the type descriptor). Is there anything I can do about this? Clearly I should just test frequently, so that when something doesn't work it's whatever I didn't write yet or just changed, but not being able to identify the tests doesn't seem ideal.
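For comparison, in a text-based xUnit framework I could work around this by generating a distinctly named TestCase subclass per child, so the runner reports which implementation failed - something like the (entirely placeholder-named) Python below - but I don't know whether VI Tester exposes an equivalent hook for the reported name:

    import unittest


    def make_named_case(base_case, child_logger):
        # Build e.g. "TDMSLoggerTestCase" from the child's own class name,
        # so each generated case reports under a distinct name.
        name = type(child_logger).__name__ + 'TestCase'
        return type(name, (base_case,), {'logger': child_logger})


    # Hypothetical usage, reusing the shared DiskLoggerTestCase from above:
    # TDMSCase = make_named_case(DiskLoggerTestCase, TDMSLogger())
    # suite = unittest.defaultTestLoader.loadTestsFromTestCase(TDMSCase)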

 

Perhaps I've gotten the entire Suite/Case setup wrong. I also considered creating a TestSuite for each child, each using the same TestCase (I was surprised to learn this was possible - thanks to Daklu on LAVA - Unit Testing Strategies with xUnit?), but then I end up creating lots of basically identical Suites (and eventually I'm sure to change some part of one and not another), just so that I can have a new class name... This seems like the definition of bad coding!

 

Any tips/thoughts?


Message 1 of 6

I always bring this back to the "units of work" or use cases for the Logger hierarchy. In your example I would suggest that, by definition, this entails knowing what the children are and what is expected of them. In this case it seems you are actually performing Integration tests (you're touching the disk via a volatile dependency on file access through LabVIEW -> Win32), but that doesn't make the tests any less useful. For the purposes of what follows I am assuming that we are Integration testing your public API end-to-end, which also makes the tests tolerant of any design changes you make under the API surface, and therefore more maintainable. You could add seams to make the tests "pure unit tests", but I don't see that adding any value - your dependencies, while not under your control, are pretty stable - and adding more abstraction just increases complexity for no real gain, since you'll still want those Integration tests against the disk either way.

 

I'll assume we have a DiskLogger class that exposes some public API. Internally it calls some protected dynamic dispatch methods that your child classes implement. As a client I could use your API by:

  1. Dropping the child class constant or using the appropriate child class constructor (taking in the necessary child class configuration)
  2. Using the methods in the DiskLogger class to perform the functions I need
  3. Using a "Close" DiskLogger method (or similar) that allows each child class to close based on whatever volatile API you are using internally.

When I look at units of work I am thinking about what I want the system to achieve through the public API that is exposed. In your case there is only one API to worry about - let's say for now a "Write Value to Disk" and a "Read Value from Disk". Let's also say I have two children - a TDMS variant and an XML variant. As the user of your API, the units of work that I can think of are:

  1. The "x" variant can write the data to disk as an "x" file. We want to verify that the file created has the right format, etc.
  2. The "x" variant can read the value back from an "x" file. We want to verify that, given a file in the right format, the right value is returned.

This results in four tests in this instance, two for each child class. Each child type will generate a logged file that needs verifying in some way, and each child type will need to read in a "test" file of its format, with the test verifying that the value is correct. The test code for the former will be unique to the child variant, while the latter test code will be virtually identical (separate files to read back are needed, but beyond configuring this and the child type, the actual API call and return-value verification is the same) and could be encapsulated into a common SubVI that is independent of the child.
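To put that last point in text-based terms, here is a minimal Python sketch of the "common SubVI" idea; the open/read/close names are assumptions about a hypothetical logger API, not your actual VIs:

    def verify_read_back(test, logger, reference_file, expected_values):
        """Shared verification: the same API calls and the same comparison
        for every child; only the logger object and the known-good
        reference file differ per format."""
        logger.open(reference_file)
        values = logger.read()
        logger.close()
        test.assertEqual(expected_values, values)

Each child's read test then only has to supply its own logger object and its own known-good file.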

 

How you structure the tests is up to you, but TestSuites sharing TestCases is a useful strategy. It depends on how maintainable you want your test suite to be. Much like production software design, you should only add complexity to your test "architecture" if it is warranted; YAGNI and all. In this example it could really be as simple as adding the two tests for each child to a single test case and performing the setup inside each test method.

Message 2 of 6

Firstly, thanks for the response.

 

You're quite right - although I hadn't considered it, this particular application wouldn't be true Unit Testing, but rather Integration testing. Oops... Nonetheless, I'll go ahead and try to get it going as my first significant attempt at unit testing, since it's what I'm trying to write at the moment and there's no time like the present for self-improvement.

 

How you described the client using the API is what I had in mind too - my goal was to allow the client to simply switch the constant in one place (or perhaps use a config file and get a default class object by name/path, etc.) and have a different logging format without changing any of the other code.

 

I had imagined tests as follows - perhaps you could suggest if these seem reasonable, or if I'm missing the point entirely:

  • In the Suite setUp, create a new directory for testing. Store the path in private data, allowing cleanup (i.e. deletion) in the tearDown VI.
  • Test the "Open File" VI by calling the VI, then using the File I/O functions to list the files in the new directory and establish that a file of the appropriate name and extension was created. The (parent) class has a Get Extension DD VI, since it holds the code for name iteration, which should allow this without knowledge of the specific child. (A rough text-based sketch of these first two points follows this list.)
  • Test the Write and Read as a pair. I'm not certain how I can separate these out, unless I compare hashes (or similar) of a written file with one that I create manually. My idea here was to create some set of data, Write it to file, then Read the full set back and check they match.
  • Additional Read tests can check that reading subsets of the data provides the expected subsets of the original array, perhaps stored in private data by the setUp.vi. I need to be careful here to avoid depending on a file written in an earlier test, since that would prevent calling the tests independently. Perhaps I can create some files with a different name in the Suite setUp case, then force the path in specific testThisTypeOfRead.vi cases (not Cases).
  • The iteration of the file name could be most simply checked by examining the private method that only outputs a path, but in the spirit of only testing the public API, I guess creating several files one after the other (without reading or writing) and checking for files as in the basic Open case would work.
  • I don't know how to go about testing the Close functionality. Perhaps I can try a SysExec call to determine open file handles in the directory, but a) I don't know if that's actually possible, b) I don't know whether it would tell me anything about whether my Close method worked, and c) it definitely wouldn't tell me that I didn't just wipe all the data in the process of closing, for example. Reopening and Reading again might be one solution, but it doesn't really test closing either - presumably I could drop handles all day long and still obtain another to read successfully for many formats (TDMS and XML included).
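As promised above, here is a rough sketch of the first two bullets in Python's unittest; the logger class, its get_extension method and the rest of the API are placeholders standing in for my VIs:

    import os
    import shutil
    import tempfile
    import unittest


    class OpenFileTests(unittest.TestCase):
        def setUp(self):
            # setUp: a fresh directory for every run, path kept for cleanup
            self.test_dir = tempfile.mkdtemp()
            self.logger = TDMSLogger()             # placeholder child class

        def tearDown(self):
            # tearDown: close whatever the test opened, then delete the directory
            self.logger.close()
            shutil.rmtree(self.test_dir, ignore_errors=True)

        def test_open_creates_file_with_expected_extension(self):
            self.logger.open(self.test_dir)
            extension = self.logger.get_extension()    # the Get Extension DD VI
            created = os.listdir(self.test_dir)
            self.assertTrue(any(name.endswith(extension) for name in created))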

Apologies for the long post. I appreciate any further guidance you would be willing to offer!


Message 3 of 6

Some thoughts:

  1. Keep it simple. Requirements and your architecture will change.
  2. "Open" - Using the File I/O palette to check a file is created seems reasonable
  3. "Read" - You should be able to manually verify a known file of the child format and then copy this to the folder as part of your test setup. This is a data-driven test and you need to verify the data is correct; the file then lives with your test in source control.
  4. "Write" - This one can be tricky since it depends on whether the data being written has any "context". If the file has no context at all (ie deterministic content) then a simple comparison to a file you already know is valid wold be adequate. If it has context then the comparison is more complex. Like many Integration tests, it can be unreasonable to fully automate the process and generally speaking that's ok - many Integration tests are only run every so often when interaction with your external dependencies changes. This is opposed to unit tests which are challenging the interaction within your system and are subject to change during refactoring or adding new internal requirements.
  5. "Close" - I have a simple albeit crude proposition. If the file is open then you won't be able to delete it (and you will get an error). If it is closed then you can.
Message 4 of 6

So I managed to create a handful of tests for the Open operation, using each of the possible file naming formats I allow via an enum (along the lines of timestamped, incrementing number, etc.), and I caught a mistake in one of the cases handling incrementing the path.

 

First win for my VI Tester testing!


Message 5 of 6

That's great to hear you have already had a win!

 

It sounds like you are on the right track. I have to say I've always kept to the simpler approach of keeping separate test VIs for these cases rather than trying to do too much with test suites and inheritance. This is driven by keeping the test code simple, so I'm not spending too long on it.

 

For files, while this is strictly an integration test, the divide is wherever you want it, so I often include these in my unit tests (the downside is file access time, but that hasn't been a problem for me).

 

I would tend to skip testing of close since it is normally simple and hard to test - that is, unless it is doing something like removing files, which I would test by looking for files on disk.

 

For read/write I think tyk007's advice is bang on. If it is a simple file format (and that format is important for external support, e.g. a user-editable file) then I will often write to disk, read it back in and compare to known good content. If it is just for your program, I will sometimes skip this step and test the read/write pair (since the actual data on disk is not important as long as read and write give back the same data).
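A rough sketch of that write-then-read-back pairing (Python again, with a placeholder logger API, and assuming open() re-attaches to the existing file in the directory on the second call):

    import shutil
    import tempfile
    import unittest


    class RoundTripTests(unittest.TestCase):
        def test_write_then_read_returns_same_data(self):
            test_dir = tempfile.mkdtemp()
            try:
                data = [0.0, 1.5, -2.25, 3.125]
                logger = TDMSLogger()              # placeholder child class
                logger.open(test_dir)
                logger.write(data)
                logger.close()

                logger.open(test_dir)              # re-open and read back
                self.assertEqual(data, logger.read())
                logger.close()
            finally:
                shutil.rmtree(test_dir, ignore_errors=True)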

James Mc
========
CLA and cRIO Fanatic
My writings on LabVIEW Development are at devs.wiresmithtech.com
Message 6 of 6