
LabVIEW Development Best Practices Discussions


Unit Test Framework Toolkit Users: We want your feedback!

This is a call for any and all feedback regarding the LabVIEW Unit Test Framework Toolkit.  Please, if you own or have evaluated this product, take a moment to share your candid thoughts. 

Some questions to consider:

- Why did you use or evaluate this tool? What problem were you hoping to solve?

- What motivated you to try the Unit Test Framework?

- What did you like most about the product?

- What was your overall impression of the tool?

- What didn't you like, or what do you wish it could do better?

And of course, this is a great opportunity to share any features or capabilities you would like to see added or changed.

Thanks in advance!

Eli

Elijah Kerry
NI Director, Software Community
Message 1 of 57

We would like to see UTF work on Linux. We still have not received our Linux version from NI for testing.

Thanks,

Pat Falkner

303-977-5082

Message 2 of 57

Elijah,

I am the one and only LabVIEW developer here at getemed. I create test systems for medical devices in a tightly regulated environment. The LV projects are quite large, up to ~1000 VIs and CTLs, I have a lot of them, and they share _some_ libs.

I have evaluated the UTFW-TK and was quite impressed. Good idea and good implementation. I have not, however, checked the integration into our application lifecycle management system, which is Polarion (see http://www.polarion.com/products/alm/index.php). As most of my 'projects' are still in LV 7.1.1 (although I have all other LV versions since 4.x), I have decided to stay with what I use at this time (individual test code created when developing modules, which in most cases writes a small text file as the test result).

One thing that could improve the UTFW-TK would be a fairly generic integration link to and from ALM systems, probably based on XML templates or, in the case of Polarion, MS Word templates. Maybe you have it already and I have not found it, but then it should be made more discoverable.

We need our ALM software and need to give it as complete coverage of all related processes as possible in order to maintain reliable impact analysis and traceability.

Just my €0.02 from an evaluation of the UTFW-TK that took place a few months ago.

Greetings from Germany!

--

Uwe

Message 3 of 57

Elijah,

I'm using the UTF toolkit, among other LabVIEW projects, to review our SQLiteVIEW toolkit before each new release.

I like the way it's integrated into the LabVIEW project.

One thing I think you have to improve is the documentation. Not how to use the UTF toolkit, which is well documented and not difficult to understand, but how to develop unit tests, which is not easy to learn. When I first used it, I sometimes spent more time rebuilding my different tests than actually adding features to my projects. That is really discouraging and can be a bad reason not to use the UTF.

You can find a lot of documents on the web about unit testing concepts, but when I started I never found a document on applying those concepts with the UTF toolkit.

Also, improving the execution speed of the UTF toolkit would be great.

Regards,

Olivier


Olivier Jourdan

Wovalab founder | DQMH Consortium board member | LinkedIn |

Stop writing your LabVIEW code documentation, use Antidoc!
Message 4 of 57

Elijah,

I have used the UTF quite extensively to test code written in a controlled environment. I quite like having the tool integrated right into my LabVIEW project as it provides a good central location for handling the test runs.

There are two significant warts on the UTF though:

- It leaks VI references like a sieve. Test a simple VI using the UTF while running Desktop Execution Trace to see what I mean (this is current as of LV2010, don't know about 2011).

- The file format is too prone to corruption if edited by hand/in Excel. I personally would prefer XML, but even INI would be nice.

Neither of these issues has been a showstopper though - I still continue to use the UTF regularly.

Nick

Message 5 of 57

We are currently using the UTF Toolkit for LabVIEW 2009 SP1 and have some things worth mentioning. We are obviously using LabVIEW 2009 and use SVN as our repository. I would like to address Elijah's original 'questions to consider'.

Question: Why did we use/evaluate this tool and what problem(s) were we hoping to solve?

Answer: We deploy code onto RT targets all over the world and need to eliminate deploying code that has obscure bugs that rarely show up.

Question: What motivated us to try the UTF?

Answer: The concept of unit testing is common among other software development languages, and we were eager to incorporate unit testing within the LabVIEW project at the end of the development cycle and prior to integration into a real system. This will be a check that the developers have done their due diligence, and that we can trust a developer who says their code is done.

Question: What did we like the most about the UTF?

Answer: We like a) how it is incorporated into the project, b) that either individual unit tests or all of them can be executed, and c) the computation of code coverage.

Question: What was our overall impression of the UTF?

Answer: We have numerous projects that comprise an entire set of code that is to go onto an RT target. It is a great start to validation of LabVIEW code within these projects at the developer level before it is integrated into larger systems.

Question: What didn't we like, or what do we wish the UTF could do better?

Answer: We have unit tests that are composed of numerous test cases.

a) The unit test properties have 4 categories - Configuration, Test Cases, Setup/Teardown and Advanced. The Comment and Requirement ID fields that reside in the Configuration category apply to the entire unit test and not to each test case. We need to have the ability to assign a Requirement ID to an individual test case and to add an associated comment about that test case.

b) The progress bar that is displayed during the execution of a unit test does not update until the very end, and we think this is actually a bug. The progress bar should be updated according to which test case is executing relative to how many test cases there are in total. A feature that would be nice is an indicator showing the number of the test case being executed.

c) There is currently no way to reorder the test cases. Let's say we created 5 test cases within a unit test and there is a need for a new test case based on Test Case #2 but simply with some different input values. Test Case #2 is selected in the Test Properties window and we duplicate it. It will automatically be assigned as Test Case #6. We would like to be able to reorder the test cases (similar to how you can edit the order of an enumeration) and put the new test case where it belongs.

d) This one is difficult for NI to help resolve without the actual code but we occasionally get the following error that shows up in the (HTML) report.

Test Case 18: Error

Repetition: 1
VI: VI Under Test
Error Information: Possible reason(s): Test Case Data Error.

We have no idea what is going on. Here is a link to the NI Discussion forum we created about this error:

http://forums.ni.com/t5/LabVIEW/Unit-Test-Error/td-p/1756368

These are some of the things we would like to see the UTF do better. In addition, the changes suggested in the 3 or 4 previous comments would also be beneficial to the UTF product. So obviously the UTF is a great start to helping with the overall validation process, but it can be made better.

Message 6 of 57

I just started evaluating the UTF yesterday and came across a problem. I've been searching for a solution, but haven't found anything yet.

I was testing inputs that would generate an error condition and saw that the test was always failing. In the VI under test I use 'Error Cluster From Error Code.vi' with 'show call chain? (False)' set to True. The issue is that the UTF would show up in the call chain with a unique identifier each time causing the comparison of the error description string to fail.

Message 7 of 57

Good feedback.  In the short term the best workaround is probably to change the cluster comparison type to "by element" and then set the source field to string length > 0.  That will at least validate that the source is there, although it won't be able to validate the actual call chain (which is obviously going to vary when the VI is used outside of the UTF anyhow).

Regards,

Ryan K.

Message 8 of 57

Oliver,

I completely agree with you that learning how to develop good unit tests is difficult and time consuming.  On the other hand, I don't think it's realistic to expect NI to have all the answers on how to go about developing unit tests for the UTF.  That's the kind of knowledge that develops over time as users experiment and share code and experiences with each other.

You may have already found this, but the best resource I've found to help me with the learning curve is the book xUnit Test Patterns.  Granted, it is more applicable for those using LVOOP and JKI's VI Tester, but it may have some insights for traditional LV developers using the UTF as well.

To answer Elijah's question, I tried UTF a couple years ago when I was evaluating both the UTF and VIT.  I thought VIT was more suitable for OOP developers while UTF seems more aligned with LV procedural programming.  (As an aside, how about arranging a name swap with JKI?  NI's unit tester is geared to testing single VIs, so "VI Tester" is a more natural fit, and JKI's unit tester is more of a framework that you build on top of, making "Unit Test Framework" a more accurate name.  [No, I don't expect this to actually happen.])

Message 9 of 57

Daklu wrote:

To answer Elijah's question, I tried UTF a couple years ago when I was evaluating both the UTF and VIT.

As an aside, it's difficult (if not impossible) to truly tie traceable test artifacts to third party tool results where the third party isn't ISO 9001 certified. That's a big deal for us when we choose toolkits to use on our regulated projects (which is a large chunk of what we do), and a big plus for NI's UTF.





Message 10 of 57

Daklu wrote:

I thought VIT was more suitable for OOP developers while UTF seems more aligned with LV procedural programming.

I'm not sure I agree with that - we've been able to achieve both with NI's UTF pretty easily.





Message 11 of 57

Christopher Relf wrote:

I'm not sure I agree with that - we've been able to achieve both with NI's UTF pretty easily.

Example code and/or white paper?  It's been a couple years since I've played around with the UTF and my skills have improved, so perhaps I wouldn't find it so cumbersome this time around.  Still, it'd be nice to avoid having to reinvent the wheel if possible.

Message 12 of 57

Daklu wrote:

Christopher Relf wrote:

I'm not sure I agree with that - we've been able to achieve both with NI's UTF pretty easily.

Example code and/or white paper?

Sorry Daklu, no can do





Message 13 of 57

Sorry Daklu, no can do

What about one of your lackys minions cabin boys coworkers like asbo or hoover?  Or maybe just a couple paragraphs describing what you do?

No?  Booooo! 

Message 14 of 57

Daklu wrote:

Sorry Daklu, no can do

What about one of your lackys minions cabin boys coworkers like asbo or hoover?  Or maybe just a couple paragraphs describing what you do?

It's not that I don't have time, it's that it's something I'd consider company IP, and I'm not comfortable sharing such.

PS: Cabin boys?! ;D





Message 15 of 57

We're starting a new development effort, and it seemed like a good opportunity to try out the UTF and see what it could do, how it might integrate into our workflow, and what kind of cool stuff was possible with it.

It seems pretty solid for basic input/processing/output testing. It'd be neat if there were a 'results vector' to go along with the input vector - that is, a table of expected results that would map one to one to the provided table of inputs.
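(For illustration only, here is a rough text-based analogue of that idea in Python with pytest; the function and values are made up, and the point is just that each row pairs an input vector with the result it is expected to produce.)

    import pytest

    def add(a, b):
        # stand-in for the VI under test
        return a + b

    # Each row is one "input vector" plus its expected result, so inputs and
    # expected outputs map one to one, like the requested 'results vector'.
    @pytest.mark.parametrize("a, b, expected", [
        (1, 2, 3),
        (0, 0, 0),
        (-5, 5, 0),
    ])
    def test_add(a, b, expected):
        assert add(a, b) == expected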

Is there any way to customize/clean up the reports it generates? There seem to be some typos in the header information - like it reports an 'Operation System,' and there's one place that has a 'Targert IP'.

Message 16 of 57

Daklu, UTF also allows you to create a "user-defined test" that gives you a blank slate VI to completely define how your test is run and validated.  This, to me, is similar to what JKI's tool gives you.

Message 17 of 57

mc_vibraphone,

For customizing the report, you can write a program that parses the ATML report, which is in XML format, using an XML parser, and then combines the data in the way that you want.

If you have specific requirements for customizing, could you let us know more detail, such as how you want to combine the data (or what kind of flexibility you need), so we can analyze further and see whether we can add the feature in a future release.
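(For illustration, a minimal Python sketch of that approach; the file name and the tag filter below are placeholders, so check the element names in your own ATML report before relying on them.)

    import xml.etree.ElementTree as ET

    # Parse the ATML report produced by the UTF (file name is a placeholder).
    tree = ET.parse("UTF_report.xml")
    root = tree.getroot()

    # Walk every element and pick out the ones whose tag looks result-related,
    # ignoring any XML namespace prefix. Adapt the filter and the output format
    # to whatever custom report layout you need.
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]
        if "Result" in tag:
            print(tag, elem.attrib, (elem.text or "").strip())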

Thanks

HaiJun

Message 18 of 57

reidl wrote:

Daklu, UTF also allows you to create a "user-defined test" that gives you a blank slate VI to completely define how your test is run and validated.  This, to me, is similar to what JKI's tool gives you.

Yes, but wrapping a unit test for OOP code in a VI for the UTF seemed like an extra step.  Granted, I have spent far more time with VIT than I have with UTF, and perhaps I just didn't discover good workflows with UTF.  In fact, Chris Relf's comment indicates that is likely the case.

I'd be willing to give the UTF another try if someone can confirm the bug in 2009 causing LV to crash when trying to remove a unit test from the project has been fixed.

Message 19 of 57

Daklu,

> I completely agree with you that learning how to develop good unit tests is difficult and time consuming.

OK, agree.

> I don't think it's realistic to expect NI to have all the answers on how to go about developing unit tests for the UTF.

Agree as well.

BUT: an example or two of non-trivial usage of any given tool would be considered VERY helpful, especially when those examples are well documented. I'm not expecting an explanation of WHY some test vectors were chosen, but of why a particular sequence of tests was chosen and what - if any - special comparisons or test criteria were chosen.

This would help us understand how the developers intended the tool to be used, so one would work _with_ the tool and not _against_ it.

Message 20 of 57

LuI wrote:

BUT: an example or two of non-trivial usage of any given tool would be considered VERY helpful, especially when those examples are well documented. I'm not expecting an explanation of WHY some test vectors were chosen, but of why a particular sequence of tests was chosen and what - if any - special comparisons or test criteria were chosen.

This would help us understand how the developers intended the tool to be used, so one would work _with_ the tool and not _against_ it.

I understand what you are saying, but I respectfully disagree.  One of the problems with examples is that the community in general views them as the NI *blessed* way of doing something.  I do agree many of the examples included with LabVIEW are poorly written, but there is also an element of unreasonable expectations on the part of the community.  I can't tell you how many times I've talked to developers who complain that an application they built based on example code doesn't work right, is difficult to maintain, etc.

Any advanced example has design decisions baked into it that map back to specific (and often undocumented) project requirements.  The more advanced the example, the less chance it will apply to an arbitrary developer's project.  When developers copy the UTF examples and run into problems they will complain about how the example is wrong, inadequate, or whatever.

Learning how to use the UTF and learning how to develop a unit test strategy, write testable code, etc. are very different things.  NI can (and should) provide examples showing *how* to do certain things.  The *when* and *why* is not (in my experience) something that can easily be taught, and often it cannot be demonstrated in an example.

Message 21 of 57

Fully agree with you here, Daklu. Examples should be simple and show a specific feature. They can never be meant as an example of how a complete application should be written, or they get too complex very quickly and still do not match how 99% of developers think things should be done anyway.

If you download the Windows SDK and look at those examples, it is the same. They show how to do a specific thing, but if you go about writing an app based on that code, you end up with a very poorly written app.

Rolf Kalbermatter
My Blog
Message 22 of 57

Daklu wrote:

I'd be willing to give the UTF another try if someone can confirm the bug in 2009 causing LV to crash when trying to remove a unit test from the project has been fixed.

I verified that UTF in LV 2011 and LV 2012 does not crash when removing an .lvtest file from the project.

Message 23 of 57

One issue I came across while trying out the UTF was that the output from the VI under test is evaluated only after the teardown VI is run. Why would this become an issue?

Suppose you were testing a VI that communicates with a database. In order not to open and close the database every time you need to query it, you could construct the VI as a sort of functional global or action engine, with one action to open the database reference and store it in a shift register, other actions to query the database using the stored reference, and finally an action to close the database reference.

In order to test the database query actions, you need a setup VI that ensures the VI's database reference is valid. Then run the query test case.

And finally you would run a teardown VI to ensure that the database reference is closed.

But because teardown calls the 'VI under test' to close the reference, the outputs are altered. And because the outputs are evaluated after this step, the test fails.

Is this intended behavior?

Message 24 of 57

I'd like to understand this use case a little better.  Let's say you have a setup.vi, test.vi, and teardown.vi.  You also have a "dbmanager.vi" which is a functional global.

setup.vi would call dbmanager.vi with the action to open the database connection and put the ref on a shift register.

test.vi would call dbmanager.vi with the action to query the database ref and use it to perform database operations.

teardown.vi would call dbmanager.vi with the action to close the database connection.

What I am having trouble understanding is why you say "But because teardown calls the 'VI under test' to close the reference, the outputs are altered".  Why is teardown calling test.vi?  Shouldn't teardown.vi just be calling the dbmanager functional global?  At the point teardown.vi is run, UTF has already cached the values returned by test.vi and any action by teardown.vi could not affect those values.

If you could provide an example (perhaps accessing a file instead of a database) along with the UTF project, I could take a look.

Alternatively, you could also create a "User-Defined Test" which gives you complete flexibility over what VIs are run (setup, test, cleanup) and you determine the pass/fail conditions.

Message 25 of 57

Hi reidl

The thing is that dbmanager is the VI under test. Following your example:

  1. setup.vi would call dbmanager.vi with the action to open the database connection and put the ref on a shift register.
  2. the test case would call dbmanager.vi with the action to query the database ref and use it to perform database operations. (The resulting data is on the outputs of dbmanager.vi)
  3. ?
  4. teardown.vi would call dbmanager.vi with the action to close the database connection. (now the outputs of dbmanager.vi change)
  5. ?

If the output from dbmanager.vi, which is the VI under test, were evaluated at step 3, the test case would pass. But it is evaluated at step 5, and at that point the output from dbmanager.vi has changed and the test case fails.

I have constructed a small example that illustrates this, and will send it to you.

As promised: a small project illustrating the issue:

https://www.sugarsync.com/pf/D7341326_4490840_810798

The test named "unittest" calls setup.vi, which in turn calls db-query.vi with the "Open DB" command. Then the test case runs, calling db-query.vi with the "Query" command, and finally teardown.vi runs, again calling db-query.vi but with the "Close DB" command. This alters the query result so the test fails.

The test named "same test omitting teardown" does as the name suggest, and passes.

Well, maybe I have just misunderstood the purpose of setup and teardown VIs?

u
Message 26 of 57

Thanks for providing the project.  I reproduced the issue and know why.  I will try to explain some of the reasoning behind why it behaves this way (and I had a wrong assumption on my part).

First, let's look at how UTF runs your setup, test VI, and teardown:  In order to support values being passed from the setup VI to the test VI, and from the test VI to the teardown VI, UTF will create a new VI and script the 3 VIs on its block diagram and do any necessary wiring between the VIs (in your case, there is nothing being passed because data is communicated via the functional global).

Second, UTF runs this scripted VI (which means it cannot stop between the test VI and teardown VI... it runs everything).  After execution, we obtain a VI reference to the VI under test (in your case, the functional global).  We then query the control values of the VI under test.  Unfortunately, as you pointed out, calling the VI under test from the teardown VI alters the front panel values of the VI under test.  My assumption that we could cache the values between running the VI under test and the teardown VI was incorrect because all 3 VIs are run from the scripted VI.

How can we get around this?  I would suggest having two VIs (ref manager and "query database").  The ref manager would be the functional global and its responsibility is simply opening/closing the database connection and returning the db reference.  The second VI would be the "query database" VI (VI under test).  In it, it would call the functional global to obtain the db reference, then perform the actual query.  The teardown VI would now no longer call the VI under test "query database".  It would be calling the functional global that closes the db reference.
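(To make the sequencing easier to follow, here is a rough Python analogue of the behavior described above. It is not the UTF's actual implementation and every name in it is made up; it only mimics the fact that setup, the functional global under test, and teardown all run as one unit before the "front panel" values are read.)

    class DbManager:
        """Text-based stand-in for the dbmanager.vi functional global."""

        def __init__(self):
            self.ref = None
            self.last_outputs = None  # what the front panel indicators would show

        def call(self, action):
            if action == "open":
                self.ref = "db-ref"                      # pretend to open a connection
                self.last_outputs = {"data": None}
            elif action == "query":
                self.last_outputs = {"data": [1, 2, 3]}  # the values the test wants to check
            elif action == "close":
                self.ref = None
                self.last_outputs = {"data": None}       # overwrites the query result
            return self.last_outputs

    dbm = DbManager()
    dbm.call("open")    # setup VI
    dbm.call("query")   # VI under test
    dbm.call("close")   # teardown VI

    # Only now does the harness read the "control values" of the VI under test,
    # so it sees the post-teardown values instead of the query result:
    print(dbm.last_outputs)  # {'data': None} -> the comparison fails

    # With the suggested split, a separate ref manager owns open/close and a
    # separate "query database" VI is the VI under test, so teardown never
    # touches the outputs being validated.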

Message 27 of 57

Thanks reidl, that does explain it.

Your workaround would work, I think - but it raises the question - how do you then test the functional globals?

As I see it, what is missing is a way to call a VI to put data into it, and to call it again to test whether the data is still there. You could call setup and then test the VI, and the output would be valid. But in the database case, you need to close the reference using the teardown VI and the results are lost.

My proposal:

Cache values after the test case runs, before the teardown VI runs.

Or:

Include an option to keep the VI in memory between test cases.

Then you could, for example, use test case 1 to put data into the VI. Test case 2 tests that the data is correct or, as in my case, that querying the database works. Test case 3 could then be used to close the reference.

Or:

Both 🙂

Of course I could create a user-defined test, but I think that is pretty tedious, and I'd rather go with TestStand, which has options to control whether the VI is unloaded or kept in memory between tests. Sadly, I would then have to track code coverage manually 😞

I hope you take this into consideration, because this is what made me leave UTF and actually use TS instead.

Message 28 of 57

One other solution to your particular problem right now... can the data (indicators) that are being validated be put on a shift register?  That way, the "Close database" action would just pass those values through instead of altering them, which is what your current example VI does.  I realize that this may not be ideal depending on how large that data is.

Of your proposed solutions, I would be in favor of seeing the values validated before the teardown VI runs.  Unfortunately, this is not a trivial fix because the scripted VI that runs the setup, VI under test, and cleanup is done in one atomic operation.  I can envision ways of fixing this but the changes may be far reaching.  We will file a CAR (corrective action request) regarding this behavior.

I would be against your second solution regarding keeping tests in memory.  The very nature of a unit test should not have dependencies on test cases being run before/after it or the order of tests being run.  There should be no residual state between test runs.

Message 29 of 57

Yes, you could put the data on a shift register of course. And I see your point against having dependencies between test cases.

But I'm glad you've decided to file a CAR, because from my perspective it just is the correct way to do it - caching values (or validating them) immediately after the test case runs.

So... thanks.

Message 30 of 57
Message 31 of 57

Hey Eli - first of all, thanks to NI for adding the UTF and the Desktop Execution Trace Toolkit to LabVIEW Core.

Not sure if there already is a way to do this - if there is, I have not found it. In order to simplify test cases, it is very convenient to change the contents of a wire - for example, to force an error case on return from a sub-VI. Creating stubs is painful and clunky. Modifying the code, or complicating the code (e.g. with diagram disable structures) to force an error on the wire - well - that sort of defeats the purpose of a test framework.

What I was hoping to find was an 'active' probe (as opposed to 'passive') that I could place on a wire as part of a test case. My active probe would allow me to override values on the wire. End result? I define my test case, place active probes on whatever wires I choose, define the values that should be placed on the wires, define expected outputs, and done! No changes to the underlying code.
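(As a point of comparison only: in text-based languages this kind of "override a dependency's output without touching the code" is usually done by patching the dependency at test time. A rough Python sketch with made-up names, not a UTF feature.)

    from unittest import mock

    def read_sensor():
        # stand-in for the sub-VI whose return value we want to override
        return 42

    def vi_under_test():
        # code under test: calls the "sub-VI" and reacts to its result
        value = read_sensor()
        if value < 0:
            raise ValueError("sensor error")
        return value

    # Force the error path without editing vi_under_test or read_sensor:
    with mock.patch(__name__ + ".read_sensor", return_value=-1):
        try:
            vi_under_test()
        except ValueError as e:
            print("error path exercised:", e)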

In addition to making UTF much more efficient, this would be a very useful debugging tool. Add it to conditional probes, and now we are talking power.

Here's hoping there is a way to do this already.

Message 32 of 57

Are there any CARs or known issues with running Unit Tests on VIs with Formula Nodes? We're running LV201SP1, and we pretty consistently see hard crashes when we try to run any tests on VIs with large (say, >100 lines) formula nodes.

[We've actually seen a lot of bizarre behavior with bigger formula nodes--we've found a pretty effective way to force quit LabVIEW is to attempt to probe an input/output wire in a VI containing a large formula node.]

Message 33 of 57
  • [+] The built-in reporting system is nice, if we can get the tests to function properly
    • For the functional tests we do have, we automate running them nightly and email out a report. That's super cool for continuous integration.
  • [-] I just wasted an hour trying to generate an *.lvtest file for a VI, but it simply will not correctly access the controls and indicators on the VI front panel (there are three--a string control, a string array indicator, and an error out).
  • [-] I have seen the UTF crash numerous times.
  • [-] It takes a very long time to load.
Message 34 of 57

I am also bothered by long load times, but I also have a problem described here http://forums.ni.com/t5/LabVIEW/Source-distribution-unit-test-dependency-problem/m-p/2246000#M713189 and am still waiting for an answer.

Message 35 of 57

Hi Turbo,

Do you have VIs that can reproduce the UTF crash? Would you mind sharing them?

[-] I just wasted an hour trying to generate an *.lvtest file for a VI, but it simply will not correctly access the controls and indicators on the VI front panel (there are three--a string control, a string array indicator, and an error out).

Could you be more specific about the steps involved?

Message 36 of 57

I would like to see the character restrictions on control labels removed.  Anything that is allowed in LabVIEW should be accepted by the Unit Test Framework. 

Message 37 of 57

It would be nice if the Run Tests from File.vi could avoid opening the "Save Changes?" dialog upon completion.

As part of our continuous integration system, we have an automated system that will run all unit tests prior to our nightly builds. It sucks that sometimes the process hangs up because Run Tests from File.vi triggered some recompiles and the "Save" dialog is waiting on user input. I presume that happens when it closes the VI references for the files under test.

Maybe there could be an option to bypass that step--just force close (effectively do the "Don't Save - All" option) the VI once the test is complete--for when we run unit tests automatically. It shouldn't require any direct user interaction.

Message 38 of 57

Hi Turbo,

Is this the "crash" you mentioned in the previous note?

For the problem you encountered, the cause might be that you are loading some VIs which need to be compiled at run time.  As you know, when LabVIEW compiles a VI, the code has changed, so LabVIEW needs to decide whether to save it or not. Could you please do a mass compile on the files that you want to test before using Run Tests from File.vi?

Thanks

HaiJun

Message 39 of 57

Hi Sachsm,

The UTF allows the same characters for control labels that LabVIEW allows. What restrictions do you see?

Message 40 of 57

Two more problems:

  1. I have LV2010 and LV2012 installed on my machine, with 2010 & 2012 versions of the UTF. I have been working primarily in LV2012. I just created a new unit test in LV2012, but if I look in the *.lvtest file, it shows the version as "2010" (opening in Notepad++/Excel). Is that just a bug, or is it possible that my LV2012 is still somehow calling the 2010 version of the Unit Test Framework? If so, is there any way to tell/fix the problem? It may be a relatively moot point, but I'd like to ensure I'm running the latest version in case there are any bugfixes that make it more stable (since whatever version I'm running clearly isn't).
  2. I have a couple of relatively simple VIs that I absolutely cannot create unit tests for. When I attempt to create the file, the "creating unit test" dialog shows a full progress bar indefinitely, but just hangs until I kill my labview.exe instance through task manager. I have tried many times. There is a "stop" button in that dialog that does nothing. My only guess is that the UTF is getting stuck because I have a relatively complex input to the VI under test--one of the inputs is a typedef cluster consisting of three typedef'ed sub-clusters. Total number of elements is ~20, mostly strings and DBLs.
Message 41 of 57

Turbo

#1: The '2010' in the *.lvtest file is not a LabVIEW or UTF version string. It is the version of the lvtest file format. LV 2012 does not update the lvtest file format, so the version string remains at 2010.

#2: It might be caused by CAR #368000. Please upgrade LabVIEW 2012 to LabVIEW 2012 SP1 to see if you can create a unit test for the nested cluster.

Message 42 of 57

Penar, #2 is not an option yet for mere mortals like us out there. Maybe you should hold off on such suggestions until it is really available for download.

Rolf Kalbermatter
My Blog
Message 43 of 57

Oops...

Historically, the SP1 release comes out between late January and early March. Please keep an eye on NI web news.

Message 44 of 57

StevenGarcia wrote:

a) The unit test properties have 4 categories - Configuration, Test Cases, Setup/Teardown and Advanced. The Comment and Requirement ID fields that reside in the Configuration category apply to the entire unit test and not to each test case. We need to have the ability to assign a Requirement ID to an individual test case and to add an associated comment about that test case.

b) The progress bar that is displayed during the execution of a unit test does not update until the very end, and we think this is actually a bug. The progress bar should be updated according to which test case is executing relative to how many test cases there are in total. A feature that would be nice is an indicator showing the number of the test case being executed.

c) There is currently no way to reorder the test cases. Let's say we created 5 test cases within a unit test and there is a need for a new test case based on Test Case #2 but simply with some different input values. Test Case #2 is selected in the Test Properties window and we duplicate it. It will automatically be assigned as Test Case #6. We would like to be able to reorder the test cases (similar to how you can edit the order of an enumeration) and put the new test case where it belongs.

In UTF 2013, the three features above have been implemented.

a) Each test case can have an associated comment. (Requirement ID can still only be assigned to the whole test file.)

b) You can reorder test cases.

c) The progress bar when executing tests is also improved. The progress bar is updated per permutation (when a test vector is used in a test case), per test case, and per test file.

As the UTF 2013 Beta has been posted, you can find the installer here.

You can give UTF 2013 a try. Any comments and feedback are welcome, thanks!

Message 45 of 57

mc_vibraphone wrote:

It seems pretty solid for basic input/processing/output testing. It'd be neat if there were a 'results vector' to go along with the input vector - that is, a table of expected results that would map one to one to the provided table of inputs.

UTF 2013 allows assigning a test vector to the expected values and compares it with the vector of resulting outputs. The vector of resulting outputs is produced from the input vectors.

As the UTF 2013 Beta has been posted, you can find the installer here.

You can give UTF 2013 a try. Any comments and feedback are welcome, thanks!

Message 46 of 57

In UTF 2012, the ATML report (when viewed in Excel, see column BM) indicates that the UTF version is 2010. This is even the case on a PC with ONLY LV 2012 installed, where the MAX Software group ONLY shows UTF version 2012. Is this a bug, or is UTF 2012 only a recompiled UTF 2010? I don't believe this to be the case, since there are bug fixes listed for UTF 2011, 2012 & 2013. This is critical for our medical device validation effort, since we list the versions of the third-party tools we are using (LV 2012, UTF 2012, etc.) and the ATML report suggests we are using something different (UTF 2010) than the tool we have listed (UTF 2012).

Message 47 of 57

We fixed the bugs listed for UTF 2012; it is not a recompiled version of UTF 2010. The design intention of the "version" number in the ATML and lvtest files is to mark the version of the file format. There was no change to the ATML and lvtest file formats in UTF 2012, so the file version was kept at 2010. (We need the file version number because we have to consider forward compatibility when we develop a new version.) In 2013, the file formats have been changed, so the file format version has accordingly been changed to 2013. As the file versions cause some confusion, we've recorded this as a point we might improve in the future.

Thanks

HaiJun

Message 48 of 57

[ + ] automatic report generation

[ - ] long opening time

[ - ] long test running time

[ - ] I am not allowed to use [, ], and / inside control labels. (Edit: You can change these parameters if you right-click the lvproj in the project viewer: Properties, Unit Test Framework, under Test Creation, Default Array Brackets and Default Name Separator. I changed them to {} and |, which in my opinion would be a better fit as default parameters.)

[ - ] Code coverage does not seem to work properly for me. Diagrams are indicated as not covered, even though I made sure they are covered by placing a one-button dialog in them.

[ - ] Missing or poorly visible undo button. If you set all of your expected values and then click on "import from VI" by mistake, you can only go back via source control.

[ - ] Can't rename a *.lvtest file. Right-click => Rename has no effect.

Message 49 of 57

Hi Eli,

hope this thread is still kind of alive.

I've finally gotten around to using the UTF more actively...

A lot of usability/speed issues have already been mentioned, but the IMHO most important improvement would be a better help system: the error messages that pop up do not allow the user to identify the errors easily.

I really do like the tool, please make it a bit easier to use!

Message 50 of 57