We have a sequence I would like to verify by running it in a loop (say 100 times) to check that the results of the steps in the sequence are repeatable on one UUT.
I am aware you can configure a sequence call to loop 100 times and record the results of each iteration. However, the question is how to get a table of data out so we can do some statistics on it to figure out the repeatability and the amount of noise in the measurements.
The standard report format creates a list rather than a table. We could log to a database, but then we would have to write a report to get a table out... is there an easy way of tackling this problem? (Our test results are just single numeric limit tests.)
I think there are a couple of solutions to this task.
On the one hand there are callbacks, which are called when a step is in one of several conditions.
Refer to the manual; there is a quite good table that shows when the callbacks are called.
On the other hand, you are calling a sequence in a loop. What about creating a .csv file
inside your sequence and adding a new line on each iteration? That might be quick and simple.
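To illustrate the idea, here is a minimal sketch in Python (not TestStand code) of appending one CSV row per loop iteration, so each measurement ends up in its own column. The file name, test names, and measurement values are made up for the example; in TestStand you would drive this from a code module or step expressions.

```python
# Sketch: append one CSV row per iteration; write a header row on first use.
import csv
import os

def log_iteration(path, iteration, measurements, test_names):
    """Append one row of results to the CSV, creating the header if needed."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["Iteration"] + test_names)
        writer.writerow([iteration] + measurements)

# Example: three numeric-limit tests logged over three loop iterations
names = ["Voltage", "Current", "Ripple"]
for i in range(3):
    log_iteration("results.csv", i, [5.01 + i * 0.001, 0.12, 0.003], names)
```

The result opens straight into Excel as a proper table, one column per test, one row per iteration.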
Hope this helps.
One of the possible solutions :
Create the table and do post analysis within TestStand only.
For example :
Create an array in TS (a 2D array of dimension 100 x numberOfTests).
Modify your test sequence to update the values in this array, using the corresponding loop number as the index.
It can be a simple post-expression in your test steps (arrayexample[loop][testno] = sourcevariable).
You can do post-analysis using TS expressions, or call a VI and pass this two-dimensional array for further analysis.
You can create this variable in StationGlobals so that the values are retained even after testing is done.
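As a sketch of what that post-analysis might look like, here it is in plain Python for illustration (in TestStand the 2D array could equally be passed to a VI or code module); the data values and the choice of statistics are assumptions for the example.

```python
# Sketch: given the iterations x tests array of logged measurements,
# compute per-test mean, sample standard deviation, min, and max.
import math

def summarize(results):
    """results[iteration][test] -> list of (mean, stddev, min, max) per test."""
    n_tests = len(results[0])
    summary = []
    for t in range(n_tests):
        col = [row[t] for row in results]
        mean = sum(col) / len(col)
        var = sum((x - mean) ** 2 for x in col) / (len(col) - 1)
        summary.append((mean, math.sqrt(var), min(col), max(col)))
    return summary

# Example: 4 iterations x 2 numeric-limit tests
data = [[5.00, 0.10], [5.02, 0.11], [4.98, 0.10], [5.00, 0.09]]
stats = summarize(data)
```

The standard deviation per test is a direct measure of the repeatability/noise being asked about, and min/max gives the spread against the test limits.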
Hope this helps.
Thanks for your suggestions 🙂 I will try some options and post back where I get to.
I quite like the callback idea - as it could be easily added / removed for verification testing in a generic way that will be useful going forward.
Expression idea could work too...
@SachaE Yes - you reminded me I have done the plain-text CSV into Excel and filter approach in the past. A bit annoying, as names and results are not all in one column: sequence calls push all sub-tests right a column (i.e. they are indented), so Excel's filter gets filled up with test data and data headers. Easy enough for one-offs, but not a good solution going forward.
@Dennis_Knutson We are using MS SQL 2012 as a database. I haven't done any SQL since a small uni project over 10 years ago... My manager is a bit more fluent, but he is struggling as well. That's a topic for another post, though, as there seems to be a lot of info about customizing databases and getting data into the database, but very little about getting information out.
This looks interesting... http://lavag.org/topic/15072-teststand-sql-database/ however it doesn't work, since the unique IDs of the steps seem to have changed (I have LV/TS 2013). Will have to have a play... I will have to learn SQL sometime, as you correctly point out.
Do you just leave your validation tests in the "production" database, or do you clean it out before deploying it? If so, do you also delete the data after revalidating the tester at a later date? Or do you send validation tests to a secondary database? Do you mark your validation tests in the database in any way? E.g. I could see a prefix or suffix to the SN doing the trick easily, or a golden serial number, e.g. 123.
The TestStand courses I have been on have made me realize that there are many ways to do things in TestStand... thanks for your suggestions.
Managed to get the SQL example from the LAVA link working by changing the WHERE line from

WHERE (STEP_RESULT.UUT_RESULT = @UGID)

to

WHERE (STEP_RESULT.STEP_ID = @UGID) /* TS 2013 schema: SET @UGID = 'ID#:HX6cYoNLqUu3g1+/Q5oDuD' */
That wasn't too bad - will have a play with the SQL...
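Once the query returns step results, they still come back one row per step rather than one row per run. A small pivot fixes that; here is a hedged sketch in Python, where the field names (run_id, step name, value) are assumptions standing in for whatever columns the actual query selects, not the real TestStand schema names.

```python
# Sketch: pivot flat (run_id, step_name, value) rows from the database
# into a table with one row per UUT run and one column per test.

def pivot(rows):
    """rows: list of (run_id, step_name, value) -> (header, table rows)."""
    step_names, runs = [], {}
    for run_id, step, value in rows:
        if step not in step_names:
            step_names.append(step)          # preserve first-seen column order
        runs.setdefault(run_id, {})[step] = value
    header = ["Run"] + step_names
    table = [[run] + [vals.get(s) for s in step_names]
             for run, vals in sorted(runs.items())]
    return header, table

# Example: two runs, two numeric-limit tests each
flat = [(1, "Voltage", 5.01), (1, "Current", 0.12),
        (2, "Voltage", 4.99), (2, "Current", 0.11)]
header, table = pivot(flat)
```

From there the table drops straight into a CSV or a stats tool for the repeatability analysis.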