NI TestStand


How to propagate failures from a test using the sequence call adapter


Hello all,

 

I am having trouble getting failures to propagate from a sequence call that is being used as a test.  Failures occur within the called sequence, but when the sequence call is used as a test, TestStand ignores those internal failures and reports pass or fail based only on the output parameters.  I would like the internal failures to propagate, so that a pass requires the called sequence to be both error free AND to return the correct values.

Ideally I would like a global solution in case this comes up again for other sequences, but if that gets messy I am happy to add a statement to the end of my sequences or to make a custom step type.  I have not had luck with those approaches yet.  I also tried a PostStep engine callback that looks for failures, but I ran into problems with that solution, and I would rather avoid callbacks because of timing issues.

 

I have attached an example that recreates this scenario; I kept the code simple so it is easy to follow.  Just step through the main sequence and it should show what is happening.

 

Thanks,


Brady James

Message 1 of 9

Can you post a picture of what you are seeing?  When I run your sequence file I seem to see all of the results down to the Pass/Fail Test step.  I'm using the ATML report.

 

What report are you using?

 

 

jigg
CTA, CLA
testeract.com
~Will work for kudos and/or BBQ~
Message 2 of 9
Message 3 of 9

If you make the TestThatShouldFail step inside of Generic Test sequence a normal sequence call you will get the propagation you want.  Does it need to be a Pass/Fail test type?

jigg
Message 4 of 9

In the actual code, "TestThatShouldFail" is a Multiple Numeric Limit test.  In my work, about 90% of tests are Multiple Numeric Limit tests.  I would like "TestThatShouldFail" to stay a test rather than having a separate step outside of it that checks the values; keeping it a test makes my reports cleaner (I have a few custom reports that the alternative would make really ugly).

Message 5 of 9

If the TestThatShouldFail step is a test step type, then I would argue that none of the steps in the sequence it calls should log to the report.

 

In fact, I would turn off all result recording for the steps in the TestThatShouldFail sequence.  They should just collect data and pass it back via parameters, so use only action steps in there as well.  At that point you are essentially treating the TestThatShouldFail sequence as a step module that just retrieves data for you.

 

Doing the evaluation on the step itself will then yield a failed result if done correctly.  If you want to see this, change the first statement in your Set Variables step to False.
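To make the pattern concrete, here is a minimal Python sketch (illustrative only; the function names and limits are made up, and in TestStand the evaluation would be configured on the Multiple Numeric Limit Test step rather than written in code): the subsequence only collects raw data, and all pass/fail judgment happens at the calling step.

```python
import math

def collect_measurements():
    """Data-collection 'subroutine': returns raw values only and makes
    no pass/fail judgment itself (analogous to a TestStand sequence
    containing only action steps with result recording off)."""
    return {"voltage": 3.29, "current": 0.012}

def evaluate_limits(measurements, limits):
    """Caller-side evaluation, analogous to the limit check done on
    the test step itself.  A missing or NaN value can never pass."""
    results = {}
    for name, (low, high) in limits.items():
        value = measurements.get(name, math.nan)
        results[name] = low <= value <= high  # False for NaN or missing
    return results

limits = {"voltage": (3.1, 3.5), "current": (0.0, 0.05)}
print(evaluate_limits(collect_measurements(), limits))
```

Because the evaluation lives at the caller, any bad or absent data from the subsequence automatically surfaces as a failed test result there.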

jigg
Message 6 of 9

Maybe my example wasn't the best. 

 

In the real sequence call that the example is modeled on (which performs an XCP_Upload command), no tests occur within the sequence.  Instead, we are getting runtime errors in that sequence for many reasons: some are custom errors raised because an invalid address is being requested from the UUT, some occur because the UUT is not responding properly, etc.  I have a custom error handler that marks sequences containing errors as failures (because errors don't propagate).

Result recording is mostly turned off for the problem sequence (to be ASPICE compliant I still need some parts to report).

I have some custom reporting that would look ugly if I used my XCP_Upload command just to collect data and then used another step as a test to analyze that data.

Is there no way to get failures to propagate up from a sequence call being used as a test?  If not, I might be able to do some strange work-arounds, but they would be messy and I would like to avoid them if I can.

Message 7 of 9
Solution
Accepted by topic author BradyJames

Errors and failures are completely different things, and I would highly encourage you not to conflate them.  If the error truly represents a failure, then handle the error and set your failure data outside of the limits.  Basically, in your code you need to check for the error and then set your measurement data so that the caller fails.
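A minimal Python sketch of this error-to-failure mapping (illustrative only; `upload_measurement` is a hypothetical stand-in for the XCP_Upload call, and the 0.0 to 5.0 limits are invented): the error is caught inside the data-collection routine and converted into a value that is guaranteed to fall outside any reasonable limits, so the calling test step fails cleanly instead of erroring out.

```python
import math

# A value guaranteed to fail any finite numeric limit check.
OUT_OF_LIMITS = -math.inf

def upload_measurement():
    """Hypothetical stand-in for the XCP_Upload call, which may raise
    (invalid address requested, UUT not responding, etc.)."""
    raise RuntimeError("UUT did not respond")

def safe_measurement():
    """Catch the runtime error and return an out-of-limits value, so
    the caller reports a test failure rather than a sequence error."""
    try:
        return upload_measurement()
    except RuntimeError:
        return OUT_OF_LIMITS

value = safe_measurement()
passed = 0.0 <= value <= 5.0  # -inf fails the limit check
print(passed)
```

The key design point is that the error is consumed where the data is gathered, and only a failing measurement crosses the sequence-call boundary.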

 

How are you setting the failure based on the error now?

jigg
Message 8 of 9

I found that I can write NaN to numeric values.  So when an error occurs I write NaN to the measurements, which makes the test fail.
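This works because of IEEE 754 comparison semantics, which TestStand's numeric limit evaluation inherits: every ordered comparison involving NaN is false, so a NaN measurement can never satisfy a limit. A quick Python demonstration:

```python
import math

nan = float("nan")

# Every ordered comparison involving NaN is False (IEEE 754),
# so a numeric limit test handed NaN can never pass.
print(nan < 1.0, nan > 1.0, nan == nan)  # False False False

# NaN must be detected with an explicit check, not equality.
print(math.isnan(nan))  # True
```

One caveat: since `nan == nan` is also false, any custom report logic that tries to match the NaN sentinel by equality needs an explicit `isnan`-style check instead.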

Message 9 of 9