My situation is as follows:
I execute my sequence using the sequential model. Usually, the sequence is executed "in a loop", which means that once the sequential model is launched we can test unit after unit until we want to stop for some reason. The sequence is loaded dynamically by the sequential model, with the "Unload after step executes" unload option.
In the sequence we are using limits. The default TS limits look like below.
All our limits get changed during execution, depending on which unit we are testing and what stage the unit is at.
I thought that because the sequence is loaded dynamically and unloaded after execution, the next time a unit is tested the default values of the limits container would be loaded again (as a result of the dynamic load and the unload performed when the step that calls the main sequence executes). Unfortunately, it looks like during continuous testing the test limits are not reverted to the defaults as expected; instead they are a superposition of the limits set by previous executions. Hence my questions:
1. Are my expectations regarding TS behaviour correct?
2. How can I force the limits to be "reset" to the defaults for every sequence execution, not only the first execution of the sequential model?
[...]The sequence is loaded dynamically by the sequential model and with unload after step executes unload option.[...]
Can you please post your modified process model?
The general setup is that you load the client file and start the execution using a process model execution entry point. Your statement above would require quite a sophisticated modification of the process model to do what you think it should do.
This is not a sufficient modification. You still, at least this is obvious to me, start your execution based on the LOADED CLIENT file...
Which means that you never unload the file.
For me, the unload option "Unload after step executes" is ambiguous. To me, if an object is unloaded, that means the object is no longer in memory.
OK, what would be enough to reset the limits to the defaults each time the Main sequence is called?
As already mentioned, it would require a significant modification of the process model.
Because of that, I think it is easier to evaluate the reason for your request first.
As modifications are made to several properties, such as limits, within your sequence (file), you already have code which accesses these. Is that a property loader?
What is the problem with modifying the values before entering the step when you restart the sequence for a new UUT? I mean, you will change the settings again before using them for evaluation, right?
If I understand your problem correctly, there is a simple solution. In the sequence properties dialog for your sequence, uncheck the option "Optimize Non-Reentrant Calls to this Sequence". The reason the problem occurs is that, as an optimization, TestStand by default re-uses the runtime copies of the sequence created for previous runs, in order to avoid the overhead of creating a new copy each time the sequence is called. It does re-initialize all of the local variables when it does this, but it does not reinitialize all of the step properties. If you uncheck this option, you will get a fresh copy of the sequence, based on the original edit-time copy, each time your sequence is called (though the call will be slightly slower since a new copy of the sequence has to be made each time), which is exactly the behavior I think you are asking for.
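To illustrate the mechanism described above, here is a minimal conceptual sketch in plain Python (this is NOT the TestStand API; the names `RuntimeSequence`, `limits`, and `call_sequence` are illustrative assumptions). It only models the idea that a re-used runtime copy keeps its modified step properties while locals are re-initialized:

```python
# Conceptual model: defaults as stored in the edit-time sequence file.
EDIT_TIME_LIMITS = {"low": 0.0, "high": 10.0}

class RuntimeSequence:
    def __init__(self):
        # A fresh runtime copy starts from the edit-time defaults.
        self.limits = dict(EDIT_TIME_LIMITS)   # step properties
        self.locals = {}                       # local variables

    def reinitialize_locals(self):
        # On re-use, only locals are re-initialized,
        # NOT step properties such as limits.
        self.locals = {}

cached = None

def call_sequence(optimize_non_reentrant):
    """Model one call of the sequence under the optimization setting."""
    global cached
    if optimize_non_reentrant and cached is not None:
        cached.reinitialize_locals()   # limits keep their previous values
        return cached
    cached = RuntimeSequence()         # fresh copy from edit-time defaults
    return cached

# With the optimization on, limit changes leak into the next run:
run1 = call_sequence(True)
run1.limits["high"] = 5.0              # the sequence modifies its limits
run2 = call_sequence(True)
print(run2.limits["high"])             # 5.0 -- old limits carried over

# With the optimization off, every call starts from the defaults:
run3 = call_sequence(False)
print(run3.limits["high"])             # 10.0 -- defaults restored
```

This is just a way to picture why unchecking the option restores the defaults on every call: the cached copy is what carries the "superposition" of limits from previous executions.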
Hope this helps,
Thank you for your input. Your solution is the closest to the solution I need.
I'd even say your tip is perfect, but one thing stops me from saying that. You said:
(...) you will get a fresh copy of the sequence based on the original edit time copy of a sequence each time your sequence is called (...)
I'm not worried about the loading speed you mentioned in connection with your solution, but about potential memory problems. Let's say I call the main sequence 1000 times without interrupting the execution. Then, instead of having one copy of the sequence with all variables and properties cleared/reinitialised, I have 1000 copies in memory. I'd call it a waste to keep 1000 copies only because I need the properties and variables cleared.
However, as I said, your solution is the closest to what I need.
I still think that this idea needs to be implemented.
MimiKLM wrote: I'm not worried about the loading speed you mentioned in connection with your solution, but about potential memory problems. Let's say I call the main sequence 1000 times without interrupting the execution. Then, instead of having one copy of the sequence with all variables and properties cleared/reinitialised, I have 1000 copies in memory. I'd call it a waste to keep 1000 copies only because I need the properties and variables cleared.
The copy only exists while the sequence is running. Once the execution of the sequence completes, the copy will be destroyed/freed. At least as long as your code modules don't hold their own reference to it somewhere (which typically they should not be doing).
So if you call the main sequence 1000 times in a loop, you should still only ever have one runtime copy in memory at a time. The difference is that a new copy is created (and destroyed once the sequence is done running) on each call, rather than being created once and reused. There is a per-call performance cost that is roughly proportional to the size of your sequence, but unless your sequence is really huge, or your test execution is extremely fast, you probably will not notice the difference.
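The memory behaviour above can be sketched with a small, purely illustrative Python model (again, not the TestStand API): a runtime copy is created when the call starts and freed when it returns, so even across 1000 calls only one copy is ever alive at a time:

```python
# Illustrative model of per-call runtime-copy lifetime.
alive_copies = 0   # copies currently in memory
peak_alive = 0     # the most copies that ever coexisted

class RuntimeCopy:
    """A runtime copy that exists only for the duration of one call."""
    def __enter__(self):
        global alive_copies, peak_alive
        alive_copies += 1
        peak_alive = max(peak_alive, alive_copies)
        return self

    def __exit__(self, *exc):
        global alive_copies
        alive_copies -= 1  # copy destroyed when the sequence finishes

for _ in range(1000):      # 1000 sequential calls of the main sequence
    with RuntimeCopy():
        pass               # the sequence runs here

print(peak_alive)          # 1 -- never 1000 copies in memory at once
```

The point of the sketch is only that creation and destruction are paired per call, so memory use stays flat no matter how many times the loop runs.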
Hope this helps clarify things,