
NI TestStand Idea Exchange


When test cycles run and fail for some reason, LabVIEW will sometimes either crash or hang and won't close.

 

Even when exiting or restarting TestStand, LabVIEW does not close properly... or re-open properly, which leaves hardware resources locked. Operators who are not technically savvy do not know what to do.

 

If operators were notified with a pop-up, before the login prompt even appears, that a Windows restart is needed to clear LabVIEW, it would save a lot of false starts, false failures, and headaches.

 

We have implemented this in "FrontEndCallbacks.seq" and it works great for preventing false-fail runs.
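For illustration, here is a rough C# sketch of the kind of check a code module called from such a callback could perform. The class and method names are hypothetical; this is not the actual FrontEndCallbacks.seq implementation.

using System.Diagnostics;
using System.Windows.Forms;

static class LabViewHangCheck
{
    // Returns true if a LabVIEW process is still alive from a previous run.
    public static bool LabViewStillRunning()
    {
        return Process.GetProcessesByName("LabVIEW").Length > 0;
    }

    // Warn the operator before they log in and start a new test.
    public static void WarnOperatorIfNeeded()
    {
        if (LabViewStillRunning())
        {
            MessageBox.Show(
                "A previous LabVIEW session is still running and may be holding hardware resources.\n" +
                "Please restart Windows before logging in and starting a new test.",
                "Restart required",
                MessageBoxButtons.OK,
                MessageBoxIcon.Warning);
        }
    }
}

Calling something like WarnOperatorIfNeeded() from the front-end callback, before the login prompt, would give the operator the instruction at the earliest possible moment.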

 

TestStand should have this built in.

When a user opens the Offline Processing Utility and starts typing on the keyboard at the same time, they can accidentally rename the profile.

(A screen recording is attached to illustrate this.)

It would be great if no row or column were selected when starting or opening the ORPU. Accidentally renaming the profile disrupts production, since database logging no longer works as expected.

 

When we start the ORPU we already have the '/tray' option enabled, but it is somehow still possible to accidentally rename the profile.

I think it would be a really good idea to review and give feedback on all Idea Exchange submissions.

 

You ask us to give feedback, but you don't close the loop.

 

Some ideas have been marked as new for many years now...

Since TestStand 2019, it has been possible to configure an action step with a LabVIEW module to switch between using a source VI and using the same VI compiled into a LabVIEW Packed Project Library (PPL).

The option, accessible in two ways, is called "Always run VI in Packed Project Library".

 

That's a neat capability, since it allows switching between a development version (a classical VI with easy debugging) and an optimized, locked production version (the PPL), with only:
One non-compiled VI
One VI compiled into a PPL
One LabVIEW project
One TestStand step

 

However, when the LabVIEW Adapter is set to Run-Time, a tight coupling between the compiled VI and the non-compiled VI is maintained for no reason.

 

Example 1)
-A VI is developed and compiled on a development machine A
-It is called as the module of an action step
-The VI, the PPL, the .lvproj, and the .seq are copied to a production machine B with fresh installations of LabVIEW and TestStand
-The LabVIEW Adapter is set to Run-Time on machine B
-"Always run VI in Packed Project Library" is set on machine B
--> The execution will not start: the classic error -17600 appears on the call. The reason is that the LabVIEW cache on machine B does not contain data from the .lvproj. Simply opening and then closing the .lvproj updates the LabVIEW cache, which solves the issue. However, it makes no sense to depend on the LabVIEW development environment on this production machine, since the LabVIEW Adapter is set to Run-Time and "Always run VI in Packed Project Library" is enabled.

 

Example 2)
-LabVIEW Adapter set to Run-Time
-Always run VI in Packed Project Library
--> If the source VI is deleted, it takes a long time to preload the modules. See here

 

Proposal:
When the LabVIEW Adapter is set to Run-Time and "Always run VI in Packed Project Library" is enabled, it should be possible:
- To not install the LabVIEW development environment (only the LabVIEW Run-Time Engine)
- To keep only the PPL (and possibly the .lvproj) and delete the source VI (no source code on the production machine)

It would be nice to have the option of making an enumeration of Expression type and being able to use the enumeration directly in an Evaluate() expression.

vrv_0-1605611723913.png

 

The expression to check whether the "operator" has technician rights might look like this: Evaluate(Enums.MyConstants.Technician)

The TestStand API doesn't provide a simple, robust mechanism that allows developers to programmatically run sequences outside of the ActiveX UIs.

 

On many an occasion I've wanted to wrap the following basic functionality:

  • Run a specific sequence file (with or without a [typically custom] process model)
  • Wait for it to complete.
  • Retrieve the result.

It's something I've needed to do in all of the following situations:

  • Integrating into a customer's existing framework
  • Integrating into my own automated test framework
  • Providing a simple API to a customer
  • Creating customized UIs that rely on UI messages and events rather than the ActiveX Controls

The solution I've ended up defaulting to in the past has been some variation on:

  • Start with the full-featured C# UI.
  • Scrape out all visible ActiveX Controls, and hide the window so that it's running in the background.
  • Integrate a TCP/IP (or equivalent) client into the application that has the ability to listen for requests and then implement them through the AxApplicationMgr.
  • Build a TCP/IP server assembly that launches the client application and exposes the necessary API for simple interactions.

The approach above is time-consuming, error-prone, and feels like a hack -- but given that TestStand does not expose any easy mechanism for simply running a sequence, this is what I've ended up having to resort to.
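For reference, this is roughly the minimum you end up writing today against the raw Engine API. This is a sketch only: the exact overloads, enum names, and optional arguments are assumptions from memory and may differ by TestStand version, and it still leaves UIMessage handling, model callbacks, and a clean engine shutdown to the caller, which is exactly the gap this idea is about.

using System;
using NationalInstruments.TestStand.Interop.API;

class RunSequenceSketch
{
    static void Main()
    {
        // Illustrative only; names and signatures below are assumptions.
        Engine engine = new Engine();

        // Load a sequence file (hypothetical path).
        SequenceFile seqFile = engine.GetSequenceFileEx(@"C:\Tests\MyTest.seq");

        // null process model: run the sequence directly, without a model entry point.
        Execution execution = engine.NewExecution(
            seqFile, "MainSequence", null, false,
            ExecutionTypeMask.ExecTypeMask_Normal);

        // Block until the execution finishes, then read its status.
        execution.WaitForEndEx(-1);
        Console.WriteLine("Result: " + execution.ResultStatus);

        engine.ReleaseSequenceFileEx(seqFile, 0);   // 0 = default release options

        // Missing here, and non-trivial in practice: processing UIMessages,
        // wiring up report/database callbacks, and shutting the engine down cleanly.
    }
}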

I'm trying to pull out all parameters from a sequence when it errors and save them as an additional result, to help the debugging process in a custom report plugin.

 

It falls over when enums are used, as they can't be pulled out with GetValVariant and then converted to strings.

 

I suggest that when a variant is requested for an enum, it keep both the number and the string text inside it, for example "[5] Item number 6". This could then be converted to a string by Str().

 

Example expression:

Locals.ConcParams = Locals.ConcParams + "Parameter: " + RunState.Caller.Parameters.GetNthSubProperty("",Locals.X,0).Name + " Value: " + Str(RunState.Caller.Parameters.GetNthSubProperty("",Locals.X,0).GetValVariant("",0))

Custom TestStand UI suggestion.

 

I would like the same functionality as in the Sequence Editor, where double-clicking an already configured Sequence Call step goes directly into the subsequence so you can view it.

 

I would like to see that functionality in a custom TestStand UI: double-clicking a Sequence Call step in the Sequence View Manager would navigate into the subsequence so it can be viewed.

 

This would come in handy when debugging a subsequence: setting breakpoints in it and then viewing variables during execution.

For a synchronization Notification step with Wait as the operation, there should be a Timed Out status as an optional output (it is available as a property now) on which the user can base a decision.

Here are some suggestions to improve the preloading task and reduce debugging effort:

- abort the process at the first unloadable module, keeping the loading window open so the related module information is visible

- return a list of unloadable modules

- return an error when preloading is not successful

Trying to minimize memory used by TestStand, I have selected "Disable Result Recording for All Steps" in the sequence properties. This works great, except for sequences which DO need to record results. There is currently no way to override this sequence option to enable recording for specific measurement steps. I would recommend adding an option which allows a step to override the sequence's disable-result-recording setting. In TestStand 2014 there is an option under Step / Run Options / Result Recording Options called "Enabled (overriding sequence setting)", but this does NOT override the sequence setting; the results don't get recorded. The only solution I found was to NOT enable the sequence's "Disable Result Recording" option on sequences with measurements, and instead select each step and disable Result Recording, except for the measurement steps.

In the first TestStand tutorial,  one suggestion for tracking local variable values is to apply a breakpoint to the code, then step through the execution and observe the variable values on the Variables tab in the Execution window.  One problem I see with this is that with each Step Over in the execution, the focus switches back to the Steps tab.  I have to keep switching back to the Variables tab to see the values.  And this method is taught in the NI tutorial.

 

Perhaps NI can provide a way to lock focus on one of the tabs, in this case Variables, as we step through the execution (such as when debugging code). Maybe allow the user to jump to another view manually, but when the next step-over is requested, automatically return focus to the view that was locked. This would make this method truly helpful when debugging a TestStand sequence.

For our test we use 48 TestSockets in a Batch process model.

Every TestSocket gathers a data point every millisecond while the test, which lasts at most 3 minutes, is performed. A few times per second we call a LabVIEW VI to perform some checks on the last few seconds of this data. To give the CPU time for other things, a 100 ms wait is placed between all the tests. Since LabVIEW only needs the last few seconds of the array, TestStand takes a subset of the array and passes it to LabVIEW. But taking this subset alone already costs more than 1.5 seconds in TestStand.

 

Attached is a small benchmark that shows (and hopefully explains) this behaviour.

We simply create a local array of 180,000 data points (3 minutes at a 1 ms sample rate).

A for loop of 100 iterations is used to average the results.

In the loop, two VIs are called:

  • One with a 100 ms wait.
  • The second receives the array. In LabVIEW the array isn't touched.

 

If we start with an array of 10,000 data points (the first 10 seconds):

This takes 107.5 ms and about 12.5% of my CPU.

That seems fine, but the data grows to about 3 minutes, so let's test 180,000 data points:

This takes 138.5 ms and about 40% of my CPU.

We are already using 40% of the CPU without doing anything more than handing LabVIEW the data.

 

Since we don't need the complete data array, it seems unwise to copy everything to LabVIEW. TestStand is capable of taking a subset of the numeric array and sending only that part to LabVIEW.

So if we want to analyse the last 5 seconds, we give the data to LabVIEW like this:

Locals.Array[175000 .. ]

This is only half as much data as in the first test, so I expected the execution speed to be about the same.

The average execution time is now 1.6 seconds, so 1.5 seconds is spent taking the array subset.

The CPU is also fully consumed by this process; our application cannot work this way.

 

As a workaround I pass the complete array into LabVIEW and take the subset there. At the moment this is faster than taking the subset in TestStand, but I would expect this operation could be done faster inside TestStand itself.

 

I would therefore like to propose an optimized array subset function.

This would greatly improve the performance of TestStand when working with larger arrays.

Especially if you have more TestSockets than CPU cores, as I do.

It would be good to have an AddArrayElements function in TestStand (Operators/Functions), which would reduce the effort of using loops to achieve the same result.

Add Array Elements.PNG

There should be an Engine Callback that is executed as the very last step in the order of step execution and that runs regardless of any and all settings or step results: a PostStep "No-Matter-What" Engine Callback.  I have a requirement to perform certain actions at the end of every client sequence step regardless of step, station, and report settings, as well as step results (status, error).  Right now, the callback that gives the best coverage is ProcessModelPostResultListEntry, but this does not fire when client sequence developers set the Result Recording option for a step to False.  My requirements call for my actions to execute whether or not the developer of the client sequence wants the results to land in the TestStand report.  As with other callbacks, if it is blank, the engine can skip it.

 

The PostStepNoMatterWhat callback would execute regardless of all these, but as an aid to Framework developers, NI should provide a matrix for each of the default Process Models that shows which of the engine callbacks will execute given the following data:

  1. Step.Result.Status {Done, Skipped, Passed, Failed}
  2. Result Recording Option {Enabled, Disabled}
  3. Step.Result.Error.Occurred {True, False}
  4. Run Mode {Normal, Skipped}
  5. Ignore Run-Time Errors {Enabled, Disabled}
  6. On-The-Fly Reporting {Enabled, Disabled}
  7. Error Dialog Selection {Ignore, Run Clean-up, Abort}

 

Hello,

 

If you run into memory problems, you have to tune your report options, result collection, and module load/unload settings...

 

These tasks take a long time; you first have to identify all the memory consumers... and it pollutes your test sequence purely for memory reasons!

 

When you try to modify the result recording, you also create problems for your report generation...

 

It would be nice to add a new feature allowing automatic result list removal, once on-the-fly reporting and on-the-fly database writing have processed the results...

 

A kind of "OnTheFly and remove unused results"

 

Once on-the-fly reporting and on-the-fly database writing are finished with them, the processed results should be put in a garbage structure!

Older test results could then be removed if memory is needed...

 

I know this might not be simple... but it could help very much when creating big sequences.

 

Thanks a lot.

 

Manu.net (TestStand memory dustman !)

 

Hello,

 

It would be nice to have a tool such as a "Trace Toolkit" for TestStand, in order to be able to view the currently loaded modules.

 

When you have big sequences with many loops, you can run into memory problems.

 

Then you have to play with the load options, the result recording, the on-the-fly reporting...

 

It would be nice to have a tool that could show us the memory used by every module, structure, Global, FileGlobal, parameter, local...

That would make it much easier to pinpoint the main memory consumers!

 

Or better... let TestStand access the 64-bit world!

Get rid of the ActiveX architecture!

Memory management should not influence test creation...

 

Thanks a lot.

 

Manu.net

 

Because of the way .NET applications and assemblies are invoked in TestStand, they run within the TestStand process and share its resources.  For most applications this is not an issue, but if the application or library being driven by TestStand is resource intensive, this creates a significant problem.  In the scenario that served as the impetus for this suggestion, we saw one-tenth the performance compared to running the target application outside of TestStand.

 

To correct this, I recommend that the .NET adapter architecture be changed, or be made configurable, so that instead of directly instantiating target applications, a call to create an object with the .NET adapter would spawn a separate TestStand WCF wrapper process that hosts the target .NET code and communicates with the parent TestStand instance via WCF.
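To make the proposal concrete, here is a rough sketch of the kind of WCF contract the TestStand-side proxy and the wrapper process could share, self-hosted over named pipes. All type and member names here are hypothetical, purely for illustration.

using System;
using System.ServiceModel;

// Hypothetical contract shared by the TestStand-side proxy and the out-of-process host.
[ServiceContract]
public interface ITargetHost
{
    [OperationContract]
    void CreateInstance(string assemblyPath, string typeName);

    [OperationContract]
    string InvokeMember(string memberName, string[] serializedArgs);
}

// Stub implementation; a real host would load the assembly and dispatch via reflection.
public class TargetHostService : ITargetHost
{
    public void CreateInstance(string assemblyPath, string typeName) { /* load + instantiate */ }
    public string InvokeMember(string memberName, string[] serializedArgs) { return string.Empty; }
}

class WrapperProcess
{
    static void Main()
    {
        // Named pipes keep the channel machine-local and reasonably fast.
        using (var host = new ServiceHost(typeof(TargetHostService),
                   new Uri("net.pipe://localhost/TestStandTargetHost")))
        {
            host.AddServiceEndpoint(typeof(ITargetHost), new NetNamedPipeBinding(), "");
            host.Open();
            Console.ReadLine();   // keep the wrapper process alive until told to exit
        }
    }
}

The .NET adapter (or a thin proxy assembly it loads) would then talk to this endpoint instead of instantiating the target type in-process.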

 

Here is a simple block diagram of the intended architecture:

 

 

TestStand_dotNET.jpg

If there's a way to do this already feel free to ignore this suggestion 🙂

 

I have a process that spawns N parallel measurements for a given DUT at one point in the code, and then has a series of Wait steps later to collect all the results, errors, etc.

 

I have a reasonable timeout, with error generation enabled on the Wait steps, as a safety mechanism in case a process hangs (rare, but not impossible). During normal execution these timeouts never trip, and everything runs beautifully.

 

If I set a breakpoint or single-step my execution to troubleshoot one of the measurements during design/debug, it often takes a while before I resume, and when I do, the timeout on the parallel Wait immediately trips with an error.

 

Would there be some way to make the Wait step's timeout logic 'pause' while execution is paused and 'resume' when the sequence is executing normally? That way the error I'm trying to trap during debug won't get confused with an irrelevant "you took too long" error...

 

I know I can put conditional logic on the timeouts to discard the error when running under the debugger, but it would be cool if the step were just smarter in general.

 

--Elaine R