Okay, there is no easy way to describe the steps leading up to this error.
I run a system that has a PXIe-5122 and a PXIe-5442. I use TClk so that I can add multiple results together to increase the signal-to-noise ratio.
Typically, all of the data can be collected in direct mode (no streaming) and that works fine.
Sometimes the data to be collected from the 5122 does not fit in the memory, and streaming is set up and that works fine.
Sometimes the data to be produced by the 5442 does not fit in memory, and streaming is set up and that works fine.
One can even create a scenario where both are streaming and that works fine.
However, any time streaming has been activated on the 5442, it is no longer possible to return to the mode where one does not stream on the 5442.
In fact, the system gets to a point where one must shut down the PC and reboot to bring the cards back online.
Any time a new script must be run on the AWG (which happens on every streaming run, since the buffers need to be initialized each time), I reset before I reload the AWG. I have tried a reset in NI MAX. I have tried a restart under Win10.
Something gets broken in a weird way.
The only reported error occurs not at any niFgen function call but at the NI-TClk call. Here is a sample error stream.
NI-TClk initiated a session, and the instrument driver reported an error.
Error reported by the instrument driver:
The specified waveform is invalid.
Waveform Name: www10003
Status Code: -200312
Session index (starts at zero):
www10003 is the name of the previous streaming waveform / waveform location (4 x 1 MSample for streaming), but it should not be included in the current script. This is a ghost in the system. Do I need to specifically deallocate this waveform memory because niFgen_ClearArbMemory() does not? If so, what command should I use?
Another indication of a memory error is that the streaming waveform name returned from niFgen_WriteBinary16Waveform() changes each time we run. The first run is www10000, the second www10001, the third www10002...
This sequential renaming of the waveform makes me suspect that niFgen_ClearArbMemory() may not be releasing the memory, the namespace, or both.
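For context, the teardown I run between configurations amounts to the following sketch (single-channel session assumed; error checking omitted):

```c
/* Teardown between runs -- the calls described above.
   These do not stop the www1000x names from incrementing. */
niFgen_AbortGeneration (vi);
niFgen_ClearArbMemory (vi);   /* expected to release all arb waveform memory */
```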
Any help is appreciated.
I see you posted your configuration but would you be able to share your code with us? This isn't something we see a lot, and it would be very helpful to see how you have your code set up.
If I had to guess, I would say there may be sessions that aren't being closed out properly.
I am working on two fronts.
Front 1 is cutting down the code to remove all of the GUI overhead and non-pertinent (and some non-NI) hardware, etc.
Front 2 is reviewing my management of the thread-safe queues to make sure there isn't something there (possibly abandoned streaming queues) that flew under the wire for years and now, with my move to CVI 2017 and the latest drivers, rears its head to offend the compiler or drivers in some way. I will get something posted by the end of the week if the gods align and the grad students are held at bay.
I appreciate you taking the time to remove all the non-pertinent information. Let's hope those gods align and the code comes through quickly. I look forward to going through it.
Okay. No joy on code stripping yet but I may know the problem.
I am using waveform scripting mode (niFgen_ConfigureOutputMode (vi, NIFGEN_VAL_OUTPUT_SCRIPT)) with niFgen_WriteNamedWaveformI16 for local waveforms. This allows the user to create waveforms of mostly arbitrary length and number of waveform loops.
When I (aka the user) add streamed waveforms, I use niFgen_AllocateWaveform (still with script output mode). The current help documentation states that niFgen_AllocateWaveform requires the output mode to be Sequence. Although this generates usable output, there is no niFgen_DeallocateWaveform or similar. This seems to leave an issue with onboard memory that is not resolved by my faithful memory-annihilation tools, niFgen_AbortGeneration and niFgen_ClearArbMemory.
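To make the two paths concrete, here is roughly what the setup looks like (a sketch; channel "0", waveform names, and sizes are illustrative, and error checking is omitted):

```c
/* Local (named) waveform path, script output mode */
niFgen_ConfigureOutputMode (vi, NIFGEN_VAL_OUTPUT_SCRIPT);
niFgen_AllocateNamedWaveform (vi, "0", "localwfm", nSamples);
niFgen_WriteNamedWaveformI16 (vi, "0", "localwfm", nSamples, data);

/* Streamed waveform path -- note the handle-based allocation has
   no matching niFgen_DeallocateWaveform */
ViInt32 waveformHandle;
niFgen_AllocateWaveform (vi, "0", streamBufSamples, &waveformHandle);
```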
In short, for streaming I use:
niFgen_SetAttributeViInt32 (vi, VI_NULL, NIFGEN_ATTR_STREAMING_WAVEFORM_HANDLE, waveform_handle);
niFgen_WriteBinary16Waveform () // loads the first chunk of data to stream
niFgen_GetAttributeViString (vi, "0", NIFGEN_ATTR_STREAMING_WAVEFORM_NAME, 512, streamwfname);
This gives me the necessary name to populate the script.
As I type this, I wonder if I can use niFgen_DeleteNamedWaveform() to fix this mess. Typically it is paired with niFgen_AllocateNamedWaveform(), but since I went through the trouble to get the name, it may work here anyway.
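If that pans out, the cleanup would look something like this (speculative; streamwfname is the name retrieved from NIFGEN_ATTR_STREAMING_WAVEFORM_NAME above):

```c
/* Speculative cleanup: delete the streaming waveform by name,
   since there is no niFgen_DeallocateWaveform for the handle */
niFgen_AbortGeneration (vi);
niFgen_DeleteNamedWaveform (vi, "0", streamwfname);
niFgen_ClearArbMemory (vi);
```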
More to follow tomorrow..
I noticed that there is a niFgen_ClearArbWaveform function that will clear the memory for an individual waveform. I would expect niFgen_ClearArbMemory to clear all the arbitrary waveforms loaded in memory, but it may be worth trying the other function to see if you get the same behavior.
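That would be a one-line change, assuming you still have the handle returned by niFgen_AllocateWaveform:

```c
niFgen_ClearArbWaveform (vi, waveformHandle);  /* clears only this waveform's memory */
```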
Also I see in your first post that the error is thrown for 'www10003', which would be the 4th run, or waveform streamed. Does the error always come up on the 4th run? Does it vary?
Something else you could try is streaming to the card using the built-in examples and see if you still get the errors while streaming from the examples.
Long-anticipated (lol) code for y'all to play with. Sorry about the copious overhead; it was the best way to ensure similar performance.
Okay, this code should build as either Debug64 or Release64 in CVI 2017 on Windows 10 Education. No testing or guarantees on 32-bit.
I added a desktop snip (WorkSpaceview.png) showing additional "default NI" headers that might be needed but not included.
One should be able to activate the "Run Direct" button as many times as you want and see a digitized waveform captured.
Similarly, one should be able to activate the "Run Streamed" button as many times as you want and see a different waveform captured.
However, now going back to "Run Direct" will generate the error.
(Note: in real application each waveform is sequentially externally triggered and this necessitates the waveform scripting.)
I would like to be able to switch back and forth between streamed and regular script at will in an ideal world.
That's about it. Apparently, the crash in my main project comes from my laziness on error checking: I let the rest of the cards continue to load, and those cards end up in a state that requires a system reboot.
Sidenote: if you turn off the "Safe Delay" checkbox, then "Run Direct" is broken. Safe Delay switches on and off line 174 of "SetupDetect.c", which is a simple delay. I suspect that on modern fast computers niTclk_Initiate() returns too soon for everything to settle (at least on my system). Maybe R&D should check that out.
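Schematically, the Safe Delay is just this (the 100 ms value is illustrative, not the exact number in my code; Delay() is the CVI Utility Library call):

```c
niTClk_Initiate (sessionCount, sessions);
if (safeDelayEnabled)
    Delay (0.1);   /* empirical settle time; without it, the next
                      "Run Direct" fails on my (fast) system */
```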
Just in case something changed, here is today's NIMAX System Report.