

Intermittent error -200292 during AO buffer update

Solved!

I have an application designed to allow continuous regeneration of a short signal (e.g. 512 samples generated at 100 kHz) on several analog output channels.  I am currently using a cDAQ-9174 chassis and a 9263 AO module for development, though this would eventually migrate to a PXI system.  The output signal is meant to be identical on all channels, but the start of generation on each channel is staggered by a few seconds, and I would like to avoid stopping and restarting the task between updates.  For example, channel 1 would begin generating the waveform immediately, and 5 seconds later channel 2 should produce the same waveform.  Inactive channels would generate zero volts.

 

I have had some success setting up a single task with a fixed-size output buffer matched to the waveform sample count and setting the AO data transfer property "Use Only Onboard Memory" to false, so that the output buffer can occasionally be updated from the host PC.  When the time comes to enable certain channels, I simply write a 1D array of waveforms consisting of either the desired waveform or zeros.  For longer waveforms (e.g. 2 seconds) I see the write function block for approximately that amount of time, until the buffer is completely empty, before the new values are generated, while shorter waveforms switch almost immediately as expected.
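The blocking time described above follows directly from the buffer arithmetic: with the buffer sized exactly to the waveform, a full-buffer write can only complete once every previously queued sample has been clocked out. A hardware-free sketch of that arithmetic (plain Python, no DAQmx calls; the numbers are taken from the post):

```python
SAMPLE_RATE_HZ = 100_000  # AO sample clock rate from the post

def full_buffer_write_block_s(waveform_samples: int) -> float:
    """Approximate time a non-regenerating write blocks when the output
    buffer is sized exactly to the waveform: the new data cannot be
    accepted until all queued samples have been generated."""
    return waveform_samples / SAMPLE_RATE_HZ

# 512-sample waveform: swaps after ~5 ms, effectively "immediately"
short_block = full_buffer_write_block_s(512)       # 0.00512 s
# 2-second waveform (200,000 samples): the write blocks for ~2 s
long_block = full_buffer_write_block_s(200_000)    # 2.0 s
```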

 

This works reasonably well, except that I occasionally get error -200292 when attempting to update the output buffer, and the task never recovers until I restart it.  The task may run fine for anywhere from a few minutes to an hour, with waveform updates occurring every 5 seconds.  The timeout on the write function is set to 10 seconds, which should be considerably longer than it takes to empty the output buffer.  I am not sure why it works sometimes and fails at other times, though it sounds like it may be a communication or timing issue between host PC memory and the hardware.  Currently I work around this by restarting the task and the test sequence when the error occurs, but is there a more appropriate way to ensure the error never happens in the first place?

 

Thank you for your help.

Message 1 of 5
Solution
Accepted by topic author AndrewJ

While I'm generally a fan of fully hardware-timed solutions, I gotta ask: is it really crucial that the ~5 seconds between channels be precise to the microsecond level or better?

 

If not, it seems you'd greatly simplify the struggle you're having by simply reconfiguring the task during the 4.998+ seconds of idle time.  Instead of a continuous non-regenerating multi-channel AO task that must be fed data at 100 kHz on average, you'd have a sequence of finite single channel AO tasks that must be fed data at about 100 Hz on average (512 pts per 5 sec).
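As a rough check on the data rates involved (illustrative plain Python; the 512 points per 5 seconds figure comes from the paragraph above):

```python
def avg_host_rate_sps(samples_per_update: int, update_period_s: float) -> float:
    """Average host-to-device sample rate when a finite task is
    reconfigured and rewritten once per update period."""
    return samples_per_update / update_period_s

# Finite single-channel tasks: 512 points every 5 s is ~102 S/s on
# average, versus a continuous non-regenerating task that must be
# serviced at the full 100,000 S/s.
rate = avg_host_rate_sps(512, 5.0)  # 102.4
```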

 

There are probably solutions to the all-hardware approach you're trying, but you should consider whether it's worth the effort.   If so, I tend to favor approaches that oversize my output buffers so that the new data I'm feeding it represents about 1/3 of the buffer size.  This helps avoid some of the conflicts that can arise when writing the entire buffer size in a single DAQmx Write call.
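One way to see why the ~1/3-of-buffer rule helps is to model the buffer's free space. This is a toy model in plain Python, not DAQmx's actual implementation; the class and its names are illustrative only:

```python
class HostAOBufferModel:
    """Toy model of a host-side AO buffer: the device drains samples at
    the sample clock while the application writes new ones, and a write
    can only land in space the device has already drained."""

    def __init__(self, size_samples: int):
        self.size = size_samples
        self.queued = size_samples  # assume generation starts with a full buffer

    def drain(self, n: int) -> None:
        """Device clocks out n samples."""
        self.queued = max(0, self.queued - n)

    def free_space(self) -> int:
        return self.size - self.queued

# Buffer ~3x the waveform: a 512-sample write fits as soon as one
# waveform's worth has drained, so the writer never has to race the
# buffer all the way down to empty before its data is accepted.
buf = HostAOBufferModel(size_samples=3 * 512)
buf.drain(512)  # one waveform period elapses
assert buf.free_space() >= 512
```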

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 5

Thanks for the reply.

 

The timing between the start of each channel being turned "on" is not that important, but there is an overlap in the time each channel is enabled.  For example, channels 1, 2, and 3 might repeat the same waveform for 10 minutes but start and end 5 seconds apart in sequence.  It is more important that the output waveform is maintained continuously for the full duration of the test than exactly when it changes state or precisely how long it is maintained.

 

If I were to increase the buffer size to three times the waveform size, wouldn't I still need to write a waveform 3x larger, consisting of zeros, to make sure the waveform is not partially regenerated on the disabled channels?  Or do you mean to perform the updates in smaller chunks until the buffer is fully updated?

 

-Andrew

Message 3 of 5

I think I misunderstood your signal timing.  I thought only one channel was active in a given 5-second interval, and only for a small fraction of a second.  (Though not as small as I mentioned earlier -- I got mixed up about your sample rate because of simultaneous involvement in a thread with 500 kHz sampling.)

 

The basic idea for oversizing the buffer is to write things into the buffer much farther ahead of the time they're actually generated.  This extra latency isn't important in apps where the output pattern is pre-decided ahead of time.  So if you had 5 second bursts from 1 channel at a time, my advice would be to create a 15 second buffer.  Write your 1st 15 seconds of data to it initially, then update 5 second chunks at a time in your AO loop.  You would generally be writing that data about 10 seconds before it gets generated.  This gives you lots of "breathing room" in case your PC bogs down and you temporarily fall a little behind.
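The "breathing room" in that scheme is just the difference between the buffer length and the chunk length (simple arithmetic mirroring the numbers above):

```python
def steady_state_lead_s(buffer_s: float, chunk_s: float) -> float:
    """With a buffer_s-second buffer topped up in chunk_s-second writes
    each time chunk_s seconds drain, a freshly written chunk waits
    behind (buffer_s - chunk_s) seconds of already-queued data before
    it is generated.  That margin absorbs a temporarily bogged-down PC."""
    return buffer_s - chunk_s

# 15 s buffer with 5 s chunks: each chunk is written ~10 s early
lead = steady_state_lead_s(15.0, 5.0)  # 10.0
```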

When you always write an entire buffer's worth at once, you're asking DAQmx to manage receiving a full bucket's worth of water into a bucket it's also responsible for emptying, without overflowing and without running dry.  That timing can be tricky and sometimes leads to errors.

-Kevin P

Message 4 of 5

I extended the buffer size as you suggested, since that was the easiest solution for me to implement at the moment, and it has greatly improved the stability of the system.

 

Thanks again for the suggestion.

Message 5 of 5