High Speed Data Acquisition. The application is not able to keep up with the hardware acquisition.

Hello,

 

I am trying to import pressure readings at a rate of 1 kHz. I am using an NI USB-6366 with DAQmx, and I am getting error -200279: "The application is not able to keep up with the hardware acquisition."

 

However, when I probe the queue, it is empty. Which buffer is filling? Is it the onboard buffer of the USB-6366 device? Any suggestions for how to speed this up? Where can I even view this buffer on the device? I have verified that it is not the data export that is slowing things down; I have the same issue when I remove the Write Measurements to File block entirely.

 

 

Message 1 of 5

Posting your actual code, versus a picture, will get you help a lot faster. What pressure sensor are you trying to talk to?

Tim
GHSP
Message 2 of 5

That error # is a DAQmx error.   It suggests that your DAQ reading loop isn't extracting samples from the task buffer as fast as the device and driver are pushing new ones in.
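As a rough software analogue (plain Python, not actual DAQmx code; the buffer size and rates are made-up illustrative numbers), the failure mode behind -200279 looks like this: the driver fills a fixed-size task buffer at the hardware rate, and if the application drains it more slowly, unread samples eventually get overwritten and the task errors out.

```python
# Illustrative sketch only -- these are NOT real DAQmx values.
BUFFER_SIZE = 10_000   # hypothetical task buffer, in samples
SAMPLE_RATE = 1_000    # 1 kHz acquisition, as in the thread
READ_RATE = 800        # app only drains 800 samples/s (too slow)

def seconds_until_overflow(buffer_size, sample_rate, read_rate):
    """Net fill rate is (sample_rate - read_rate) samples per second."""
    net = sample_rate - read_rate
    if net <= 0:
        return float("inf")  # reader keeps up; the buffer never overflows
    return buffer_size / net

print(seconds_until_overflow(BUFFER_SIZE, SAMPLE_RATE, READ_RATE))  # 50.0
```

The point of the sketch: the error appears only after the *cumulative* shortfall exceeds the buffer size, which is why it can take a while to show up even when the loop is consistently too slow.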

 

But to my eye, your reading loop really couldn't be any leaner.  I don't see any reason why it can't keep up, unless you've gone out of your way to have an oddly small buffer size.  (Note: I'm looking at the DI task reading loop in vi2.png)

 

The only other thing catching my eye is all the probes. Sometimes debug probes can have a pretty drastic effect on execution speed, especially custom probes like graphs and history probes. However, I'm guessing you only started adding all the probes *after* experiencing the problem in the first place.

 

If you were using a fixed-length queue for your producer-consumer loops (and it doesn't appear that you are), there'd be a chance for the consumer loop to bog down on file writes, making the queue fill up and thus stalling the producer loop (which is also your DAQ reading loop).
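That fixed-length-queue failure mode can be sketched in a few lines (a hypothetical illustration using a plain Python queue standing in for LabVIEW's queue functions):

```python
import queue

# Hypothetical bounded producer-consumer queue. If the consumer bogs
# down, a fixed-length queue fills to capacity and the producer can no
# longer enqueue -- in a DAQ app, that stalls the very loop that must
# keep draining the task buffer.
q = queue.Queue(maxsize=4)

for chunk in range(4):
    q.put_nowait(chunk)        # consumer stalled: queue fills to capacity

try:
    q.put_nowait(4)            # one more chunk won't fit...
except queue.Full:
    print("producer stalled")  # ...so the DAQ read loop would block here
```

With an unbounded queue (the LabVIEW default when no size is wired), the enqueue never blocks and a slow consumer shows up as queue growth instead, which is why an empty queue argues against the consumer loop being the culprit.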

 

How soon does the error occur? Where is that DI task started? Perhaps something buried inside your custom config VIs adds too much delay between when the task starts and when you are able to start reading in the loop?
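The start-to-first-read headroom is easy to estimate (assumption: DAQmx picked its default input buffer size; I believe the default at a 1 kHz sample rate is 10,000 samples per channel, but check your task's actual buffer size):

```python
# Time the task buffer can absorb between DAQmx Start Task and the
# first read before samples are overwritten. The buffer size is an
# ASSUMED DAQmx default -- verify against your own task.
SAMPLE_RATE = 1_000    # S/s, from the thread
BUFFER_SIZE = 10_000   # samples per channel (assumed default)

headroom_s = BUFFER_SIZE / SAMPLE_RATE
print(headroom_s)      # 10.0 -- seconds of start-up delay tolerated
```

If config-VI work between Start Task and the first read approaches that headroom, -200279 would fire almost immediately; an error that appears only after minutes points at a steady-state drain problem instead.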

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 3 of 5

Kevin,

 

Thank you so much for your help. You are correct that I added the probes after having the issue.

 

The DI task starts in the "Start CorDIO" protocol. It has a few initialization steps, but the last step within it is DAQmx Start Task, and LabVIEW then goes directly to the producer loop, so there really isn't anything causing delay in between. However, because of your suggestion, I noticed the queue was waiting to initialize until the Start Task produced a "no error" output. I have removed this link so the queue initializes immediately on startup of the VI. This has increased the running time from about 10 minutes to 20 minutes, but something is still causing a crash. Since I need to collect data over 24 hours, we still have a long way to go.

 

Sometimes it will run for an hour, sometimes just 30 secs. Most times I can collect 10 minutes of data.

 

I have verified with an oscilloscope that it is not an issue with the sensor. The sensor is not outputting any error codes when the crash occurs; LabVIEW just stops sending the read commands to the sensor.

 

The queue is not limited in size. When I had consumer speed issues before, the buffer expanded to millions of data points before it crashed.

 

The queue is not filling; even when I am running the file write command it still stays close to empty, so I don't believe the error is caused by the consumer loop. The issue still occurs even when I remove the file write block entirely. The VI also isn't crashing at intervals of 20,000 samples while the data is saving, so this shouldn't be the issue.

 

I believe it is the memory of the buffer on the USB-6366 device. Unfortunately, I'm not sure how to access this memory since it is on the hardware. Do you have any suggestions on how to probe it?

 

Any other ideas are very welcome. I appreciate all of your help.

 

Message 4 of 5

I guess the next thing I'd try is to change some task properties and/or interactions. The first thing I'd probably try is to read more samples from the task each iteration. As a troubleshooting exercise, try reading 320 samples per iteration rather than 32.

 

I don't have a strong theory here; it's just a general observation that it's often best to handle a given data bandwidth by using larger chunks at a slower iteration rate rather than vice versa.
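A quick back-of-the-envelope with the thread's numbers (the framing in terms of per-iteration overhead is mine, but the rates come from the posts above):

```python
# At a fixed 1 kHz acquisition, the samples-per-read setting dictates
# how often the read loop must iterate. Bigger chunks mean fewer
# driver calls -- and less per-iteration overhead -- for the same
# total bandwidth.
SAMPLE_RATE = 1_000  # S/s, from the thread

def required_loop_rate(samples_per_read):
    """Loop iterations per second needed to keep up with the hardware."""
    return SAMPLE_RATE / samples_per_read

print(required_loop_rate(32))   # 31.25 iterations/s
print(required_loop_rate(320))  # 3.125 iterations/s: 10x fewer calls
```

Each DAQmx Read call carries roughly fixed overhead (driver transitions, USB transaction latency), so cutting the call count by 10x leaves far more slack per iteration.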

 

There are also deep-down DAQmx Channel properties related to "Data Transfer" and "USB Transfer" you might try tweaking.  Again, the error # suggests a different problem -- that the driver IS transferring data but your app isn't keeping up.  It's just the only other sorta relevant knob to twiddle that I know of.

 

Do you have any desktop PCs you could use to troubleshoot? I've long preferred to avoid USB devices due to their latency and bandwidth limitations, as well as their occasional "acting up." On the other hand, hearsay evidence suggests that the more capable USB devices (like yours) and cDAQ systems are *far* less prone to the inconsistencies seen in some of the lower-end and older devices.

 

 

-Kevin P

Message 5 of 5