
LabVIEW


Processing between DAQ measurements

I’m having a problem doing some continuous measurements, and I was hoping someone could point me in the right direction.

The basic setup is a PXI-4461 card, with an AO channel going through a DUT and into an AI channel.  What I need to do is take a series of measurements, altering either the AO signal amplitude or a feature on the DUT between each measurement.  In some cases, I need to examine the data from the previous read in order to set up the amplitude or DUT correctly for the next read.  The AI/AO block size is set to (# samples of data needed) + (# samples to write ahead).  On the first read, I take just the # samples of data needed.  The 'write ahead' portion is there so that the AO device doesn't run out of samples to generate before I can write more (there is some processing to do between writes).  Each subsequent read gets a whole block, but it lags the writing task by 'write ahead' samples, so I discard the 'write ahead' portion of the waveform.
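To make sure I'm describing the bookkeeping clearly, here is a minimal sketch of it in plain Python (the sample counts and the `split_reads` helper are made up for illustration; the real streams come from DAQmx reads): the first read returns only the needed samples, and every later read returns a full block whose leading 'write ahead' samples belong to the previous stimulus and get discarded.

```python
# Sketch of the write-ahead bookkeeping (parameters are illustrative only).
SAMPLES_NEEDED = 8   # samples of data actually wanted per measurement
WRITE_AHEAD = 3      # extra samples queued so the AO task never starves
BLOCK = SAMPLES_NEEDED + WRITE_AHEAD

def split_reads(stream, n_measurements):
    """Return the useful samples for each measurement from a raw sample stream."""
    results = []
    pos = 0
    for i in range(n_measurements):
        if i == 0:
            # First read: no stale write-ahead samples yet.
            block = stream[pos:pos + SAMPLES_NEEDED]
            pos += SAMPLES_NEEDED
        else:
            # Later reads: drop the leading write-ahead samples, which
            # were generated before the new stimulus took effect.
            block = stream[pos + WRITE_AHEAD:pos + BLOCK]
            pos += BLOCK
        results.append(block)
    return results

print(split_reads(list(range(30)), 3))
```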

The problem is that if 'write ahead' is too short, I get an underflow error, and if it is too long, I get an overwrite error.  There isn't much wiggle room between the two with the measurement settings I need (duration 1 s, sampling rate ~40 kHz), and I'm worried that, depending on how much processing the computer is doing at the time, I will hit one of these errors.
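For scale, a quick back-of-envelope check (the write-ahead lengths below are assumptions, not my actual settings): at ~40 kHz the write-ahead length directly sets how long the processing between writes is allowed to take.

```python
# The processing budget between writes is write_ahead / sample_rate.
RATE = 40_000  # samples/s, roughly the rate from the post

for write_ahead in (2_000, 4_000, 8_000):
    budget_ms = 1_000 * write_ahead / RATE
    print(f"write_ahead={write_ahead:5d} samples -> "
          f"{budget_ms:5.1f} ms to compute and write the next block")
```

So a few thousand samples of write-ahead only buys tens of milliseconds of slack, which is easy to blow through if the OS schedules something else in between.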

What is the best solution? Increasing the read buffer size (and if so, what is the recommended size)? Using multiple finite data acquisitions? Taking shorter measurements and appending the data of consecutive reads?  The reason I didn't start with multiple finite data acquisitions is that I thought starting and stopping the tasks all the time would be inefficient, but perhaps it doesn't add that much overhead?
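For the "multiple finite acquisitions" option, the structure I have in mind is roughly the following sketch. The `acquire` stub stands in for a self-contained finite write + finite read cycle (in real code, the DAQmx start/write/read/stop calls); the DUT response model and all names here are invented for illustration.

```python
def acquire(stimulus):
    """Stub for one finite AO write + AI read cycle.

    Pretends the DUT attenuates the stimulus by half; a real version
    would start the tasks, write the stimulus, read back, and stop.
    """
    return [0.5 * s for s in stimulus]

def run_measurements(first_amplitude, n):
    """Run n finite measurements, adjusting amplitude between each."""
    amplitude = first_amplitude
    results = []
    for _ in range(n):
        data = acquire([amplitude] * 4)   # one finite measurement
        results.append(data)
        # Processing between measurements: here, a toy rule that bumps
        # the next amplitude based on the previous response.
        amplitude = max(data) + 1.0
    return results

print(run_measurements(1.0, 3))
```

The appeal is that each cycle owns its buffers, so there is no shared continuous buffer to under- or overflow, and the processing between measurements can take as long as it needs.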

Thanks for any help

Message 1 of 4
What I'm saying may not address the real issue...
 
Would it be possible for you to read smaller blocks of data (say, 250 ms at a time instead of the full 1 s block) and process each block as it arrives, so the processing keeps up?
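Something along these lines (a sketch only; the chunk size is assumed from the 250 ms suggestion, and the zero-filled chunk stands in for a real DAQmx read):

```python
# Read the 1 s measurement as four 250 ms chunks, processing each
# chunk while the next one is filling the DAQ buffer.
RATE = 40_000            # samples/s
CHUNK = RATE // 4        # 250 ms of samples per read

measurement = []
for chunk_index in range(4):
    chunk = [0.0] * CHUNK        # stub for a DAQmx read of CHUNK samples
    # ... per-chunk processing would go here ...
    measurement.extend(chunk)

print(len(measurement))          # full 1 s record: 40000 samples
```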
 
kallis
BR
0 Kudos
Message 2 of 4
(2,391 Views)
It's possible.  I've been experimenting a bit, though, and doing multiple finite reads seems to be the least complex solution, and the most robust.  Is there any reason why this wouldn't be the best approach (overhead, etc.)?
Message 3 of 4
Hello Lila K.
 
Thank you for contacting National Instruments. 
 
For your application, the finite method looks like your best solution.  You mention several options and state that multiple finite reads is the most robust for your application.  There is some overhead in starting and stopping the task, but as long as your application is functioning properly now and the task isn't being restarted often, I would stick with your current approach.  Is it indeed functioning properly?  I know that overflow/underflow can be a tricky issue, so let us know if you are still having trouble with it. 
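If you want to quantify the overhead yourself, one simple approach is to time an empty start/stop cycle and compare it to your 1 s measurement time. The sketch below uses a stub in place of the real start/stop calls, so the number it prints only demonstrates the measurement technique, not actual DAQmx overhead.

```python
import time

def start_stop():
    """Stub standing in for a real task start/stop pair."""
    pass

# Average many cycles so timer resolution doesn't dominate.
N = 1000
t0 = time.perf_counter()
for _ in range(N):
    start_stop()
overhead_s = (time.perf_counter() - t0) / N
print(f"~{overhead_s * 1e6:.2f} us per start/stop cycle (stub only)")
```

With a 1 s measurement, even a few milliseconds of real start/stop overhead would be well under 1% of the cycle time.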
 
Have a great day!
 
Brian F
Applications Engineer
National Instruments
Message 4 of 4