CPU usage on buffered analog input and output

I'm developing two separate applications in LabVIEW: one that uses continuous buffered analog input and another that uses continuous buffered analog output.

In the continuous input application, I've noticed that by placing a delay in the loop that contains the AI Read VI, the CPU usage drops to almost nothing once the delay approaches the time it takes for a buffer to be filled (for example, a 500 ms delay for a buffer of 1000 samples at a sample rate of 2000 samples/s). This makes sense: by the time the delay is over, the buffer is already almost full when AI Read is called, so AI Read doesn't have to sit and wait for the buffer to fill before returning data.
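To make the arithmetic above concrete, here is a small sketch (in Python, since the LabVIEW diagram itself can't be shown in text; the function name is mine, not a DAQ API):

```python
def buffer_fill_time_ms(buffer_samples, sample_rate_hz):
    """Time for the card to fill (or empty) one buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

# The example from the post: 1000 samples at 2000 samples/s take 500 ms.
fill_ms = buffer_fill_time_ms(1000, 2000)
print(fill_ms)  # 500.0

# A loop delay just under the fill time means AI Read is called when the
# buffer is already almost full, so it returns quickly instead of polling.
loop_delay_ms = fill_ms - 50
print(loop_delay_ms)  # 450.0
```

The same arithmetic tells you when to expect an underrun on the output side: if the loop delay exceeds the buffer-empty time, new data should arrive too late.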

In my continuous output application, I put a similar delay in the loop that contains the AO Write VI; however, the CPU usage remains at 100% no matter what the delay is. The sample rates and buffer sizes are the same in both applications. The data I am writing to the output buffer is continuously updated: I'm generating non-repeating data for each buffer write, not just regenerating a sine wave every period, for example. I've profiled the VI time statistics, and they show that the vast majority of the time is spent executing the AO Write VI.

Any ideas why AO WRITE uses so much more CPU than AI READ? What can I do to decrease CPU usage on an AO WRITE?

Another thing I've noticed is that I have regeneration turned off in AO Write, but when I set the delay time in the loop containing AO Write to a value greater than the time it takes to empty the buffer, I never get a buffer underrun error. It seems like I should get one, because if I make the delay long enough, the buffer will empty before I write new data to it. Perhaps this behavior is a cause or a symptom of the excessive CPU usage problem I mentioned above.
0 Kudos
Message 1 of 12
(4,834 Views)
Superconductor-

Thank you for posting on the National Instruments Discussion Forums.

Ideally, if you wanted to perform true simultaneous input and output, you would use LabVIEW RT with RT hardware to guarantee determinism in your system. The Windows environment is not deterministic, which makes the problems you are describing a tough issue.

The approach you are taking to reduce processor time on the analog input is perfect. The time-delay VI you should be using is Wait Until Next ms Multiple.vi. AI Read polls the input buffer until the specified number of samples has been transferred into the card's memory, and then reads that buffer into the computer's memory. The delay prevents AI Read from executing until just before it needs to read the card's memory again. (AI happens at the end of the acquisition.)
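For readers following along in a text language, the pacing idea behind Wait Until Next ms Multiple can be sketched roughly like this (a Python approximation written for illustration, not the VI's actual implementation):

```python
import time

def wait_until_next_ms_multiple(multiple_ms):
    """Sleep until the clock reaches the next multiple of multiple_ms,
    roughly like LabVIEW's Wait Until Next ms Multiple function."""
    now_ms = time.monotonic() * 1000.0
    remainder = now_ms % multiple_ms
    if remainder > 0:
        time.sleep((multiple_ms - remainder) / 1000.0)

# Pacing a loop on 500 ms boundaries: each iteration wakes up just as the
# next buffer's worth of samples becomes ready, so the read that follows
# returns almost immediately instead of burning CPU while it waits.
for _ in range(2):
    wait_until_next_ms_multiple(500)
    # ... AI Read would go here in the LabVIEW diagram
```

Because the wait aligns to clock multiples rather than adding a fixed delay, loop iterations stay synchronized to the buffer-fill period even if the work inside the loop takes a variable amount of time.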
Analog output, on the other hand, is different. You have to write data to the output buffer, wait until all data in the buffer has been sent out, and then overwrite the output buffer again with new values. (AO happens at the beginning of the operation.)

Because AI happens at the end and AO happens at the beginning, these processes can truly consume a lot of processor time monitoring and updating the AI or AO buffers all of the time.

Your second question, about regeneration, is more straightforward. "Allow Regeneration" may be set to false on AO processes, and AO Write will show an error if regeneration occurs. This can be seen in the following example. Note that regeneration will most likely occur because the example uses error clusters to specify order of operation, forcing the AI to happen before the AO operation. If regeneration occurs, you will get an error. Open your LabVIEW application to find the example:
LabVIEW >> Find Examples >> Search >> keyword: simultaneous >> Simul AIAO Buffer (E-Series).vi
Message 2 of 12
Dear Superconductor,

I have a similar problem with dynamic AO output in my "synchronous" AO/AI system.

My problem: every ~10th cycle of the for loop, writing data into the buffer takes much more time than in a "good" cycle.

Unfortunately, I'm not very familiar with LabVIEW. Would you send me an example of your solution?

peter

**********
LabVIEW 7.1
Traditional DAQ driver
WinXP:(

Message Edited by labfritz on 06-15-2006 05:17 PM

Message 3 of 12
labfritz,

I am not Superconductor, but hopefully I can help you out.  This behavior is not completely surprising, given that the loop is running in a non-deterministic operating system (Windows).  The loop time can easily vary based on a variety of factors, including what else is running on the system and how Windows decides to allocate resources.  To better understand what might be happening, could you tell me what hardware you are using?  Also, how many samples are you reading and writing in each iteration of the loop?  Is there any specific reason you have chosen to use the Traditional NI-DAQ driver?  In some applications your loop time can be improved by moving to the NI-DAQmx driver.

Regards,

Neil S.
Applications Engineer
National Instruments
Message 4 of 12
Thanks Neil,

I'm using an NI DAQPad-6052E (FireWire), which is not supported by the NI-DAQmx driver.

In my application, the AO buffer (2058 S) has to be filled each loop iteration with different ramp data. At the moment I call AO Write + AO Start continuously. This works very badly, and I'm looking for a better (i.e., faster) way to update the AO buffer.
In each loop iteration, 1200 samples have to be written and read synchronously. One day I will have to run experiments with more samples. I achieve "a kind of synchrony" by inserting different delays; unfortunately, this boosts the cycle time :(
I also have small problems with the AI stream: calling AI Read + AI Start six times each loop (AI buffer: 512 S) doesn't acquire proper data. If I read the buffer continuously instead, I don't know how to start and stop it each loop iteration.

Annoyingly, I have to start my experiments on Tuesday...

Regards,

Peter


Message 5 of 12

Hello Peter,

Are you using FIFO mode for your analog output?  If not, I would give this a try.  You can read more about how to do this in this knowledgebase and try it out in an example program.

I do not understand your comment about the analog input portion of your program.  Why are you not doing a continuous analog input, as in the example Cont Acq&Graph (buffered).vi?  You can find this example in the Example Finder in LabVIEW by going to Help >> Find Examples.  This way, you do not need to stop and start the acquisition each loop iteration.  You can always discard the data you do not need before graphing it or saving it to file.

If you do want to start and stop the analog input acquisition once each loop iteration, you can use AI Control with the control codes of start and pause immediately to pause the operation between loop iterations.

Hope this helps,

Laura

Message 6 of 12
Yes, thanks!
My problems are almost solved 🙂 I attach a little example of my solution. I don't use FIFO at the moment.  Remaining problem: the prefilled AO buffer is written out twice, and I have no idea why!

Explanation: the AO buffer is 2000 S. The first 2000 samples flow directly into the buffer. Any samples beyond 2000 (new data) are updated during the loop in 200 S portions.  Annoyingly, my program starts writing new data at sample 4000 (twice the buffer size) and not, as expected, at 2000.


Message Edited by labfritz on 06-20-2006 06:11 AM

Message 7 of 12

Hi Peter,

I am glad to hear you are making progress.  I have taken a look at your VI and I do not see anything that should be keeping the buffer from updating.  However, I can't run the VI because the subVIs in your ramp VI are missing.  Can you post them?

Thanks,

Laura

Message 8 of 12

Message Edited by labfritz on 06-21-2006 11:43 AM

Message 9 of 12

Hi Peter,

I think I see the problem.  It's not that the 2000-sample DC voltage buffer is being output twice, but that the 200-sample ramp waveform alternates with the DC voltage until the buffer is full of the ramp signal.  On your first few loop iterations, the buffer is filled with 2000 samples of DC signal, and then you add 200 samples of your ramp signal, which overwrites part of the DC signal.  This continues until the buffer has been replaced with your ramp.  My recommendation is to fill the buffer for the output operation on the first iteration of the while loop with a 2000-sample ramp signal.  This way, you will start outputting the ramp as soon as the 2000 samples of DC voltage have been output.  Another recommendation is not to write to the buffer every while loop iteration, but only when you need to update the waveform.
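The effect described above can be reproduced with a small circular-buffer simulation (a hypothetical Python sketch using the thread's numbers; it models the overwrite pattern, not Traditional DAQ itself):

```python
BUFFER_SIZE = 2000   # samples in the AO buffer
CHUNK_SIZE = 200     # samples written per loop iteration

buffer = [0.0] * BUFFER_SIZE   # prefilled with the DC value (0.0 here)
write_pos = 0

def write_chunk(chunk):
    """Overwrite len(chunk) samples at the current write position,
    wrapping around like a circular AO buffer."""
    global write_pos
    for i, sample in enumerate(chunk):
        buffer[(write_pos + i) % BUFFER_SIZE] = sample
    write_pos = (write_pos + len(chunk)) % BUFFER_SIZE

# While the card plays the first pass of DC data, ten 200-sample "ramp"
# chunks (1.0 here as a stand-in) gradually replace it. Until all ten
# have been written, the output mixes ramp segments and leftover DC,
# which is why the ramp appears a full buffer later than expected.
for n in range(10):
    write_chunk([1.0] * CHUNK_SIZE)
    print(n + 1, buffer.count(0.0))   # DC samples remaining: 1800, 1600, ...

print(buffer.count(0.0))  # 0: buffer fully replaced after 10 chunks
```

Prefilling the buffer with a full 2000-sample ramp on the first iteration, as recommended above, removes the transition period entirely.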

Regards,

Laura

Message 10 of 12