Multifunction DAQ


DAQmx buffer overwritten

Hi,
 
I'm using a really simple DAQmx program found in the "Find Examples" resource of the LabVIEW Help.
It's the "Cont Acq&Graph Voltage-Int Clk.vi".
 
I just chose another polymorphic VI for the DAQmx Read function. Instead of the Analog 1D Wfm NChan NSamp, I chose the Analog DBL 1Chan 1Samp.
I tried it with a rate of 10,000 Hz (10 kHz) and it works, but the rate is not exactly constant: sometimes the interval matches 10 kHz, but sometimes it jumps by about 3 ms.
The other problem is that when I tried 40,000 Hz (40 kHz), I had problems with the buffer. The error message says that the buffer was overwritten and that I have to increase it.

My questions are: How can I get a perfectly constant rate and how can I increase the buffer?

I'm using only one channel now, but I'll have to use three channels at the same time later.
My computer is a relatively new PXI system, and my OS is Windows 2000.
 
Thank you!
Message 1 of 4
Poly Meca,
 
When you are dealing with hardware-timed buffered analog input, there are several things going on.  The rate at which the hardware samples the channels in your task will be dictated by the rate you set on the DAQmx Timing VI.  Basically, this rate controls a clock on your device.  When this clock pulses, all of the channels in your task will be sampled (the exact timing of your samples will vary depending on whether you are using an M-Series, E-Series, or S-Series device).  But your hardware will give you a sample on all of your channels every time this clock pulses.  This will be a very constant rate.  Once a sample is taken, it passes through a FIFO on the device, over the PCI bus, and into a buffer on your computer.  This transfer of data can take varying amounts of time, depending on factors such as your data transfer request condition, your data transfer mechanism, the amount of bus traffic, and so on.  When you call read, the Read VI will transfer data out of the buffer on your computer and return it to you.
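As a rough text-form sketch of that same chain (your example is a LabVIEW VI, but the equivalent steps in the nidaqmx Python API may make the task/timing/read structure easier to see; the device name "Dev1", the 10 kHz rate, and the loop count are just placeholder values, not your actual settings):

import nidaqmx
from nidaqmx.constants import AcquisitionType

# Sketch of a continuous, hardware-timed analog input acquisition.
# "Dev1/ai0" and 10 kHz are assumed values for illustration only.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")       # channel(s) in the task
    task.timing.cfg_samp_clk_timing(
        rate=10000,                                         # programs the sample clock on the device
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=10000,                               # also used to size the host-side buffer
    )
    task.start()
    for _ in range(10):
        # Each read pulls data that has already travelled FIFO -> bus -> host buffer.
        data = task.read(number_of_samples_per_channel=1000)
        print(len(data), "samples read this iteration")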
 
From reading your question, you wrote, "I tried it with a rate of 10,000 Hz (10 kHz) and it works, but the rate is not exactly constant: sometimes the interval matches 10 kHz, but sometimes it jumps by about 3 ms."  Are you referring to the amount of time that the loop with the read function inside takes?  If so, keep in mind that this can be drastically affected by other factors, such as whether the operating system gives processor time to some other process in the middle of your loop, whether LabVIEW has to redraw indicators on your front panel, and so on.  Keep in mind, however, that your device is still acquiring data at the sample rate you specified.  If your loop rate is less than your sample rate, eventually the buffer which sits between your DAQ device and your program will fill up, and I would expect that you would see a buffer over-write.  While you can use DAQmx Configure Input Buffer.vi to increase the size of your input buffer, you will still eventually over-write if your loop rate does not keep up with your sample rate.
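If you do want to try a bigger buffer, the counterpart of DAQmx Configure Input Buffer.vi in the Python sketch above is a one-line property write (the 200,000-sample value is only an example, not a recommendation):

# Roughly equivalent to DAQmx Configure Input Buffer.vi: explicitly size the
# host-side input buffer, in samples per channel (example value only).
task.in_stream.input_buf_size = 200000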
 
What I would recommend is that you go back to using an 'NSamp' version of read.  These will be more efficient than the '1Samp' versions (i.e., less function-call overhead, less arbitration of the buffer between the device writing into it and your application reading from it, etc.).  This will also allow you to read your data with a lower loop rate than the '1Samp' method requires.  Or is there a reason why you must use the '1Samp' read flavor?
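In the Python sketch above, the difference between the two read styles looks roughly like this (illustration only):

# '1Samp'-style: one driver call per sample, so the loop must keep up with the sample rate.
sample = task.read()                                      # returns a single value for a 1-channel task

# 'NSamp'-style: one driver call returns a whole block, so the loop can run much more slowly.
block = task.read(number_of_samples_per_channel=1000)     # returns a list of 1000 values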
 
I realize that was a bit of a long explanation; however, I hope it explains what you are seeing.  Please post again if it does not clear up your questions.
Hope this helps,
Dan
Message 2 of 4

Hi Dan,

First of all, thank you!

This really helps me. There's only one thing I don't understand: why do I need a lower rate if I take N samples? I understand that the more samples you take per read, the lower the rate you need, but are these N samples read one at each time increment or all at the same time?

I'm sorry, I don't know much about how acquisition cards work physically...

Thank you very much!

Message 3 of 4

Poly Meca,

The lower rate I was referring to in my third paragraph is the rate at which the software loop where you call the DAQmx Read VI needs to run.  If you read only a single sample at a time, the loop must run once for each and every sample acquired.  For example, if you are acquiring at 10 kHz, reading 1 sample at a time will require your loop to run at a rate of 10 kHz.  If it does not, data will eventually accumulate in your buffer until that buffer overflows.  However, if you read data 1000 samples at a time, then your loop only needs to iterate 10 times per second.
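In other words, the required loop rate is just the sample rate divided by the number of samples you read per call (the numbers below are the ones from this example):

sample_rate = 10000          # samples per second from the hardware clock
samples_per_read = 1000      # 'number of samples per channel' passed to DAQmx Read
loop_rate = sample_rate / samples_per_read   # = 10 loop iterations per second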
 
The way this will work is that your hardware will continually be sending data to the buffer at a rate of 10 kHz.  When DAQmx Read is called, the driver will examine the buffer and determine whether 1000 samples are available (if you set the 'number of samples per channel' input to 1000, as in my example above).  If there are not yet that many samples in the buffer, the Read VI will essentially go to sleep for a period of time.  When it wakes up, it will again examine the buffer to see how many samples are available (keep in mind that while all of this is occurring, your device is still sampling data at 10 kHz and sending data to the buffer).  If there are 1000 samples available, it will then read them from the buffer and return them to your program (all of these samples get returned at the same time).  If there aren't 1000 samples available, it will again go to sleep and continue the sleep/wake/check cycle.

This can have several advantages.  The example which you mentioned takes the data read and writes it to a graph.  If you only read one sample per channel per read, then LabVIEW will need to redraw your graph for every sample acquired (or 10,000 times every second); if your graph auto-scales its data, LabVIEW will also attempt to re-scale the graph 10,000 times per second.  This consumes a lot of your processor's resources.  If you only read 10 times per second, operations such as redrawing and re-scaling the graph only have to happen 10 times per second.  By reading more samples less often, you free up your processor while waiting for samples to arrive in the buffer.  This will allow other programs to run, or other VIs to execute (if, for instance, you had a separate VI running an analog output generation).
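A sketch of that read loop in the same Python terms (the stop flag and update_graph are hypothetical placeholders for whatever your program actually does with the data):

# read() blocks -- internally sleeping and re-checking the buffer -- until 1000
# samples per channel are available, so this loop body only runs about 10 times
# per second at a 10 kHz sample rate.
keep_running = True
while keep_running:
    block = task.read(number_of_samples_per_channel=1000, timeout=10.0)
    update_graph(block)   # hypothetical display/processing step: ~10 calls/s instead of 10,000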
 
From a hardware standpoint, when you run the Start VI, basically all of the analog input clocks on your DAQ card become active.  The card starts sampling and transferring data to the buffer on your computer at the rate specified in the DAQmx Timing VI.  For a continuous acquisition, the hardware will continue to sample at the specified rate until one of two things happens.  The first is that you stop your task.  The second is that an error is encountered (i.e., the buffer it is writing into becomes full).  The process in hardware is fairly independent of how you read the data from the buffer.  However, the flavor of the Read VI you choose to use, and what you do with the data you acquire, can have a large impact on how efficiently your software runs, and as a result how quickly you can read the data being sent from the device.
 
I hope this addressed some of the questions that you still have.  If not, post back and I'll see if I can get you the answers you need.
Dan
Message 4 of 4