Multifunction DAQ

DMA buffer in mxbaseconfig

I am working on code in C++ on OS X using NI-DAQmx Base. I have gotten to the point of trying out mxbaseconfig and DAQmxBaseLoadTask(). I'm using a PCI-6014 for testing.

I notice that the pre-configured example task "ai finite buffered" has, on its Advanced tab, DMA channel 1 selected with a DMA buffer size of 32768. So when I created my own analog input task, I did the same. In my task, I selected AI channels 0 and 1 and 10000 scans.

When I load and run the task, DAQmxBaseReadAnalogF64() comes back with error 42 (did one of the engineers read the Hitchhiker's Guide?) and DAQmxBaseGetExtendedErrorInfo() returns this message:

RLP Invoke Node The DMA buffer overflowed because data was not read from the buffer as fast as the DMA channel wrote to the buffer.

If I increase the DMA buffer setting to 65536, it works. If I turn off DMA, it works.
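
For reference, here is roughly what my read code looks like. This is just a minimal sketch: "myAITask" is a placeholder for whatever name the task was saved under in mxbaseconfig, and the timeout and array sizes match the 2-channel, 10000-scan task described above.

#include "NIDAQmxBase.h"
#include <stdio.h>

int main(void)
{
    TaskHandle     task = 0;
    static float64 data[2 * 10000];   /* 2 channels * 10000 scans */
    int32          scansRead = 0;
    char           errBuff[2048] = {0};
    int32          err;

    /* "myAITask" is a placeholder for the task name saved in mxbaseconfig */
    err = DAQmxBaseLoadTask("myAITask", &task);
    if (err == 0)
        err = DAQmxBaseStartTask(task);
    if (err == 0)
        err = DAQmxBaseReadAnalogF64(task, 10000, 10.0,
                                     DAQmx_Val_GroupByScanNumber,
                                     data, 2 * 10000, &scansRead, NULL);
    if (err != 0) {
        /* This is where I see error 42 with the 32768-byte DMA buffer */
        DAQmxBaseGetExtendedErrorInfo(errBuff, 2048);
        printf("DAQmxBase error %d: %s\n", (int)err, errBuff);
    }

    if (task) {
        DAQmxBaseStopTask(task);
        DAQmxBaseClearTask(task);
    }
    return 0;
}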

So here's the question finally:

Does the DMA buffer setting have to be larger than the total number of bytes that will be acquired? How large can it be? Are there other details I should know about DMA and NI-DAQmx Base?

OK, so it's three questions...
John Weeks

WaveMetrics, Inc.
Phone (503) 620-3001
Fax (503) 620-6754
www.wavemetrics.com
Message 1 of 4
>>Does the DMA buffer setting have to be larger than the total number of bytes that will be acquired?
Well, more precisely, the DMA buffer needs to be bigger than the number of scans you read in at a time. What is happening in your situation is that you are telling the DAQ device to put 10000 scans (2 channels * 2 bytes per sample * 10000 samples = 40000 bytes) into a buffer in RAM that is only 32768 bytes big, and before your application gets a chance to read any of those data points into its own memory space, some of them are being overwritten (because the total number of scans to read won't fit into the buffer). Notice that a buffer size of 40000 will work, but a buffer of 39999 won't.
Maximum DMA buffer size is probably system dependent, based on amount of available RAM.
To avoid this error in your situation without increasing the buffer, you can simply read fewer samples more often (put the DAQmxBase read call in a loop) rather than trying to read everything at once.
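
In C, that loop looks roughly like the sketch below. This is only an outline: the task is assumed to be created and started already, and the 1000-scan chunk size is arbitrary (anything that comfortably fits in the DMA buffer will do).

#include "NIDAQmxBase.h"

#define NUM_CHANS   2
#define CHUNK_SCANS 1000

/* Sketch: read numScans total scans from an already-started task in
   fixed-size chunks, so the whole acquisition never has to sit in the
   DMA buffer at once. */
static int32 ReadInChunks(TaskHandle task, int32 numScans)
{
    float64 chunk[NUM_CHANS * CHUNK_SCANS];
    int32   scansRead = 0, totalScans = 0, err = 0;

    while (totalScans < numScans && err == 0) {
        err = DAQmxBaseReadAnalogF64(task, CHUNK_SCANS, 10.0,
                                     DAQmx_Val_GroupByScanNumber,
                                     chunk, NUM_CHANS * CHUNK_SCANS,
                                     &scansRead, NULL);
        if (err == 0) {
            totalScans += scansRead;
            /* ...copy or process the scansRead scans in chunk here... */
        }
    }
    return err;  /* call DAQmxBaseGetExtendedErrorInfo() on failure */
}
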
-Alan A.
Message 2 of 4
Thank you, Alan.

So is the DMA Buffer setting in mxbaseconfig equivalent to calling DAQmxBaseCfgInputBuffer()? In C code, if I don't call DAQmxBaseCfgInputBuffer(), isn't the buffer allocated automatically based on the sampsPerChanToAcquire value passed to DAQmxBaseCfgSampClkTiming()?

My confusion comes from the fact that it looks like I have to explicitly set an "Advanced" setting in mxbaseconfig, whereas my C code gets some automatic behavior.
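
For comparison, this is the pattern my C code follows when it builds the task programmatically instead of loading it from mxbaseconfig. It's just a sketch; the device name "Dev1" and the 10 kHz rate are placeholders. Note that there is no DAQmxBaseCfgInputBuffer() call anywhere, so the buffer sizing comes entirely from sampsPerChanToAcquire.

#include "NIDAQmxBase.h"

/* Sketch of the dynamic path: build the task in code and rely on the
   automatic buffer sizing from sampsPerChanToAcquire. "Dev1" is a
   placeholder device name. */
static int32 BuildFiniteAITask(TaskHandle *taskOut)
{
    TaskHandle task = 0;
    int32 err = DAQmxBaseCreateTask("", &task);

    if (err == 0)
        err = DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0:1", "",
                                           DAQmx_Val_Cfg_Default,
                                           -10.0, 10.0, DAQmx_Val_Volts, NULL);
    if (err == 0)
        err = DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 10000.0,
                                        DAQmx_Val_Rising,
                                        DAQmx_Val_FiniteSamps,
                                        10000);  /* sampsPerChanToAcquire */
    *taskOut = task;
    return err;
}
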
John Weeks

WaveMetrics, Inc.
Phone (503) 620-3001
Fax (503) 620-6754
www.wavemetrics.com
Message 3 of 4
John,

>>So is the DMA Buffer setting in mxbaseconfig equivalent to calling DAQmxBaseCfgInputBuffer()?

No. The DMA Buffer setting in the Config Utility sets the size of the DMA buffer, which is used only by DMA channels. The DAQmxBaseCfgInputBuffer function sets the size of the input buffer, which is used by all other channels.

When you use a dynamic example (C or LabVIEW), DAQmx Base will use a DMA channel if one is available. Also, both the DMA buffer and the input buffer will be sized automatically, based on the number of samples to acquire and the sample timing. You can programmatically change the size of the input buffer, but not the DMA buffer.
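
So, in code, the only size you can touch is the input buffer. A quick sketch (the 20000-samples-per-channel value is just an example):

#include "NIDAQmxBase.h"

/* Sketch: override the automatically chosen input buffer size.
   The DMA buffer size still comes from the Config Utility's Advanced tab. */
static int32 ConfigureTimingAndBuffer(TaskHandle task)
{
    int32 err = DAQmxBaseCfgSampClkTiming(task, "OnboardClock", 10000.0,
                                          DAQmx_Val_Rising,
                                          DAQmx_Val_FiniteSamps, 10000);
    if (err == 0)
        err = DAQmxBaseCfgInputBuffer(task, 20000);  /* samples per channel */
    return err;
}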

-Alan A.
Message 4 of 4