Multifunction DAQ

Error allocating DMA channel with NI-DAQmx Base 2.0, Mac OS X

I have a VI in LabVIEW 7.1 that uses NI-DAQmx Base 2.0 on Mac OS X 10.4.6. It does a lot of serial, GPIB, etc. as well, but the DAQ part fails after running for 9 days. At least it did this time; I haven't waited a second 9 days.

The DAQ part creates a channel for 32 channels and a finite number of samples (1250 per channel) at 80 kHz. About once per second, the application starts a DAQ operation, waits, and then reads the returned values. This is supposed to happen continually.
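
(In C API terms, the per-cycle sequence is roughly the sketch below. My actual code is a LabVIEW VI, the device/channel names here are placeholders, and I am treating the 80 kHz as the aggregate rate across the 32 channels, i.e. 2.5 kS/s per channel.)

```c
#include <unistd.h>
#include "NIDAQmxBase.h"          /* NI-DAQmx Base C API */

#define N_CHANNELS 32
#define N_SAMPLES  1250           /* finite samples per channel per cycle */
#define SCAN_RATE  2500.0         /* per-channel rate (80 kS/s aggregate / 32) */

int main(void)
{
    TaskHandle task = 0;
    static float64 data[N_CHANNELS * N_SAMPLES];
    int32 nread = 0;

    /* The task and channels are created once... */
    DAQmxBaseCreateTask("", &task);
    DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0:31", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxBaseCfgSampClkTiming(task, "OnboardClock", SCAN_RATE,
                              DAQmx_Val_Rising, DAQmx_Val_FiniteSamps,
                              N_SAMPLES);

    /* ...but each cycle starts the finite task, waits for the read to
       complete, and stops the task again. */
    for (;;) {
        DAQmxBaseStartTask(task);
        DAQmxBaseReadAnalogF64(task, N_SAMPLES, 10.0,
                               DAQmx_Val_GroupByChannel, data,
                               N_CHANNELS * N_SAMPLES, &nread, NULL);
        DAQmxBaseStopTask(task);
        sleep(1);                 /* about one acquisition per second */
    }

    DAQmxBaseClearTask(task);     /* not reached */
    return 0;
}
```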

This morning, I got a "The DMA Channel could not allocate a large enough buffer. Try using a smaller buffer size. Error 42 at RLP Invoke Node." Now, it has allocated this same amount of data over and over, so hopefully it is not locking up memory (the system has about 1 GB in it). Why would there suddenly not be a large enough buffer? The buffer should be 2 bytes * 32 channels * 1250 samples = 80 kbytes. That is not a big chunk nowadays.

Temporarily I have reduced the size of the chunk of data read, to 781 samples at 50 kHz, but I should not have to do that. Has anyone else seen this problem? Any resolutions? I have reported it to an AE at NI, but other experience would be helpful.

Message 1 of 13
Hi sth,

I hope you're doing well. It is strange that you would be seeing these messages only after running for such a long period of time. I am curious whether you have noticed any memory leaks associated with running the application. With such a small amount of data it may be hard to notice in the short term, but it would be interesting to see how the memory usage changes after a few days. I agree that 1 GB of RAM should be plenty, but this was just something I was curious about. We'll continue to research this issue, but other users may have first-hand experience with this type of error and be able to provide some insight.

Thaison V
Applications Engineer
National Instruments

Message 2 of 13
Thaison,

Thanks for the note. I also have a service request in on this and am supposed to hear back today. (SRQ #809591 with Mike).

I haven't dug through NI-DAQmx Base to see whether the actual DMA allocation is in the kernel or in user space. The RLP Invoke Node is a user-space client calling into the Register Level Program in the kernel driver. The program has now been running again (with a smaller number of samples on each cycle) since 10:51 AM EDT on 5/5.

Here is the output from "top -uw". If either the kernel or LabVIEW is leaking memory, it is not doing it very fast! Even 80 kB/cycle should show up as 80 MB after 20 minutes (1000 cycles), and the smaller read should be about a 50 kB buffer, so if it is leaking, it is not leaking the entire buffer. It may be that the DMA engine just cannot get enough contiguous memory? I don't know; that is why you guys are the experts!! If there is anything else I can give you to help track this down, please let me know.

Processes: 61 total, 2 running, 59 sleeping... 227 threads 10:01:58
Load Avg: 0.56, 0.65, 0.84 CPU usage: 24.2% user, 24.2% sys, 51.6% idle
SharedLibs: num = 178, resident = 39.9M code, 3.59M data, 16.2M LinkEdit
MemRegions: num = 8413, resident = 171M + 10.3M private, 88.0M shared
PhysMem: 115M wired, 298M active, 843M inactive, 1.23G used, 22.7M free
VM: 4.43G + 107M 50874(0) pageins, 1956(0) pageouts
PID COMMAND %CPU TIME #TH #PRTS(delta) #MREGS VPRVT RPRVT(delta) RSHRD(delta) RSIZE(delta) VSIZE(delta)
17313 LabVIEW 48.6% 156 hrs 29 205 668 100M 60.7M(68K) 42.4M(-68K) 65.2M 247M
6453 top 15.4% 3:24.61 1 20 23 26.2M 1.29M 456K 1.71M 26.9M
68 WindowServ 9.0% 19:03:23 4 260(-1) 353 57.2M 4.24M(4K) 19.8M(-4K) 19.7M 122M
0 kernel_tas 3.4% 11:05:15 44 2 3615 46.1M 27.2M 0K 93.0M(-4K) 970M(-4K)
327 SystemUISe 1.5% 4:20:13 3 223 251 33.8M 5.00M 11.1M 7.95M 136M

Message 3 of 13
Thanks sth,

I'll work with Mike on this, but we'll keep this thread going in case any other users want to chime in with any input on this issue.

Thaison V
Applications Engineer
National Instruments

Message 4 of 13
This is sort of what I thought was going on. It seems this is a nasty consequence of the architecture of NI-DAQmx Base. For a repetitive, finite-number-of-samples situation, the DMA buffer (even though it is exactly the same size) is allocated and deallocated on every cycle. Aside from the fact that this is an expensive operation to do in the kernel, it leads to memory fragmentation there, until finally memory cannot be allocated for DMA transfers.

The problem is that the Start operation is inside the loop. The Start operation needs to be divided into a "start" and a trigger, OR the DMA allocation and setup should happen when the task is created. At that point it is already determined that the acquisition is a finite number of samples, and how many. The DMA allocation should persist UNTIL the task is destroyed. This unfortunate architectural decision is now cast in stone to prevent breaking previous work.
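
For comparison, this is exactly the separation that full NI-DAQmx (which does not exist for the Mac) exposes through task committal. A rough sketch, with placeholder device names, of the behavior I am asking for:

```c
#include <unistd.h>
#include "NIDAQmx.h"              /* full NI-DAQmx, not NI-DAQmx Base */

#define N_CHANNELS 32
#define N_SAMPLES  1250
#define SCAN_RATE  2500.0

int main(void)
{
    TaskHandle task = 0;
    static float64 data[N_CHANNELS * N_SAMPLES];
    int32 nread = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:31", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, NULL, SCAN_RATE, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, N_SAMPLES);

    /* Commit once: buffers and hardware resources are reserved here
       and persist until the task is cleared. */
    DAQmxTaskControl(task, DAQmx_Val_Task_Commit);

    for (;;) {
        /* On a committed task, start/stop only arms and disarms the
           acquisition; nothing is reallocated per cycle. */
        DAQmxStartTask(task);
        DAQmxReadAnalogF64(task, N_SAMPLES, 10.0, DAQmx_Val_GroupByChannel,
                           data, N_CHANNELS * N_SAMPLES, &nread, NULL);
        DAQmxStopTask(task);
        sleep(1);
    }

    DAQmxClearTask(task);         /* not reached */
    return 0;
}
```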

And no, I cannot do continuous acquisition, because if the buffer overflows it stops all data acquisition. I do not want that to ever happen. Not rarely; never.

Message 5 of 13
[this post is broken into 2 parts to work with the 5000 character limit of this forum]
PART 1

I am summarizing a lot of back and forth email on this to keep as a record. Perhaps someone would turn this into an application note! (hint, hint)

A continuous acquisition will not be robust; it will eventually force a reboot of the system at an unpredictable time. It is not just non-ideal, it will not work, because it requires reallocating either a large buffer a few times or a small buffer a lot of times.

For another take on this (which would help me as well), see the forum thread and this.

The other problem is that I am reading a fixed number of samples. There is NO way to find out the remaining number of samples in the buffer, unlike older versions of NI-DAQ. This means that if I fall behind, I cannot clear out the buffer as I would wish to. The value is available at a lower level but not passed back to the top-level VIs. Of course I could modify DAQ read.vi, but then I am back to mucking with the standard package again. Why was the remaining number of samples in the buffer deliberately hidden from the user? To make life simpler?

If I make my buffer too large, I am reading too many stale samples. If I make it too small, I am going to overflow it too often, fragment memory, and have to reboot again. What is the optimum buffer size, and how would I determine it? We all agree that this is a less-than-optimal solution, but what is the best way to implement it? I am at NI's mercy here, since they are the only ones who know the details of the kernel driver.

The other problem with the continuous acquisition has also been discussed in the NI forums. When one does the DAQ Read, the system spikes the CPU usage because it POLLS until the read is completed. This is bad. Very, very bad. NI-DAQmx Base tries to minimize the polling overhead by slowing down after the first 10 polling events, but that is a kludge. The usual way around this is to read the number of samples in the buffer with a read of length zero, calculate how long to wait until doing the actual read, and sleep for that time. BUT one can no longer get the number of samples in the buffer, since that value is hidden deep in the NI-DAQmx Base software, as I pointed out above.
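
For illustration, here is a rough C sketch of that usual pattern, using the available-samples query from full NI-DAQmx (DAQmxGetReadAvailSampPerChan); NI-DAQmx Base exposes no equivalent, which is exactly my complaint:

```c
#include <unistd.h>
#include "NIDAQmx.h"            /* full NI-DAQmx; Base hides this value */

#define N_CHANNELS 32
#define N_SAMPLES  2500         /* one second of data per channel */
#define SCAN_RATE  2500.0       /* per-channel sample rate */

/* Wait out most of the acquisition instead of letting the read poll:
   query how many samples are already buffered, sleep for the time the
   remainder will take, then do the actual read. */
static int32 paced_read(TaskHandle task, float64 *data, int32 *nread)
{
    uInt32 avail = 0;

    DAQmxGetReadAvailSampPerChan(task, &avail);
    if (avail < N_SAMPLES)
        usleep((useconds_t)((N_SAMPLES - avail) / SCAN_RATE * 1e6));

    return DAQmxReadAnalogF64(task, N_SAMPLES, 10.0,
                              DAQmx_Val_GroupByChannel, data,
                              N_CHANNELS * N_SAMPLES, nread, NULL);
}
```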

And when you restart the acquisition, it appears to put a spike in the data as all the channels are set up once again. I am not sure why, or whether this is a real problem, but you have to read about 100 samples and discard them before continuing.

Message 6 of 13
[this post is broken into 2 parts to work with the 5000 character limit of this forum]
PART 2

If I separate out the setup and triggering parts of the Read portion of the VI, why would this not avoid the DMA memory fragmentation? My last email asked for specific reasons to pick one of the two options over the other. I agree that 86,000 calls per day would fragment memory. BUT if I do not reallocate, and merely trigger the acquisition, would that not solve the problem? Here are the two strategies, with the pros and cons of each.

  1. Continuous acquisition with occasional timeouts that need restarting and reallocating a DMA buffer.

    1. No way to discard samples when I get behind.
    2. Glitch in data due to reconfiguration of the ADC hardware; need to discard samples.
    3. Reallocation of DMA buffer causes kernel fragmentation
    4. How to optimize DAQ buffer size is unknown. A maximum value of 2 MB seems to be enforced.

      1. Smaller buffers (2 seconds) will be more likely to overflow but easier to reallocate.
      2. Larger buffers (10 seconds) will overflow less often but be harder to reallocate.


  2. Finite data acquisition while separating out DMA and configuration functions from acquisition trigger function

    1. Not standard NI-DAQmx Base software; needs modification to the standard install
    2. NO DMA reallocation ever, leading to NO kernel fragmentation
    3. Only done for E-Series boards (that is all I needed)
    4. No glitch in data, ever.
    5. No extra samples to discard.

It seems that there are 4 drawbacks to item #1 and only 1 drawback to item #2, which has some positives. Can you tell me what reasoning leads you and the PSE to suggest that item #1 is the "only way to go"? I can see some of the drawbacks to #2, but I am not sure why they outweigh the advantages. I would like some insight from you (and possibly the PSE) on this. I am including with this email two examples of the implementations above, one using finite samples and one using continuous acquisition with a restart. They are hopefully self-explanatory. Neither is great programming style, since I tried to give them as few sub-VIs as possible to make the structure clear. Any comments on the functionality of these two would be appreciated.

Personally, I lean towards implementation #2. It seems cleaner to me and more robust. But I hesitate to contradict the *strong* suggestions from you and the PSE; that is why I am being a little insistent in asking for the deeper reasoning here.

I have requested most of these changes through the NI suggestion web page.

I need to figure out how to determine the buffer size for a continuous acquisition. I don't mind doing the work, but the procedure for doing this is completely unclear. If this is the procedure highly recommended by the PSE and yourself, I need that last bit of information.

It seems that buffer sizes of 10 seconds of data (25,000 samples/channel, 1.6 MB) make it impossible to reallocate the buffer after about 5 overflows, while buffer sizes of 1 second overflow extremely frequently. It does seem like there is a 2 MB limit on buffer allocation.
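
For the record, the arithmetic behind those numbers (2 raw bytes per E-series sample), as a trivial sketch:

```c
#include <stdio.h>

/* An E-series raw sample is 2 bytes, so a continuous buffer costs
   2 * channels * per-channel rate * seconds bytes.  NI-DAQmx Base
   appears to cap the allocation at 2 MB. */
static double buffer_bytes(int channels, double rate_hz, double seconds)
{
    return 2.0 * channels * rate_hz * seconds;
}

int main(void)
{
    /* 32 channels at 2.5 kS/s per channel (80 kS/s aggregate) */
    printf("10 s: %.0f bytes\n", buffer_bytes(32, 2500.0, 10.0)); /* 1600000 */
    printf(" 1 s: %.0f bytes\n", buffer_bytes(32, 2500.0, 1.0));  /*  160000 */
    printf("2 MB cap allows: %.1f s\n",
           2097152.0 / buffer_bytes(32, 2500.0, 1.0));            /* ~13 s */
    return 0;
}
```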

We all seem to have learned a LOT about the internals of NI-DAQmx Base!! 🙂

Message 7 of 13
I've recently run into this same problem, and I'm curious as to which solution(s) others have had the most success with.  I'll try moving the start outside the loop and see if a triggered acquisition reduces the memory allocation/deallocation.
Message 8 of 13
I changed the whole structure of the program to a continuous acquisition, so that there is a single Start outside the loop. If the loop overruns, I stop and restart the continuous acquisition. This seems to work well, though it is sort of a kludge, and it has now run reliably for quite a while.
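
Roughly, the restart logic looks like this (C API sketch with placeholder names; the real program is a LabVIEW VI). On any read error such as an overflow, the session returns, and the outer loop stops and restarts the task, discarding about 100 samples per channel to skip the reconfiguration glitch mentioned earlier:

```c
#include <unistd.h>
#include "NIDAQmxBase.h"

#define N_CHANNELS 32
#define N_SAMPLES  2500           /* one second of data per read */
#define SCAN_RATE  2500.0         /* per-channel rate */
#define N_DISCARD  100            /* startup samples to throw away */

/* One continuous-acquisition session: start once, read in a loop, and
   return on any error (e.g. buffer overflow) so the caller can stop
   and restart. */
static int32 run_session(TaskHandle task)
{
    static float64 data[N_CHANNELS * N_SAMPLES];
    int32 nread, err;

    if ((err = DAQmxBaseStartTask(task)) < 0)
        return err;

    /* discard the first samples to skip the restart glitch */
    DAQmxBaseReadAnalogF64(task, N_DISCARD, 10.0, DAQmx_Val_GroupByChannel,
                           data, N_CHANNELS * N_SAMPLES, &nread, NULL);

    for (;;) {
        err = DAQmxBaseReadAnalogF64(task, N_SAMPLES, 10.0,
                                     DAQmx_Val_GroupByChannel, data,
                                     N_CHANNELS * N_SAMPLES, &nread, NULL);
        if (err < 0)
            break;                /* overflow or other failure */
        /* ...process data here... */
    }
    DAQmxBaseStopTask(task);
    return err;
}

int main(void)
{
    TaskHandle task = 0;

    DAQmxBaseCreateTask("", &task);
    DAQmxBaseCreateAIVoltageChan(task, "Dev1/ai0:31", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxBaseCfgSampClkTiming(task, "OnboardClock", SCAN_RATE,
                              DAQmx_Val_Rising, DAQmx_Val_ContSamps,
                              N_SAMPLES * 4);

    for (;;)                      /* restart the session whenever it dies */
        run_session(task);

    DAQmxBaseClearTask(task);     /* not reached */
    return 0;
}
```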

I had the other way working as well, where I divided up the configuration and start parts of the driver. That worked too, but as discussed above, I was worried that future updates would overwrite my enhancements to the NI-DAQmx Base package.

Message 9 of 13
I don't use LabVIEW; I write in C++. I did try NI-DAQmx Base on OS X, and my conclusion was that for robust DAQ applications you need to use NI-DAQmx. That means you need to do it on Windows or (now) Linux. NI-DAQmx Base is really only suitable for relatively simple applications.
John Weeks

WaveMetrics, Inc.
Phone (503) 620-3001
Fax (503) 620-6754
www.wavemetrics.com
Message 10 of 13