memory leak in Digital Waveform

I have a program with a serious memory leak: it uses up all my system RAM and crashes my computer within a few hours of running.

 

The program takes an array of U16s in which each bit represents a digital signal. The VI converts each U16 to a digital array and groups the resulting 16 digital signals into buses for display on a Digital Waveform Graph. The profiler doesn't show any excessive memory usage in the VI. To isolate the problem, I put the whole VI into a Diagram Disable structure and moved a few pieces out at a time; eventually the only thing left inside the disable structure was the Digital Waveform Graph indicator. When this indicator is enabled, the system's memory usage rises slowly and steadily until all available RAM is consumed and the system crashes.
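For reference, here's a minimal Python/NumPy sketch of the conversion described above (the actual code is a LabVIEW diagram, so this is only an illustration; the two-bus grouping is an assumption):

```python
import numpy as np

# Assumed stand-in for the acquired data: one U16 per sample,
# each bit carrying one digital line.
samples = np.array([0x00FF, 0xA5A5, 0xFFFF], dtype=np.uint16)

# Unpack to one row per sample, one column per bit (line 0 = bit 0).
lines = (samples[:, None] >> np.arange(16)) & 1   # shape: (n_samples, 16)

# Hypothetical grouping: lines 0-7 on bus A, lines 8-15 on bus B.
bus_a = lines[:, :8]
bus_b = lines[:, 8:]
print(bus_a)
print(bus_b)
```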

 

If I replace the Digital Waveform Graph indicator with a cluster, the memory leak still occurs, but much more slowly. I thought the cluster had fixed the leak until I reran the VI overnight with the cluster in place of the graph.

 

If I stop the VI before all the RAM is used, the RAM is not released until I close LabVIEW entirely. Once LabVIEW closes, the memory is released slowly, tapering off exponentially, unless I use the "End Process" option in Task Manager.

 

This is a continuation of a previous post, in which I thought the memory leak was due to problems transferring data from an FPGA for display.

 

I ran the MemLeak VI (attached) on two separate systems, both running LabVIEW 2013 SP1, and got the same results. The leak is noticeably faster when the Digital Waveform Graph is the enabled case of the Diagram Disable structure, but it is still present when using the cluster of Digital Waveforms.

Message 1 of 5

Hi there,

 

One possible cause is how you're creating your array within each iteration of the loop. A similar issue is discussed here:

https://forums.ni.com/t5/Real-Time-Measurement-and/Memory-leak-in-simple-loop-to-save-data-to-array/...


The applicable fix is described in the first couple of paragraphs of the post marked as the solution.
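To make the pattern concrete, here is a hedged NumPy sketch of the difference (the linked post shows the fix in LabVIEW; the sizes here are made up):

```python
import numpy as np

N_ITER, CHUNK = 1000, 512

# Leak-prone pattern: growing the array each iteration forces a new,
# larger allocation every pass (the LabVIEW equivalent is Build Array
# inside a loop).
data = np.empty(0, dtype=np.uint16)
for _ in range(N_ITER):
    chunk = np.zeros(CHUNK, dtype=np.uint16)   # stand-in for one read
    data = np.concatenate((data, chunk))

# Preferred pattern: allocate once, overwrite in place (the LabVIEW
# equivalent is Initialize Array followed by Replace Array Subset).
buf = np.zeros(N_ITER * CHUNK, dtype=np.uint16)
for i in range(N_ITER):
    buf[i * CHUNK:(i + 1) * CHUNK] = np.zeros(CHUNK, dtype=np.uint16)
```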

 

Also, here is a general knowledge base article on memory leaks:

http://digital.ni.com/public.nsf/allkb/771AC793114A5CB986256CAB00079F57

 

John M.
Message 2 of 5

From what I can tell, all sizes are constant, so there should not be a memory leak if LabVIEW correctly reuses the memory.

 

I cannot really reproduce the memory leak in LabVIEW 2014, so maybe there was a bug that has since been fixed. Hard to say...

Message 3 of 5

Thanks for the replies.

 

In response to John's points:

1. The attached VI is a simplification of an FPGA VI that reads a fixed number of samples from a DMA FIFO using an FPGA Interface Invoke Method. I'm using a card (PXI-7842R) that doesn't support the Acquire Read Region method. To let people without an FPGA card reproduce the issue, I replaced the FIFO read with the for loop. Even assuming that this for loop leaks (which I don't believe it does; as altenbach said, it's a fixed-size allocation that LabVIEW should be able to reuse), why would the magnitude of the leak depend on which indicator I connect to the array? (See the sketch after this list.)

 

2. I've previously reviewed the document you referenced, and I don't see any of the errors it lists in my code; do you? I have no global/local variables, no strings or arrays displayed on the front panel, no property nodes, coercion dots, altered memory sizes, or resizing/reallocations. I don't see any unexpected buffer allocations. I used to have the conversion from U16 array to digital waveforms in a subVI, but I placed it on the same diagram to allow incremental use of the Diagram Disable structure.

 

3. The forum post you referenced covers many of the items discussed above, and it was solved using an RT FIFO. I'm not passing data from a producer to a consumer; I'm just displaying acquisition results. You could say I'm processing the data, but I'm really only converting it to a format the indicator will accept; I'm not operating on the data.
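Regarding point 1, here is a rough Python stand-in (assumed; the attached VI is a LabVIEW diagram) for the substitution described there: each loop iteration produces a fixed-size U16 array in place of one DMA FIFO read, so the allocation size never changes and LabVIEW should be able to reuse the buffer:

```python
import numpy as np

N_SAMPLES = 1024   # assumed fixed read size

def fake_fifo_read(n):
    """Stand-in for the FPGA Interface Invoke Method FIFO read."""
    return np.random.randint(0, 2**16, size=n, dtype=np.uint16)

for _ in range(10):                      # the real VI loops until stopped
    samples = fake_fifo_read(N_SAMPLES)  # same-size allocation every pass
    lines = (samples[:, None] >> np.arange(16)) & 1
    # ...feed 'lines' to the Digital Waveform Graph / cluster here...
```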

 

It's good to hear that the leak doesn't show up in 2014, but my SSP runs out in a couple of days, and I never got an upgrade to 2014. This is the last item remaining on the development path, and we've already spent ~$4k upgrading the controllers enough to display the acquisition without dragging down the CPU. I will be in hot water if I spent all that money and then end up having to scrap the display...

Message 4 of 5

If you are on SSP, you should be able to upgrade at any time by downloading the software.

 

(I don't guarantee that it is fixed in 2014, just that I did not see any obvious memory leaks in my limited testing of your code.)

Message 5 of 5