06-18-2014 11:06 AM
I'm working on a large application whose memory usage will suddenly spiral out of control. I've been analyzing the Desktop Execution Trace output from a period when the memory was spiraling. My understanding (which may be very wrong, or at least incomplete) is that each memory entry for a particular handle tells you how much memory that handle represents. For an "Allocate" event, the number after "Size:" is how much memory was allocated. For a "Resize" event, the number after "New Size:" is how much memory the handle now holds. A "Free" event obviously releases the memory, and the handle may be reused later.
Is my understanding correct? If so, here is the most concerning thing I'm seeing. I've written a program that parses the output of a trace capturing all memory allocations and sorts the entries by handle. These two entries are an example of my concern:
1093116:12:23.1099782 TranslocMeasure.vi Memory Allocate 31 0 Handle: 0x329F5024; Size: 80004
6007916:12:24.3637328 Text File.lvclass:Add Data to Files.vi Memory Free 32 7 Handle: 0x329F5024; Size: 28
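The kind of handle-grouping parser described above could be sketched roughly as follows (a minimal illustration in Python, not the poster's actual program; the regular expression is modeled only on the two entries shown, and the real Desktop Execution Trace export format may differ):

```python
import re
from collections import defaultdict

# Matches the "Handle: 0x...; Size: N" / "New Size: N" tail of a trace entry.
LINE_RE = re.compile(
    r"Handle:\s*(?P<handle>0x[0-9A-Fa-f]+);\s*(?:New\s+)?Size:\s*(?P<size>\d+)"
)

def events_by_handle(lines):
    """Group memory trace entries by handle, preserving event order."""
    groups = defaultdict(list)
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        kind = ("Free" if "Memory Free" in line
                else "Resize" if "Resize" in line
                else "Allocate")
        groups[m.group("handle")].append((kind, int(m.group("size"))))
    return dict(groups)

trace = [
    "TranslocMeasure.vi Memory Allocate 31 0 Handle: 0x329F5024; Size: 80004",
    "Text File.lvclass:Add Data to Files.vi Memory Free 32 7 "
    "Handle: 0x329F5024; Size: 28",
]
print(events_by_handle(trace))
# {'0x329F5024': [('Allocate', 80004), ('Free', 28)]}
```

Grouping by handle this way makes allocate/free pairs with mismatched sizes, like the two entries above, easy to spot.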
These are consecutive events for this handle; there are no other memory events for it between the two. If I'm understanding this correctly, what happened to those 80,004 bytes of memory? Does the "Size:" on a "Memory Free" event represent how much got freed, or is it the final size of the handle?
Is this a real memory leak, or am I misinterpreting the output of this tool?
The tools in LabVIEW for tracking memory usage are so bad. Does anyone have a good method of tracking down issues like this? I've obviously used the Desktop Execution Trace. I've also tried the Profile Performance and Memory tool, but I find it to be totally erroneous: I have instances where LabVIEW is using close to 2 GB of RAM, yet when I look at the profiler, the total memory usage doesn't add up to more than a couple hundred kB.
06-19-2014 05:36 PM
I'm not sure if you have seen this yet, but this KnowledgeBase Article is a great starting point when trying to debug memory problems.
http://digital.ni.com/public.nsf/allkb/771AC793114A5CB986256CAB00079F57
06-19-2014 05:43 PM
Are you handling large arrays? Inefficient array handling could cause exponential leaps in memory usage.
06-20-2014 07:39 AM
I hadn't seen this article in particular but I'm familiar with everything it covers.
06-20-2014 07:50 AM
We are handling large arrays: we collect 8 channels at 100 kS/s continuously. I believe we do handle those arrays well. Whenever we manipulate the generated waveforms, we use In Place Element structures to ensure memory copies are not made unnecessarily, and the waveforms are heavily downsampled before being displayed. If anyone has any other helpful tips, please let me know.
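The display-side downsampling mentioned above is often done with min/max decimation so that peaks survive the reduction. A minimal Python sketch of the idea (illustrative only; not the poster's code, and LabVIEW implementations would use array primitives instead):

```python
def minmax_decimate(samples, target_points):
    """Reduce a waveform for display by keeping the min and max of each
    bucket, so short peaks are not lost (a common plotting technique)."""
    if len(samples) <= target_points:
        return list(samples)
    # Each bucket contributes two output points (its min and its max).
    bucket = max(1, len(samples) * 2 // target_points)
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append(min(chunk))
        out.append(max(chunk))
    return out

wave = [0, 5, 1, -3, 9, 2, 0, 1]
print(minmax_decimate(wave, 4))
# [-3, 5, 0, 9]
```

Because decimation produces a new, much smaller array, doing it once per display update keeps the large acquisition buffers out of the UI's data copies.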
This program has existed for a few years, and we've had varying levels of memory trouble over that time as we've continued to develop it. My issue with LabVIEW in this case is that it seems severely lacking in tools and methods for determining where all the memory is being allocated, and the tools it does have are largely ineffective.
Has anyone seen a good explanation of the trace data from desktop trace execution? I still don't know if I'm interpreting that data correctly.
06-20-2014 07:57 AM
06-20-2014 10:41 AM
We're certainly not trying to add it all up and expect it to equal Windows' accounting of memory for the LabVIEW process, and I understand that LabVIEW may not release memory right away. The problem is that usage fairly quickly spirals out of control to the point of crashing.
06-20-2014 01:45 PM
If the memory usage "spirals out of control," it sounds like something is pretty dramatically wrong in your code. How long does it take for the memory leak to crash the application?
Mike...
06-20-2014 11:50 PM
Another way memory can really spike is if you are allowing a queue to back up, for instance because the loop writing data to file can't keep up with acquisition.
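This failure mode is easy to model: whenever the producer enqueues faster than the consumer dequeues, the queue depth (and therefore memory) grows without bound even though no handle is leaked. A toy Python sketch of that dynamic (names are illustrative; real LabVIEW code would use queue VIs):

```python
from collections import deque

def simulate_backlog(produce_per_tick, consume_per_tick, ticks):
    """Toy producer/consumer model: returns the queue depth after each tick.
    If production outpaces consumption, depth grows linearly forever."""
    q = deque()
    depths = []
    for _ in range(ticks):
        q.extend([0] * produce_per_tick)            # acquisition loop enqueues
        for _ in range(min(consume_per_tick, len(q))):
            q.popleft()                             # file-writer loop dequeues
        depths.append(len(q))
    return depths

print(simulate_backlog(produce_per_tick=10, consume_per_tick=8, ticks=5))
# [2, 4, 6, 8, 10]  -- depth grows by 2 every tick
```

Sampling the queue depth periodically (LabVIEW's Get Queue Status primitive can report the number of elements) and logging it alongside process memory is a cheap way to tell whether a backlog is driving the growth.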
06-24-2014 09:46 AM
We certainly do see a queue pile up, but the question is whether that's the cause or a symptom. The loop runs well until the memory goes out of control, so I think it's a symptom.