
LabVIEW Idea Exchange

NASA Matt

Automatic Memory Management Upgrade Needed

Status: Declined

Too many times, a generic "Out of Memory" error pops up without explanation, source, or traceability. Sometimes it occurs intermittently when executing the exact same process. Tracking these mystery errors down takes more time than it should and undercuts the efficiency gains the automatic memory manager within LabVIEW is designed to provide. After some research and help from an application engineer, it is apparent that the memory manager is not well suited to modern PCs and OSs when larger amounts of data need to be processed.

 

LabVIEW should be able to use all the application memory offered by the OS, not just the contiguous parcels it is lucky enough to find. Not only should it be able to use fragmented virtual memory, but it should also be able to exploit more than just 75% of a 1 GB application segment, particularly when 16 GB is installed on the motherboard.

 

For example, simple arrays of I16s are sometimes denied when they are only tens of MB in length, and denied every time when they reach hundreds of MB. That doesn't come close to the available memory capacity of the PC. Granted, those arrays are large compared to VIs written for simple GPIB devices twenty years ago, but the need for larger arrays is now more prevalent with high-speed data acquisition and high-resolution imaging.

 

Why can't the memory manager grow with the latest PC memory capacities, motherboard architectures, modern OSs, and modern instruments that can acquire and transmit data with those array sizes? Isn't it time to challenge the need for contiguous memory? Can't more intelligence be added to the memory management strategy so that large arrays are not copied redundantly, causing "out of memory" errors? Can't a memory manager work within the fragmented virtual memory space of a Windows OS without having to reboot? Shouldn't it adapt to the OS environment instead of needing to prevent every other application from running in order to statistically gain more contiguous memory? Can't better automatic tracing and error messaging be delivered to the programmer to prevent so much wasted time?

 

I have been impressed by the quality of service and the detail of the online help for tiptoeing around these limitations. However, it seems time to graduate from building contraptions to avoid the problem and instead apply that effort toward solving it. Are there plans to issue a new automatic memory manager that exploits the potential of modern PCs and OSs?

19 Comments
X.
Trusted Enthusiast

I am currently fighting with this "out of memory" issue, so I feel your pain. In my case, I even get LabVIEW to pop up a goodbye window of the kind "LabVIEW encountered an unforeseen error during the past session. An automated report has been generated, do you want to send it?". I am constantly monitoring the task manager Memory Usage window and dread the stepwise increases bringing me closer to the physical memory limit (that of Windows XP, which is more like 3 GB than 4 GB). I seem to remember that MATLAB or Comsol Multiphysics have the same issue though, so I am not sure there is much specific to LabVIEW to blame. Would a piece of C code do a better job without YOU checking the results of your mallocs and adjusting for failures? Just curious...
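As a rough illustration of that last question, here is a minimal C sketch (the size is illustrative, not taken from any specific application): the code dutifully checks its malloc result, but a 32-bit process with no sufficiently large contiguous free region still gets NULL back, so checking only changes how the failure is reported, not whether it happens.

/* Hedged sketch: even C code that checks every malloc hits the same wall;
 * a 32-bit process may simply lack a large enough contiguous free region,
 * no matter how much RAM is installed. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 64 * 1000 * 1000;          /* ~64 million I16-sized elements */
    short *buf = malloc(count * sizeof *buf); /* one contiguous request */

    if (buf == NULL) {
        /* The failure is "handled", but the data still has nowhere to go. */
        fprintf(stderr, "malloc of %zu bytes failed\n", count * sizeof(short));
        return EXIT_FAILURE;
    }

    puts("allocation succeeded");
    free(buf);
    return EXIT_SUCCESS;
}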

altenbach
Knight of NI

You left out a lot of crucial information in your problem description!

 

Are you running 64-bit LabVIEW? (32-bit LabVIEW, like any 32-bit application, cannot use more than 4 GB on a 64-bit OS, and even less on a 32-bit OS (details).) If you run your program with smaller data structures, can you estimate how many extra copies of your large arrays are in memory? There are many programming guidelines to keep large arrays from multiplying like rabbits due to bad coding practices, such as constant reallocations caused by array resizing. Excessive indicators, with their transfer buffers, also use extra memory copies. Look into data value references. Keep the front panels of subVIs closed and make sure nothing forces those panels into memory (e.g. property nodes).
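As a rough C analogue of the resizing advice above (the function names are invented for illustration), the first pattern reallocates and copies on every append, much like building an array inside a loop, while the second allocates once at the final size and fills in place:

#include <stdlib.h>

/* Anti-pattern: reallocate on every append; each call may copy the data. */
short *grow_per_element(size_t n)
{
    short *buf = NULL;
    for (size_t i = 0; i < n; i++) {
        short *tmp = realloc(buf, (i + 1) * sizeof *buf);
        if (tmp == NULL) { free(buf); return NULL; }
        buf = tmp;
        buf[i] = (short)i;
    }
    return buf;
}

/* Preferred: allocate once at the final size, then replace elements in place
 * (the textual equivalent of Initialize Array followed by Replace Array Subset). */
short *preallocate(size_t n)
{
    short *buf = malloc(n * sizeof *buf);
    if (buf == NULL) return NULL;
    for (size_t i = 0; i < n; i++)
        buf[i] = (short)i;
    return buf;
}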

 

Operations on large arrays need to be efficient. I don't know how much effort it would take to allow noncontiguous arrays in memory and maybe the performance hit would be severe.


In any case, this seems to be your first forum post and we don't really know your LabVIEW skill level (don't be offended if we over- or underestimate it ;)). I would recommend posting in the LabVIEW forum so we can help troubleshoot your memory errors more systematically. Maybe there is a simple solution.

 

 

kegghead
Member

Well, any 32-bit executable will be bound by the same memory limitations, regardless of runtime. One of my LabVIEW applications tends to bog down and become sluggish after about a 2 GB memory load when running 32-bit code, but the 64-bit code stays very snappy when running on the same system. Similarly I can start to see out of memory errors as the 32-bit code starts to creep beyond 3 GB.

 

I don't see this as a problem though, other languages are the same. Arrays require continuous memory, it's that very feature that allows such fast random access and keeps a whole swathe of functionality lightning fast compared to other constructs. If you have large arrays that are causing you problems, perhaps the better question is whether an array is the right data structure?
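A short C sketch of why contiguity buys that fast random access (purely illustrative): indexing reduces to base-plus-offset pointer arithmetic, which only works when the elements sit back to back in memory.

#include <stddef.h>

/* a[i] compiles to one multiply and one add: address = base + i * sizeof(short).
 * That constant-time lookup is only possible because the array is contiguous. */
short value_at(const short *base, size_t i)
{
    return base[i];
}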

 

For what it's worth, my main gripe with memory management in LabVIEW is more how difficult it is to track where memory is being used. The fact that memory metrics are so hard to come by means designing a well-managed application can take a lot of experience once your dataspace starts to run into memory restrictions.

NASA Matt
Member

So far, this obstacle has been investigated with 32-bit LabVIEW 2009 on 32-bit Windows XP, SP3, on a motherboard with 3.3 GB of RAM.  The problem can usually be duplicated entirely within LabVIEW without using external hardware or interfaces.

 

In a blank VI, just initialize an array of I16 (16-bit integers) with a size of 64,000,000. Create a simple indicator for the output of the array on the front panel. It should run out of memory almost immediately, even if Windows has over 2 GB of RAM free. You would think the array should be about 122 MB and the indicator another 122 MB; 244 MB does not seem like a high demand on the OS.

 

If your demo doesn't exhibit the out-of-memory error, copy the front-panel indicator to the clipboard (Ctrl-C) as if you were going to paste it onto another window; that is usually enough to trigger it. If it still doesn't fail, adjust the size from 64,000,000 to another number to see where the threshold is on your particular machine/OS. I have found the number can be as low as 200,000.
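A hedged C sketch of the same probing idea, outside LabVIEW (the starting size is illustrative): request a single contiguous block and back off until the allocation succeeds, which reports roughly where the threshold sits on a given machine.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t elements = 64 * 1000 * 1000;     /* start at 64 million I16s */

    while (elements > 0) {
        short *buf = malloc(elements * sizeof *buf);
        if (buf != NULL) {
            printf("largest single allocation that succeeded: %zu elements (~%zu MB)\n",
                   elements, elements * sizeof(short) / (1024 * 1024));
            free(buf);
            break;
        }
        elements /= 2;                      /* halve the request and retry */
    }
    return 0;
}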

 

This is my first post but I have used LabVIEW for over twelve years at an intermediate level.  I have attended multiple training sessions but am not certified.  I have created and modified dozens of projects. I feel I have an adequate sample of different architectures and hardware and have invested over a thousand hours programming and debugging in LabVIEW.

 

Memory management shouldn't be this limited, be hit-or-miss, require such contortions, or require you to go to an alternate platform like C. If the memory manager is provided to streamline memory allocation within an OS, then why can't it be relied upon with larger data sets, or at least tell you what its exact limits are and exactly when and how it concluded there is not enough memory? Our time should be spent on the function of the application, not the nuts and bolts of how the underlying memory manager works or doesn't work with the OS.

Brian_Powell
Active Participant

I just created an array of 64 million doubles (8-byte floating point values) in my 32-bit LabVIEW 2011, and LabVIEW seems quite happy.  I haven't rebooted in days, though I've only been running LabVIEW for a few hours.  This diagram is using 512 million bytes for the execution data, with a second copy for the front panel operate data.  (And for those not familiar with the difference, you might consider taking the LabVIEW Performance training course.)

 

[Attached image: buildarray.png]

 

But you're right that I can change 64M to a larger value and at some point this will fail--probably well shy of the 2G boundary.

 

In my view, we created 64-bit LabVIEW to relieve the pressure so that we don't have to squeeze all the bytes out of the 2 GB address space. Is there any reason you can't switch to 64-bit?

 

GregR
Active Participant

If we look at the usable address space of 32-bit LabVIEW on 32-bit Windows, it doesn't come out near what you might think. Most apps will start failing even small allocations when the task manager is reporting a memory usage of about 1.7 GB. This is because Windows doesn't give the application access to the full 4 GB address space. It immediately cuts that in half, keeping the top 2 GB for itself, and part of the remainder is off limits for things like memory-mapped IO.

 

Now, within that remaining address space, Windows memory-maps every DLL that is loaded by your process. So take out memory for LabVIEW.exe and all the DLLs that are used by the editor or referenced from your VIs (DAQ, analysis, ...). Not only does this memory mapping use up address space, but it fragments the space, so the number of large allocations possible is further reduced. The last time I seriously looked at this (a few releases ago), I found that with some common DLLs loaded there were only a couple of large contiguous blocks left. This seriously limits what LabVIEW can do with large arrays because we allocate arrays as contiguous blocks.
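To make the fragmentation point concrete, here is a hedged Win32 C sketch (not an NI tool): it walks the process address space with VirtualQuery and reports the largest free contiguous region, which is the hard ceiling on any single array allocation regardless of how much total memory is free.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T largest_free = 0;

    /* Step through every region of the address space. */
    while (VirtualQuery(addr, &mbi, sizeof mbi) == sizeof mbi) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largest_free)
            largest_free = mbi.RegionSize;
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        if (addr == NULL)                    /* wrapped around: done */
            break;
    }

    printf("largest free contiguous region: %lu MB\n",
           (unsigned long)(largest_free / (1024 * 1024)));
    return 0;
}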

 

As Brian said, 64-bit LabVIEW will give you access to much more address space, but there are actually steps you can take without that. Simply running 32-bit LabVIEW on 64-bit Windows helps because Windows does not claim the top half of the address space in this case. That means we at least start with about 3.7 GB before loading DLLs. You can also turn on an option in 32-bit Windows so the OS only takes the top 1 GB for itself instead of 2. You can learn more about this switch on ni.com. (Brian may not have been thinking about these and had more success with his test because he was doing one of them.)
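A small hedged sketch of how to check those numbers on a given setup (standard Win32 calls, nothing NI-specific): GlobalMemoryStatusEx reports how much user address space the process actually received, roughly 2 GB by default on 32-bit Windows, about 3 GB with the boot option mentioned above, and close to 4 GB for a large-address-aware 32-bit process on 64-bit Windows.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { 0 };
    ms.dwLength = sizeof ms;

    if (GlobalMemoryStatusEx(&ms))
        printf("total user address space: %llu MB, still unreserved: %llu MB\n",
               (unsigned long long)(ms.ullTotalVirtual / (1024 * 1024)),
               (unsigned long long)(ms.ullAvailVirtual / (1024 * 1024)));
    return 0;
}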

Norbert_B
Proven Zealot

This discussion is as old as "address management" in operating systems (in this case, more specifically, "memory management"). While I often find the discussion interesting, it always comes down to two main issues:

1. Developers often do not account for efficient resource management (re-use). This leads directly to issue 2, but is not limited to it. Inefficient use of memory also implies unneeded copies of data sets, amplifying issue 2. Proper optimization can sometimes make the issue disappear, but there is no guarantee.

2. Memory can be addressed in blocks only. You cannot address a specific memory cell in RAM; you have to use a whole block. To make things worse, certain data sets (arrays) require the memory to be contiguous. This constraint is true for all programming languages I know.

Together, these two facts produce the issue called "fragmentation", which you are obviously already aware of. But it seems that the effect of fragmentation is not clear...

 

There are tools available like VMMap which can break down memory usage of a process in order to visualize fragmentation.

 

I understand that you request, in essence, that LV perform fragmentation checks before trying to allocate memory and break the required data space into smaller pieces if necessary.

I have to decline this request (if I had any say in it...) because:

- there is already a LabVIEW tool if you need this for large data sets: the Fragmented array library (sketched conceptually after this list).

- if this were done for all memory allocations, allocation would slow down by orders of magnitude, effectively making many "working" applications nearly unusable. That must not happen.
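For readers unfamiliar with the approach, here is a hedged conceptual sketch in C of what a fragmented-array scheme looks like (the names and chunk size are illustrative, not the actual library's API): one logical array is stored as many smaller chunks, so no single allocation has to be contiguous, at the cost of an extra indexing step.

#include <stdlib.h>

#define CHUNK_ELEMS (1u << 20)              /* 1M elements per chunk (illustrative) */

typedef struct {
    short **chunks;                         /* array of chunk pointers */
    size_t  n_chunks;
    size_t  n_elems;
} frag_array;

/* Allocate the logical array as many small blocks instead of one big one.
 * On failure, already-allocated chunks are left for the caller to free. */
static int frag_init(frag_array *fa, size_t n_elems)
{
    fa->n_elems  = n_elems;
    fa->n_chunks = (n_elems + CHUNK_ELEMS - 1) / CHUNK_ELEMS;
    fa->chunks   = calloc(fa->n_chunks, sizeof *fa->chunks);
    if (fa->chunks == NULL) return -1;

    for (size_t c = 0; c < fa->n_chunks; c++) {
        fa->chunks[c] = malloc(CHUNK_ELEMS * sizeof **fa->chunks);
        if (fa->chunks[c] == NULL) return -1;
    }
    return 0;
}

/* Indexing costs an extra divide/modulo compared to a plain array
 * (the performance trade-off mentioned above). */
static short frag_get(const frag_array *fa, size_t i)
{
    return fa->chunks[i / CHUNK_ELEMS][i % CHUNK_ELEMS];
}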

 

Norbert

X.
Trusted Enthusiast

@Norbert_B: your second link is broken, it should be Fragmented array library.

Kudos to all the blue guys for chipping in! The take-home lesson is: switch to 64-bit Windows and buy more RAM...

Nonetheless, besides the "Out of Memory" messages which ungracefully interrupt execution, the LV crashes I am experiencing myself are a bit over the top.

AristosQueue (NI)
NI Employee (retired)

> the LV crashes I am experiencing myself are a bit over the top

 

Making a program written in C++ resilient against out-of-memory problems is hard, and most programmers today writing for desktop systems just assume memory is available. Thus, when you actually do start playing close to that boundary, most programs become unstable. LV is worse than most, I agree, but in your case the crashes may very well be another symptom of too few system resources to complete necessary tasks. A lot of the memory that a VI allocates while running remains allocated when the VI stops running, so the editor is at the mercy of the VIs you're working on.

X.
Trusted Enthusiast

I wonder whether it would make sense to have a set of array function primitives with error input and output (all with MANDATORY in AND out connections) so that the user could avoid these "out of memory" interruptions/crashes. Memory block not allocated >> error output >> the calling VI handles the error, or, since the errors are passed down to the following functions, nothing happens anyway?
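A hedged C sketch of the semantics being proposed (the function name and error code are hypothetical, not existing LabVIEW primitives): an allocation failure becomes an error value that flows downstream, much like a LabVIEW error cluster, instead of a dialog or a crash.

#include <stdlib.h>

typedef struct {
    int         code;                       /* 0 = no error */
    const char *source;
} lv_error;

/* Mimics an "Initialize Array" primitive with error-in/error-out terminals:
 * does nothing if error-in is already set, and reports allocation failure
 * through error-out instead of aborting. */
short *init_array_i16(size_t n, lv_error err_in, lv_error *err_out)
{
    *err_out = err_in;
    if (err_in.code != 0)
        return NULL;                        /* pass the incoming error through */

    short *buf = calloc(n, sizeof *buf);
    if (buf == NULL) {
        err_out->code   = 2;                /* illustrative "out of memory" code */
        err_out->source = "init_array_i16";
    }
    return buf;                             /* the caller decides how to recover */
}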