05-26-2015 11:21 PM - edited 05-26-2015 11:28 PM
Hi there,
I am working on a vision application, and trying to process very large images.
Environment:
OS: Windows 7 Professional SP1 64-bit
Physical RAM: 8 GB
Virtual RAM Settings:
I read in a sequence of images in the LabVIEW application and stitch them together. I observe (via Task Manager) the physical memory rise to approximately 7.6 GB, and then I get the "Out of Memory" error.
So it doesn't seem to be accessing the virtual memory I have allocated? Is there a setting for this that I am missing?
Thanks
06-17-2015 11:00 AM
Hello Chris,
As far as I can see, you are using 64-bit LabVIEW and have set an initial page-file size of 24 GB. To help you, I would like you to check this link:
http://digital.ni.com/public.nsf/allkb/AC9AD7E5FD3769C086256B41007685FA
Please let me know if this information is helpful.
06-17-2015 12:12 PM - edited 06-17-2015 12:21 PM
I'm not sure this is relevant.
I meant "I'm not sure MY POST was relevant."
06-17-2015 12:27 PM - edited 06-17-2015 12:35 PM
I understand the basics of memory paging, but I couldn't find an answer to whether, if a picture is bigger than your physical RAM, part of that image can be swapped out to virtual memory. (In other words, can you have a single chunk of data larger than your physical memory?)
My guess is that you can have a whole bunch of chunks, each smaller than your RAM, that add up to more than your RAM, but no one chunk can be bigger than your RAM (or at least RAM minus the kernel's share).
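Just to put some numbers on that guess, here is a back-of-the-envelope estimate (in Python, with hypothetical tile dimensions and pixel format, since we don't know the actual images) of how quickly a stitched image approaches 8 GB:

```python
# Back-of-the-envelope estimate of uncompressed image buffer sizes.
# The tile count, dimensions, and pixel depth below are hypothetical,
# chosen only to illustrate the scale of a stitched result.

def image_bytes(width, height, bytes_per_pixel):
    """Size of one uncompressed image buffer in bytes."""
    return width * height * bytes_per_pixel

# e.g. stitching a 5x5 grid of 4000x4000 16-bit grayscale tiles
tile = image_bytes(4000, 4000, 2)              # one tile
stitched = image_bytes(5 * 4000, 5 * 4000, 2)  # the stitched result

print(tile // 2**20, "MiB per tile")        # 30 MiB
print(stitched // 2**20, "MiB stitched")    # 762 MiB
```

Even at these modest (made-up) dimensions, a handful of intermediate copies of the stitched buffer would eat most of 8 GB.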
06-17-2015 07:50 PM
Thanks for the link Supertramp. I had read through that already.
Unfortunately it does not help. It is just a basic description.
06-17-2015 07:54 PM - edited 06-17-2015 07:54 PM
Thanks billko.
I also couldn't find any information on it, but I think you're probably right, with regard to chunks of memory. Mind you, I was dealing with several large images, not just ONE image, so I would have thought this should have worked.
I have since optimised my memory management so that I don't exceed 8 GB at the moment; however, it makes me less than confident about promoting, for future systems, the idea that LabVIEW 64-bit can use very large amounts of virtual memory...
06-17-2015 09:12 PM
@Chris_Farmer wrote:
Thanks billko.
I also couldn't find any information on it, but I think you're probably right, with regard to chunks of memory. Mind you, I was dealing with several large images, not just ONE image, so I would have thought this should have worked.
I have since optimised my memory management so that I don't exceed 8 GB at the moment; however, it makes me less than confident about promoting, for future systems, the idea that LabVIEW 64-bit can use very large amounts of virtual memory...
At one point, I thought you said you stitched them all together into one big picture. Regardless, I don't think you can do heavy image processing with only 8 GB of physical memory, whether LabVIEW is the application performing it or not.
I don't recall LabVIEW ever being promoted for being able to use large amounts of virtual memory. It's just able to access memory above 4 GB. It's still up to the user to use the available physical memory wisely.
06-17-2015 09:19 PM
Yes, sorry, you're right. I did stitch them together, but from memory (pardon the pun) the system was falling over before I even reached that point. It's as if you have to exhaust all of the DRAM before virtual memory kicks in.
This link is where it gave me the impression that LabVIEW can use large amounts of Virtual Memory:
http://www.ni.com/white-paper/10184/en/
Thanks for your feedback!
06-18-2015 01:29 AM - edited 06-18-2015 01:31 AM
LabVIEW does need a contiguous block of memory for every data structure it creates. In addition, its arrays are traditionally limited to 2^31 - 1 elements per dimension, since the count member of an array is treated as a signed 32-bit integer. This 2^31 limitation very likely doesn't come into play here, since IMAQ Vision uses its own internal memory structures and treats IMAQ Vision handles as pointers rather than LabVIEW data handles, which makes it less prone to unintentional memory copies of the data.
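For a feel of what that 2^31 - 1 per-dimension limit means in bytes, here is the arithmetic (in Python, purely illustrative; byte counts assume 8-byte doubles):

```python
# The signed 32-bit length field caps a LabVIEW array dimension at 2^31 - 1
# elements. For a 1-D array of doubles (8 bytes each), that limit alone
# already implies a multi-GiB contiguous buffer. Illustrative arithmetic only.

MAX_ELEMENTS = 2**31 - 1          # per-dimension limit from the I32 count
BYTES_PER_DOUBLE = 8
max_1d_double_bytes = MAX_ELEMENTS * BYTES_PER_DOUBLE

print(MAX_ELEMENTS)                    # 2147483647
print(max_1d_double_bytes // 2**30)    # 15 (i.e. just under 16 GiB)
```

So for ordinary LabVIEW arrays the element-count limit and the contiguous-allocation requirement bite at roughly the same scale as the images discussed in this thread.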
Basically, image memory only gets copied when explicitly calling IMAQ Vision functions or when passing an image to an image control for display in a window, and that is pretty much unavoidable. So if you create an image that uses 4 GB of memory, you quickly end up consuming 2 or 3 contiguous 4 GB blocks. Due to memory fragmentation, even if your system still has 4 or more GB of memory free, calling a function that (temporarily) causes another in-memory copy of that 4 GB image may still fail, since there is no contiguous block of that size available anymore. And yes, that data has to be in physical memory while it is actively being used.
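The fragmentation point is easy to see with a toy model (Python, sizes in GB are illustrative and not measured from LabVIEW): the total free memory can exceed a request while no single contiguous region is big enough to satisfy it.

```python
# Toy illustration of memory fragmentation: a contiguous allocation
# succeeds only if one single free region can hold the whole request,
# no matter how much memory is free in total. Sizes (in GB) are made up.

def can_allocate(free_regions, request):
    """True if any one contiguous free region fits the request whole."""
    return any(region >= request for region in free_regions)

free = [3, 2, 3]                   # 8 GB free in total, but fragmented
print(sum(free))                   # 8  -> plenty of memory "free"
print(can_allocate(free, 4))       # False -> yet a 4 GB image copy fails
print(can_allocate(free, 3))       # True  -> a 3 GB request would succeed
```

This is why an operation that makes a temporary copy of a 4 GB image can fail even on a machine that still reports several GB free.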