How do I allocate more memory to a large LabVIEW array?

I am running LabVIEW on Windows XP with 1.5 GB of RAM. I tried to initialize a large 16-bit integer 3D array in LabVIEW (roughly 1000x1000xNUM_FIELDS) using the 'Initialize Array' VI. It works fine as long as NUM_FIELDS is less than about 200 (the array then takes up approximately 400 MB of memory). If I set it higher, I get a 'memory is full' error from LabVIEW when allocating the space for the array, despite the fact that Windows tells me I have more than an extra 1 GB of physical memory to spare! I want NUM_FIELDS to be 600. It seems that LabVIEW may be too polite when requesting memory from the OS. Is there any way to up the amount of memory that LabVIEW can access for its arrays?

Thanks!
Message 1 of 5

When a program runs out of memory, it doesn't really mean that no more
memory is available; it means that the largest free block isn't large
enough for what is needed. As memory is handed out, used, and returned,
the address space gets fragmented. Think of putting food in a pantry.
Each item is a different size, and none of them can be compressed.
Then you need to place a big item into the pantry. There is plenty of
space, but each of the spaces is too small. In real life, you would
move things around to make enough room. Since many items in the
computer's memory have pointers handed out to them, they can't be moved
either. So, my point is that you may have memory left, but it is
fragmented.
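
To make that concrete, here is a minimal C sketch (not LabVIEW; the
block sizes are made up for illustration) of how one big allocation can
fail even with plenty of total free memory. Whether the final malloc
actually fails depends on the allocator and OS, but the pattern is the
point:

/* Fragmentation sketch: allocate many blocks, free every other one,
   then try one big allocation.  Sizes are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS 512
#define BLOCK   (2 * 1024 * 1024)    /* 2 MB each */

int main(void)
{
    void *blocks[NBLOCKS];
    int i;

    for (i = 0; i < NBLOCKS; i++)    /* fill the address space */
        blocks[i] = malloc(BLOCK);

    for (i = 0; i < NBLOCKS; i += 2) /* free every other block: half */
        free(blocks[i]);             /* is free again, in 2 MB holes  */

    /* One contiguous 100 MB request can now fail even though roughly
       512 MB is free in total. */
    if (malloc(100 * 1024 * 1024) == NULL)
        printf("big malloc failed: address space is fragmented\n");
    else
        printf("big malloc succeeded anyway\n");
    return 0;
}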

What to do? First, you might want to determine if it is really
necessary to have one piece of memory that is 1.2 GB large. If what you
are storing is somewhat sparse, then you might want to store a 2D array
of clusters of 1D arrays, or a 1D array of clusters of 2D arrays, or
perhaps an array of clusters of arrays of clusters of arrays. Since these
arrays don't all have to be the same length, only the amount needed is
allocated for each one. Additionally, each array is its own piece of
memory rather than being one very large contiguous piece.
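
In C terms (just an analogy for the LabVIEW array-of-cluster-of-array
layout), that ragged storage looks something like this; the 600 and
1000x1000 figures come from your post, everything else is illustrative:

/* Ragged storage: one independently sized block per field, instead
   of a single contiguous 1.2 GB 3D block. */
#include <stdlib.h>

int main(void)
{
    enum { NUM_FIELDS = 600 };
    short **field = malloc(NUM_FIELDS * sizeof(short *));
    int f;

    if (field == NULL)
        return 1;
    for (f = 0; f < NUM_FIELDS; f++) {
        /* Each field can be as short as its data requires; a sparse
           field needs far less than the full 1000 * 1000 page. */
        size_t len = 1000 * 1000;             /* or smaller, if sparse */
        field[f] = malloc(len * sizeof(short));
    }

    /* ... fill and use the fields ... */

    for (f = 0; f < NUM_FIELDS; f++)
        free(field[f]);
    free(field);
    return 0;
}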

If you do need a 3D array and won't gain anything from sparse data, then
you should simply break the data up into smaller arrays either in time
or in channel so that the 1.2 GB is in smaller pieces. If you can
describe more about what you need this for, then we can make better
suggestions.
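
A rough C sketch of that "smaller pieces" idea: only one 2 MB field
page is resident at a time instead of the whole 1.2 GB. The load_field
and process_field routines are hypothetical stand-ins for your
acquisition and analysis:

/* Process the 3D data one 1000x1000 field at a time. */
#include <stdlib.h>

enum { ROWS = 1000, COLS = 1000, NUM_FIELDS = 600 };

/* Hypothetical stand-ins for reading and analysing one field. */
static void load_field(int f, short *page)    { (void)f; (void)page; }
static void process_field(int f, short *page) { (void)f; (void)page; }

int main(void)
{
    int f;
    for (f = 0; f < NUM_FIELDS; f++) {
        short *page = malloc((size_t)ROWS * COLS * sizeof(short));
        if (page == NULL)
            return 1;
        load_field(f, page);     /* e.g. read field f from disk */
        process_field(f, page);
        free(page);              /* only ~2 MB ever resident */
    }
    return 0;
}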

Greg McKaskle
Message 2 of 5
Thanks, Greg. The nature of memory fragmentation is definitely interesting, but there are more than a few times when splitting the data is not an option. I have been trying to process some very large images with IMAQ and have faced similar memory problems.

I can happily load a 300 MB image with the IMAQ ReadFile VI, but when I try to load larger images (e.g. 500 MB), LabVIEW gracefully shuts down with a "Not enough memory for requested operation" error (-1074396159). The first thing I did was increase my RAM from 1 GB to 2 GB, but this made absolutely no difference.

Would love to know how I can use a little more of the memory that I have installed!

Chris Virgona
Message 3 of 5
> Would love to know how I can use a little more of the memory that I
> have installed!

And when the OSes and applications move to 64-bit addressing, we should
all be able to. Until then, you are working awfully close to the amount
of virtual address space that NT can give a process. NT is a 32-bit OS,
which lets it address 4 GB. By default, each process gets 2 GB of that
for its own code, data, and stack, and the other 2 GB is reserved for
the kernel. NT can also be booted with the /3GB switch, which gives
large-address-aware applications 3 GB of user address space, though I've
never looked at how to configure it.

Anyway, my point is that you are reaching a limit of the OS virtual
memory system. When this happens, you can't rely on it, and the
application has to handle it. Since LV doesn't know much about your
application, it is actually up to the person writing the diagram.
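
If you want to see the ceiling directly, a small C program against the
Windows API shows it (GlobalMemoryStatusEx is a standard kernel32 call;
sketch only):

/* Print physical RAM versus per-process virtual address space. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms))
        return 1;
    printf("avail physical RAM : %I64u MB\n", ms.ullAvailPhys    / (1024 * 1024));
    printf("total virtual space: %I64u MB\n", ms.ullTotalVirtual / (1024 * 1024));
    printf("avail virtual space: %I64u MB\n", ms.ullAvailVirtual / (1024 * 1024));
    /* On 32-bit NT the total virtual figure stays at about 2 GB no
       matter how much RAM is installed; that is the limit being hit. */
    return 0;
}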

As I mentioned before, the options are to work on a portion of the data
at a time. Process the top 200 MB of the image, then the next 200 MB,
and so on, then combine the results. This may involve overlapping the
data being processed, depending on what processing is going on.
This is done all the time with acquired waveforms, where the infinite
or very long waveform is chopped, windowed, processed, and the results
are then combined. It makes the algorithm more complex, but reality
often encourages people to do clever things.
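
A bare-bones C sketch of that chop-with-overlap pattern, streaming a
long record from disk; process_chunk and the file name are placeholders:

/* Chunked processing with overlap: each pass sees OVERLAP samples of
   context from the previous pass, so features that straddle a chunk
   boundary are not missed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK   (1 << 20)   /* samples read per pass */
#define OVERLAP 1024        /* samples carried between passes */

static void process_chunk(const short *buf, size_t n)
{
    (void)buf; (void)n;     /* placeholder for the real analysis */
}

int main(void)
{
    FILE  *fp  = fopen("record.bin", "rb");   /* placeholder file */
    short *buf = malloc((CHUNK + OVERLAP) * sizeof(short));
    size_t got, carry = 0;

    if (fp == NULL || buf == NULL)
        return 1;
    while ((got = fread(buf + carry, sizeof(short), CHUNK, fp)) > 0) {
        size_t total = carry + got;
        process_chunk(buf, total);
        carry = (total < OVERLAP) ? total : OVERLAP;
        /* Slide the tail of this chunk to the front for the next pass. */
        memmove(buf, buf + total - carry, carry * sizeof(short));
    }
    free(buf);
    fclose(fp);
    return 0;
}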

If you can be more specific about the contents of the image and the
analysis, there is almost always a way. There are tons of algorithms
that trade space for time and allow processing to be done
incrementally on huge datasets, but they get very specific and can't
always be used in the general case.

Greg McKaskle
Message 4 of 5
Many thanks for your reply, Greg.
Just thought I would fill you in on a few of the details of what I am doing - maybe you have some more useful information! 🙂

I am working with 24-bit uncompressed TIFF images at up to 15000x12000 pixels, resulting in files of around 0.5 GB. I am doing some processing on the intensity component of the image (mostly feature detection) which needs the high resolution but only requires 8-bit data, and also some color processing which needs the full bit depth but only half the resolution. In this sense the problem is separable; however, the source images must be processed online, so I can't just break them up in Photoshop.

The bottom line is that the IMAQ toolkit is unable to cope with the size of the source data, and to make matters worse, I need to have two images (or regions) in memory at the same time to analyse differences, etc.

My current solution is to use a third party SDK (Leadtools) to decompose the images into smaller components using a series of ActiveX calls. This is working OK for now but is a bit cumbersome. I have observed that the performance of IMAQ is generally better than Leadtools but of course is limited to smaller images (up to 300MB).

My question then is: do you know of a way to read a TIFF file bit by bit using IMAQ? There doesn't appear to be any way to do this directly; however, I have considered breaking the original TIFF file in half programmatically and modifying/replicating the header info. Any other ideas? Oh - and did I mention that processing time is limited too? 🙂
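
To illustrate what I mean by reading the file piece by piece, here is a
rough C sketch using the open-source libtiff library (not IMAQ); the
file name is a placeholder, and this assumes the scanlines can be read
individually, which holds for uncompressed TIFFs like mine:

/* Read a large TIFF one scanline at a time with libtiff, so the whole
   0.5 GB image never has to be resident at once. */
#include <tiffio.h>

int main(void)
{
    TIFF *tif = TIFFOpen("huge.tif", "r");   /* placeholder name */
    uint32 width = 0, height = 0, row;
    tdata_t buf;

    if (tif == NULL)
        return 1;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH,  &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);
    buf = _TIFFmalloc(TIFFScanlineSize(tif));
    if (buf == NULL) {
        TIFFClose(tif);
        return 1;
    }

    for (row = 0; row < height; row++) {
        TIFFReadScanline(tif, buf, row, 0);
        /* hand this row (or a band of rows) to the processing step */
    }

    _TIFFfree(buf);
    TIFFClose(tif);
    return 0;
}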

Chris
Message 5 of 5