Cannot allocate matrix (2000x500x500) of U8

When trying to allocate a 3D matrix (using the 'Initialize Array' function), the message 'not enough memory to complete this operation' appears. Bigger arrays cause LabVIEW to display another message: 'LabVIEW: Memory is full. VI "name.vi" was stopped at Initialize Array 0x4DC of subVI "name.vi".' Sometimes the VI causes LabVIEW to close. Allocating in a loop does not work either.
Is there any possibility to allocate such big arrays?




Message 1 of 9
Hello Mitra,

Keep in mind that you're trying to allocate 500 MB with such a matrix:
2000 * 500 * 500 = 500,000,000 bytes. That's a lot, don't you think?
Take a look at the performance monitor of your system to check the amount of memory that you have available...

Do you really need such a big array?

Paulo
Message 2 of 9
Hi Mitra,

in addition to what Paulo has already pointed out, namely that this is a really large set of data, it's important to be aware that LabVIEW allocates and reserves the entire memory required for the array, even if it's 99% empty. Add to this the fact that the entire address range for the array needs to be allocated as ONE contiguous block. If you have 2 gigs of memory in your machine, you might be able to pull off a 500 MB block like this.
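
In plain C terms (LabVIEW's G code can't be quoted as text, so treat this as a rough, hedged analogue), the point is that the whole array must come from one contiguous allocation, which can fail even when enough total memory is free:

    #include <stdio.h>
    #include <stdlib.h>

    /* Rough C analogue of 'Initialize Array' for a 2000x500x500 U8
     * array: the data needs ONE contiguous 500,000,000-byte block of
     * address space, not just 500 MB of free memory in pieces. */
    int main(void)
    {
        size_t bytes = (size_t)2000 * 500 * 500;   /* 500,000,000 bytes */
        unsigned char *block = malloc(bytes);      /* must be one block */
        if (block == NULL) {
            /* Analogous to LabVIEW's "memory is full": no single free
             * address range of this size exists. */
            fprintf(stderr, "cannot allocate %zu bytes\n", bytes);
            return 1;
        }
        free(block);
        return 0;
    }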

If you're working with arrays of arrays where each array may have a different length, try creating an array of clusters, each of which holds a single 1D array. This creates a bit more overhead when working with the data, but it allows LabVIEW to 1) fragment the arrays in memory (someone correct me if I'm wrong here) and 2) maintain each array with a different length, which isn't possible in an X-dimensional array.
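
In C terms, the cluster-of-arrays layout corresponds roughly to a jagged array: an array of small structs, each pointing at an independently sized buffer. This is only a sketch (all names are invented for illustration), but it shows why the memory no longer has to be contiguous:

    #include <stdlib.h>

    /* One "cluster" holding a single 1D array of U8. */
    typedef struct {
        size_t         len;
        unsigned char *data;
    } Row;

    /* Allocate nrows rows, each with its own length. Every row is a
     * separate small block, so the allocator can place them in
     * whatever free fragments it finds, and each row may have a
     * different length. */
    Row *alloc_jagged(size_t nrows, const size_t *lengths)
    {
        Row *rows = malloc(nrows * sizeof *rows);
        if (rows == NULL)
            return NULL;
        for (size_t i = 0; i < nrows; i++) {
            rows[i].len  = lengths[i];
            rows[i].data = malloc(lengths[i]);
            if (rows[i].data == NULL) {       /* roll back on failure */
                while (i-- > 0)
                    free(rows[i].data);
                free(rows);
                return NULL;
            }
        }
        return rows;
    }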

Here's a pic of what I mean.

Hope this helps

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 3 of 9
Hi shoneill,

Your method worked, but the time it takes to allocate memory for the data is not really acceptable (several seconds)... I will look for other methods. Thanks anyway...
Message 4 of 9
You should really re-think your approach. It simply cannot be healthy, even on today's top-end computers, to allocate 500 MB for a single data set. Keep in mind that any kind of data manipulation or data display will probably cause additional data copies in memory, so you'll be starved for memory even with multiple GB of RAM. (For example, have a look at this thread.)

If you have LabVIEW 7.1, you should also carefully study your code while displaying buffer allocations and memory usage. Make your array 10x-100x smaller and run your VI. How does the memory footprint compare to what you expect? Calculate how many data copies there are.

What is your OS (Windows, Mac, Linux, etc.)? What kind of computer is this? How much RAM do you have?

About the slow allocation (personally, I don't think several seconds is very slow; how fast do you expect this to be?): How is your virtual memory configured? Often, such huge allocation tasks will cause Windows to dynamically increase the swap file size, a very slow operation, especially if the swap file is fragmented. To prevent this resizing operation, set the minimum swap file size to a very generous value (and possibly defragment your HD before doing so).

Maybe you can tell us a little bit about your project and the justification for the absolute need to keep all this in memory. Hopefully we can come up with a better approach. 🙂
Message 5 of 9
Hi, thanks for the responses.


> Maybe you can tell us a little bit about your project and the justification for the absolute need to keep all this in memory. Hopefully we can come up with a better approach. 🙂

I write an application for analyzing and displaying data.



> If you have LabVIEW 7.1, you should also carefully study your code while displaying buffer allocations and memory usage. Make your array 10x-100x smaller and run your VI. How does the memory footprint compare to what you expect? Calculate how many data copies there are.

The data is kept in a circular 3D buffer whose size can be changed dynamically. I do not use any local variables; all allocations and deallocations of memory are done in subVIs with the 'Request Deallocation' flag set to true. Anyway, the problem I have is not about multiple copies of the same buffer but about the possibility of allocating the buffer at all (see pictures in my first post).
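
In C terms, the reallocation cost disappears if the circular buffer is allocated once at its maximum capacity and then only overwritten in place. A hedged sketch, where a "frame" stands in for one 500x500 U8 page and all names are invented for illustration, not taken from the actual VI:

    #include <stdlib.h>
    #include <string.h>

    #define FRAME_BYTES ((size_t)500 * 500)   /* one 500x500 U8 page */

    typedef struct {
        unsigned char *storage;  /* capacity * FRAME_BYTES, allocated once */
        size_t capacity;         /* maximum number of frames, fixed up front */
        size_t head;             /* index of the next frame to overwrite */
        size_t count;            /* frames currently held */
    } Ring;

    int ring_init(Ring *r, size_t capacity)
    {
        r->storage  = malloc(capacity * FRAME_BYTES);
        r->capacity = capacity;
        r->head     = 0;
        r->count    = 0;
        return r->storage != NULL;
    }

    void ring_push(Ring *r, const unsigned char *frame)
    {
        /* Overwrite the oldest frame in place: no realloc, no copy of
         * the whole buffer, just one frame-sized memcpy. */
        memcpy(r->storage + r->head * FRAME_BYTES, frame, FRAME_BYTES);
        r->head = (r->head + 1) % r->capacity;
        if (r->count < r->capacity)
            r->count++;
    }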



> What is your OS (Windows, Mac, Linux, etc.)? What kind of computer is this? How much RAM do you have?
> About the slow allocation: How is your virtual memory configured? Often, such huge allocation tasks will cause Windows to dynamically increase the swap file size, a very slow operation, especially if the swap file is fragmented. To prevent this resizing operation, set the minimum swap file size to a very generous value (and possibly defragment your HD before doing so).

I have an AMD Athlon64 3200+ with 1 GB of RAM and 8 GB of swap (2 disks, with a 4 GB swap file on each). About the allocation: allocating a plain array is rather fast; the problem with time appeared when I tried to allocate an array of clusters of arrays (in every possible combination of dimensions: a 1D array of clusters with 2D arrays, and a 2D array of clusters with 1D arrays).

In fact, maybe there is a systematic error in my conception of the application. The most important thing in it is speed. That is why I wanted to avoid disk operations and keep everything in memory. But of course this is not possible in certain cases. I will now try to keep most of the unused data on disk, and keep only the most important part of the data in memory.
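
A hedged C sketch of that on-disk approach, assuming the data can be processed one 500x500 frame at a time (the file name and the per-frame work are placeholders, not details from the actual application):

    #include <stdio.h>
    #include <stdlib.h>

    #define FRAME_BYTES ((size_t)500 * 500)

    int main(void)
    {
        FILE *f = fopen("data.bin", "rb");   /* hypothetical data file */
        if (f == NULL)
            return 1;

        /* Only one 250 KB frame is resident instead of the full 500 MB. */
        unsigned char *frame = malloc(FRAME_BYTES);
        if (frame == NULL) {
            fclose(f);
            return 1;
        }

        while (fread(frame, 1, FRAME_BYTES, f) == FRAME_BYTES) {
            /* analyze/display this frame here */
        }

        free(frame);
        fclose(f);
        return 0;
    }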

Thanks for your help
Mitra
Message 6 of 9
Hi Mitra,

one important point in your last post is the fact that your array can change size dynamically. Doing this with such a large array will quite often (if not always) result in a lot of copying in memory. So you will have problems with copies of your arrays, even if somewhat indirectly.
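
A short C sketch of why growing a large array costs so much (plain C, not LabVIEW internals; the function name is invented): if the block cannot be extended in place, a new block is allocated and every old byte is copied, so the peak footprint is briefly old size plus new size:

    #include <stdlib.h>
    #include <string.h>

    unsigned char *grow_buffer(unsigned char *buf,
                               size_t old_bytes, size_t new_bytes)
    {
        /* realloc may move the block; for a 500 MB buffer that means a
         * 500 MB copy and, momentarily, old_bytes + new_bytes allocated. */
        unsigned char *bigger = realloc(buf, new_bytes);
        if (bigger == NULL)        /* the old buffer is still valid here */
            return NULL;
        memset(bigger + old_bytes, 0, new_bytes - old_bytes);  /* zero growth */
        return bigger;
    }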

Hope this helps

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 7 of 9
Mitra,

You can probably pull this off with a bit of patience. I have successfully allocated and used larger arrays (almost a gigabyte) on a machine with 512MBytes of RAM. Check out the tutorial "Managing Large Data Sets in LabVIEW". It shows you how to handle such things and includes code to do almost exactly what you are attempting (see the memory browse example). It still won't be easy, but you should be able to do it, provided your analysis can be done in pieces (most can). Even though the limit for Windows is 2GBytes/process, you will find a practical limit at about 1GByte total memory in LabVIEW. With this limit, you will be unable to have more than one copy of your data without running out of memory.
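
As an illustration of analysis done in pieces, here is a hedged C sketch (the histogram statistic is chosen arbitrarily): a whole-data-set result accumulated one frame at a time, for example inside a streaming read loop like the one sketched earlier, so the full 500 MB never has to be resident:

    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_BYTES ((size_t)500 * 500)

    /* Accumulate a U8 histogram over one frame; call once per frame. */
    void accumulate_histogram(const unsigned char *frame, uint64_t hist[256])
    {
        for (size_t i = 0; i < FRAME_BYTES; i++)
            hist[frame[i]]++;
    }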
Message 8 of 9
Thanks DFGray
I will check it soon...
Message 9 of 9