LabVIEW


Memory problem with my subVI

Solved!
Go to solution

Hello,

I am facing some memory issues while processing large arrays in one of my subVIs, which is part of a large project. The arrays are decomposed multiple times in a for loop by the LabVIEW SVD Decomposition VI (http://zone.ni.com/reference/en-XX/help/371361R-01/gmath/svd_decomp/). I have attached screenshots of the subVI to this post; there are 2 cases, but only one of them (the 2nd) is used in the calculation.

As you can see, the input to the SVD function is a 2D array whose size in each iteration is approx. 100x4, but the number of iterations is more than 2000. The input data type is single-precision complex. Most of the time I get the error 'Not enough memory to complete this operation.' and then an error window like the one in error.jpg appears.
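
In NumPy-style pseudocode, the loop looks roughly like this (placeholder data; collecting the full U matrices is just my illustration of where the memory can go, not a copy of the actual VI):

    import numpy as np

    n_iter = 2000
    results = []
    for i in range(n_iter):
        # one ~100x4 single-precision complex slice per iteration (stand-in data)
        a = (np.random.rand(100, 4) + 1j * np.random.rand(100, 4)).astype(np.complex64)
        u, s, vh = np.linalg.svd(a)   # full_matrices=True (default): u is 100x100
        results.append(u)             # 2000 x 100x100 x 8 B ~= 160 MB for u alone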

I really have no idea how I can improve the memory efficiency of this VI. Should I do something like storing the result of the SVD decomposition per iteration in a file and reading it back later when I need it? Is there a way to deallocate memory after each iteration?

Thank you; a solution would help me a lot.

 

(P.S. I use the 32-bit version of LabVIEW because the 64-bit version doesn't support some of the modules I need.)

Message 1 of 14
Solution
Accepted by lukas.maliar

It would probably help if you keep the Front Panel of the VI closed.

 

That error is about the FrontPanelDataController. Each control/indicator contains a copy of the data if its FP is open.

 

Yes, storing and processing parts later would help too.

 

DVRs might help to avoid copies.

 

That for loop before the case structure should be inside the case. As it is, there might be a copy of the data...

 

Use Show Memory Allocations (in Tools > Profile, IIRC). Dots on arrays should be avoided.

Message 2 of 14

Your input array is 4D, not 3D as labeled (same for some of the outputs). What is the size of the fourth dimension?

 

Can you attach the actual VI and some typical data? What's happening in the subVIs? Where is the data coming from?

 

Shouldn't that small FOR loop be in the "1" case?

Message 3 of 14

@altenbach wrote:

Your input array is 4D, not 3D as labeled (same for some of the outputs). What is the size of the fourth dimension?


It does say "3D .. Array". So that could mean a 3D array of .. 1D arrays. 😄 Could be semantics.

 

3D arrays are usually iffy, 4D definitely. It isn't 'bad' or 'wrong' per se, but it tends to get very confusing and hard to handle.

 

If the data is 1D, it will get much more comprehensible if you push that (in a queue?) around to the next processing stages. Hard to say without the whole picture though...

Message 4 of 14

Thank you all for the answers.

 


to @altenbach:

OK, to make it easier to solve, I will upload the subVI here with typical data. I forgot to mention that the front panel is not open when it runs. I can't upload the whole project; it consists of around 50 VIs and probably wouldn't help much.

And you're right, the input array to the subVI is 4D, but it doesn't matter much, because it is reshaped to 3D at the beginning and then iterated through the 3rd dimension in both cases. You're also right that the small for loop should be inside case 1, but that is not so important.

 

to @

 



Message 5 of 14

@lukas.maliar wrote:

to @




If you turn on "Show Memory Allocations", each copy shows as a dot. More or less, anyway. So avoid dots, because they mean copies of the data. Dots on scalar values can be ignored in this case, but dots on (large) arrays are bad.

Message 6 of 14

I don't know anything about "SVD Decomposition" or its purpose for your dataset.  Here are some observations about memory usage that may or may not be appropriate and expected for this kind of data processing.

 

     You start with a 4D array of complex elements, dimensions 6x16x4x4097 ~= 1.5 million elements x 8 bytes each.  Roughly 12.5 MB.

     Method 1 (the default value) wants to produce a 3D array of dimension 4097x96x96 ~= 37.8 million elements x 8 bytes each.  Roughly 300 MB.

     Method 0 (the averaging case) wants to produce a 4D array of dimension 6x4097x16x16 ~= 6.3 million elements x 8 bytes each.  Roughly 50 MB.
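
(Quick check of that arithmetic in Python-style pseudocode, using 8 bytes per single-precision complex element:)

    input_elems   = 6 * 16 * 4 * 4097     # ~1.57 million elements
    method1_elems = 4097 * 96 * 96        # ~37.8 million elements
    method0_elems = 6 * 4097 * 16 * 16    # ~6.3 million elements
    for n in (input_elems, method1_elems, method0_elems):
        print(n, round(n * 8 / 1e6, 1), "MB")   # -> ~12.6, ~302.1, ~50.3 MB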

 

So first, confirm that this kind of data expansion is necessary and appropriate for your processing methods.  Method 1 in particular looks like the biggest stressor.  I agree with Wiebe that at least part of the issue will be the data copy needed for the GUI indicator if the front panel is open.  A contiguous 300 MB chunk of memory isn't always trivial to find or make, which makes it especially important to limit data copies.

 

Beyond that, you may need to consider accumulating the results in a different data structure that isn't required to be contiguous.  Or possibly accumulating them in something like a fixed-record-size binary file that would allow for somewhat-array-like random access.    One way or another, you'll probably find you're trading off among memory space, access speed, and programming convenience (not trivial by the way -- difficult code tends to be more bug-prone).  The best choice will likely depend on how different parts of your app depend on access to this data.  
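
For illustration, here's the fixed-record-size idea sketched in Python rather than the LabVIEW file VIs (the 96x96 record shape is just the Method 1 page size from above):

    import numpy as np

    REC_SHAPE = (96, 96)                      # hypothetical per-record result shape
    REC_BYTES = int(np.prod(REC_SHAPE)) * 8   # complex single = 8 bytes per element

    # write: one record per iteration instead of growing an in-memory array
    with open("results.bin", "wb") as f:
        for i in range(4097):
            rec = np.zeros(REC_SHAPE, dtype=np.complex64)   # placeholder result
            f.write(rec.tobytes())

    # read: seek straight to record k, array-like random access without
    # loading the other ~300 MB
    def read_record(f, k):
        f.seek(k * REC_BYTES)
        return np.frombuffer(f.read(REC_BYTES), dtype=np.complex64).reshape(REC_SHAPE)

    with open("results.bin", "rb") as f:
        rec_100 = read_record(f, 100)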

 

 

-Kevin P

 

P.S. Below is what I got by dropping your VI onto a new empty block diagram, running it with its front panel closed, and graphing the PSD output.  I didn't get an "out of memory" error because there was no longer a need to make a separate 300 MB copy of the data for a GUI indicator.

    At a glance, the data appears reasonable, suggesting to me that the matrix results are also likely large out of necessity rather than because of an inadvertent error in the accumulation of results.

SVD PSD.png

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 7 of 14

Thank you, Kevin, for your analysis, and sorry for my very late answer, but I didn't have time to take a deeper look at my problem.

I can confirm that the data expansion you mentioned is really necessary for me.

Now I have spent some time reading this help topic https://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/vi_memory_usage/ and it helped me understand a few basic things about memory optimisation and buffer allocation. In my previous version of the VI, which I also uploaded in one of the posts above, there were a few mistakes that led to copies of some large arrays being created in buffers. Another problem was that I didn't know a copy of the data is created in memory when the data type of the input wire doesn't match the input control. The solution is to convert the data before it is wired to the control. I somehow always thought this was done automatically.
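
In NumPy-style pseudocode (shapes just for illustration), the difference is roughly:

    import numpy as np

    data = (np.random.rand(4097, 100, 4)
            + 1j * np.random.rand(4097, 100, 4))   # complex128 source data

    # converting inside the loop makes a fresh conversion copy every iteration:
    #   for page in data:
    #       u, s, vh = np.linalg.svd(page.astype(np.complex64), full_matrices=False)

    # converting once, up front, avoids the per-iteration conversion copies:
    data32 = data.astype(np.complex64)
    for page in data32:
        u, s, vh = np.linalg.svd(page, full_matrices=False)  # stays single precision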

But I still end up with arrays that consume too much memory, and my code is close to crashing. A few more input datasets and it will crash again.

 

So now I wonder about a possible solution. I will try to describe my idea, and I would be grateful to anyone who could show me a basic example. Imagine you have 2-3 for loops with autoindexed 2-3D array outputs, and there are millions of elements in the output array, so that holding the whole array in allocated memory gives you that 'not enough memory' error. How could I do only a part of all the iterations, save that part of the output array to a binary file, deallocate the memory (or at least reuse the allocated memory for the rest of the iterations), and then continue with the remaining iterations?

Message 8 of 14

@lukas.maliar wrote:

How could I do only a part of all the iterations, save that part of the output array to a binary file, deallocate the memory (or at least reuse the allocated memory for the rest of the iterations), and then continue with the remaining iterations?


Instead of putting the data in a shift register or using autoindexing, store it to a file. There would be no 'allocation of memory' (at least not the 'manual large array' allocation), so there is no need to deallocate memory (there seldom is in LabVIEW).
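
A rough sketch in Python-style pseudocode, with a stand-in for the real per-iteration work:

    import numpy as np

    def process(chunk):                       # stand-in for the per-iteration SVD work
        u, s, vh = np.linalg.svd(chunk, full_matrices=False)
        return u

    chunks = [np.eye(100, 4, dtype=np.complex64) for _ in range(2000)]

    # autoindexing / shift-register equivalent: every result lives in memory at once
    #   results = [process(c) for c in chunks]

    # store-as-you-go: only one iteration's result is in memory at a time
    with open("results.bin", "wb") as f:
        for c in chunks:
            f.write(process(c).tobytes())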

 

You might be overthinking this.

 

BTW, are you using LabVIEW 32-bit or 64-bit? That would make a huge difference. If you aren't using 64-bit LabVIEW, I'd consider it. You're still advised to think a bit about memory and performance, of course, but these problems might simply 'go away'.

 

What does task manager report on your memory?

 

A typical hack to reduce memory (by 50%) is to use singles instead of doubles... at the expense of accuracy, of course.
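
The factor of 2 is just the element size:

    import numpy as np
    print(np.dtype(np.complex128).itemsize)   # 16 bytes per double-precision complex
    print(np.dtype(np.complex64).itemsize)    # 8 bytes per single-precision complex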

Message 9 of 14

OK, I will try it as you wrote. But the write-to-file function should be inside the loop, right?

I am using the 32-bit version of LabVIEW because of a few modules that are not supported in 64-bit. I could consider 64-bit for this project (not sure whether I would miss any module in this project where I face the memory problems), but not for all my projects. In my task manager I can see 1700 MB used by LabVIEW at the peak while my VI runs, and I think it crashes after 2000 MB. BTW, that is weird indeed, because I use 64-bit Windows, and somewhere I read that with a 64-bit OS and 32-bit LabVIEW up to 3 GB of RAM could be used...

Message 10 of 14