
How to avoid memory increase with FOR loops

Hi All,

 

I'm having the following problem, shown with a couple of different approaches in the two attached example VIs:

- Each VI has an outer loop running 10000 times (for example).

- Each VI has an inner loop running 5 times.

- On each iteration, the inner loop retrieves one cluster element out of the 5-element input array. It may then do some calculation on it inside the inner loop and write the result to the 10000-element output array. (In the example VIs there is no internal calculation; they don't really do anything, just demonstrate the problem. A rough sketch of the pattern appears below.)

- The size of the 10000-element output array is about 87 MB.

The problem is that when I run the VIs and watch the LabVIEW memory in the Task Manager, it increases by more than 700 MB (rather than 87 MB).

It looks like the memory increase I see in the Task Manager is more or less the original (5-element) array size multiplied by the total number of iterations (10000 × 5).

If I change the 10000 constant of the outer FOR loop to a much higher number, the LabVIEW memory keeps increasing until it is out of memory...

So, is there any way to perform these iterations without the memory blowing up?

That is, to have the memory increase by the output array size only.
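
LabVIEW block diagrams can't be pasted as text, so here is a rough Python sketch of the loop pattern I mean (a sketch only; all names and sizes are illustrative, not taken from the actual VIs):

```python
# 5-element input "array of clusters": each cluster holds a string and an I32.
input_array = [{"name": f"ch{i}", "value": i} for i in range(5)]

N_OUTER = 10000
output = [None] * N_OUTER          # pre-allocate the 10000-element output

for i in range(N_OUTER):           # outer FOR loop
    for cluster in input_array:    # inner FOR loop, 5 iterations
        # ...some calculation on the cluster would go here...
        result = cluster           # the example VIs just pass data through
    output[i] = result             # write one element of the big output array
```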

I tried some different approaches in the 2 attached VIs but got the same results.

 

I'll be glad for some ideas on this...

 

Thanks in advance,

Mentos

Message 1 of 13

@Mentos wrote:

The problem is that when I run the VIs and watch the LabVIEW memory in the Task Manager, it increases by more than 700 MB (rather than 87 MB).

It looks like the memory increase I see in the Task Manager is more or less the original (5-element) array size multiplied by the total number of iterations (10000 × 5).


Do you have a Task Manager screenshot from before running the VI? I see a total memory use by LabVIEW of about 700 MB, not an "increase by" (size after minus size before). The LabVIEW memory use in the Task Manager includes everything, not just the data in one output array.
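
To make the distinction concrete, here is a minimal Python sketch (not LabVIEW; psutil is a third-party package, and the 87 MB bytearray is just a stand-in for running the VI):

```python
import psutil  # pip install psutil

proc = psutil.Process()
before = proc.memory_info().rss          # resident bytes before the run
data = bytearray(87 * 1024 * 1024)       # stand-in for the VI's output array
after = proc.memory_info().rss           # resident bytes after the run

# The "increase" is after minus before, not the total the process holds.
print(f"total now: {after / 2**20:.0f} MiB, "
      f"increase: {(after - before) / 2**20:.0f} MiB")
```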

Message 2 of 13

It should be painfully obvious that the calculation would only be valid if the output were exactly the input × 10000 × 5, but it isn't. A quick look at the data shows me that it is entirely different. Look at the first cluster out, for example. Each array inside each output cluster has 601 elements in it instead of 2, so of course it's going to be a lot bigger.

 

Maybe I misinterpreted the problem? 

 

I think I got confused.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 3 of 13

@billko wrote:

It should be painfully obvious that the calculation would only be valid if the output were exactly the input × 10000 × 5, but it isn't. A quick look at the data shows me that it is entirely different. Look at the first cluster out, for example. Each array inside each output cluster has 601 elements in it instead of 2, so of course it's going to be a lot bigger.


At first I thought the data might be calculated wrongly (flattening to string, etc.), but based on 10000 × ~11 bytes (7-character string + I32) × 601, plus a 4-byte I32 per cluster, I arrived at ~80 MB...
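
One plausible way to redo that estimate (a sketch only; it assumes each LabVIEW string carries a 4-byte length prefix, and actual in-memory layouts may differ):

```python
STRING_BYTES = 4 + 7                         # length prefix + 7 characters
I32_BYTES = 4
per_element = STRING_BYTES + I32_BYTES       # ~15 bytes per (string, I32) pair
per_cluster = 601 * per_element + I32_BYTES  # 601-element inner array + one I32
total = 10000 * per_cluster                  # 10000 output clusters

print(f"~{total / 2**20:.0f} MiB")           # ~86 MiB, near the reported 87 MB
```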

 

When I tried running the VI immediately after starting LabVIEW, I also observed a large increase in the memory held by LabVIEW, up to around 700-800 MB (from around 150 MB...). However, when I'm generally going about my programming, I often see LabVIEW using ~800 MB - I'd guess this is a typical running value for 32-bit LabVIEW 2017/2019.

 

When I repeatedly ran the VI, I observed (using Task Manager) a more reasonable increase of ~80-90 MB. This didn't increase further on subsequent runs.

 

So I suspect that the initial measurement is misleading (because it includes LabVIEW loading some stuff, maybe altenbach knows what...) but that subsequent measurements are pretty much in line with the expected value.
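
The warm-up-then-stabilize behavior is easy to reproduce outside LabVIEW too; here is a small Python sketch (psutil assumed installed; the bytearray stands in for the ~87 MB output array):

```python
import psutil  # pip install psutil

proc = psutil.Process()
buf = None
for run in range(4):
    buf = bytearray(87 * 1024 * 1024)   # reallocate the "output array"
    rss = proc.memory_info().rss / 2**20
    print(f"run {run}: {rss:.0f} MiB resident")
# Expected: a jump on run 0, then roughly flat on later runs - matching the
# observation that repeated runs don't keep growing.
```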

 

OP: depending on your actual application, you may find you can streamline this a bit by changing your data structures. Replace Array Subset may be less than totally fantastic when you are placing new, larger arrays into the array's elements each iteration (since the initializing element actually contains two empty arrays).
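
A rough Python analogy for that data-structure point (a sketch only; Build Array, Initialize Array and Replace Array Subset are LabVIEW nodes, so the mapping is loose):

```python
N = 10000

# Growing inside the loop: repeated reallocation/copying as the array grows.
grown = []
for i in range(N):
    grown.append(i)                  # like Build Array inside the loop

# Pre-allocate once, then replace in place: one allocation up front...
out = [0] * N                        # like Initialize Array
for i in range(N):
    out[i] = i                       # like Replace Array Subset
# ...but the in-place benefit is weakened if each replacement element is
# larger than the one used to initialize (e.g. clusters holding empty arrays).
```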


Message 4 of 13

Hi altenbach,

 

I ran the VI just after opening the LabVIEW environment.

The Task Manager was showing about 60 MB. (Sorry, I don't have a picture at the moment.)

After running the VI, the Task Manager went up to 771 MB. So it's an increase of about 711 MB, while the output array is only 87 MB.

Message 5 of 13

"right after opening" is probably not a good baseline. Also your output array (assuming that the inner data, arrays etc. Does not change size) requires at least 3x the memory you are estimating (wire data, transfer buffer, indicator, etc.). If the code hints that the inner arrays change size, the compiler might also preemptively allocate slack space for each.

 

Why exactly do you need these gigantic outputs? What does the rest of your program do with it?

Message 6 of 13

Hi cbutcher,

 

I'm not sure I fully understood your last note:

("OP: depending on your actual application, you may find you can streamline this a bit by changing your data structures. Replace Array Subset may be less than totally fantastic when you are placing new arrays into the array's element each iteration (since the initializing element is in fact with two empty arrays).")

 

I tried to use the Replace Array Subset function in my 2nd VI with no success.

(I used it to replace the new clusters in the outer loop, but I still needed some other functions to retrieve the correct cluster in the inner loop, and that seemed to blow up the memory.)

Can you post some example or give some more explanation?

 

* To further explain this issue:

After running the VI for the first time, the amount of used memory indeed stabilizes.

But I don't think it's normal behavior for the memory to increase by more than 700 MB when creating an output of 87 MB.

Imagine replacing the 10000 constant with a much higher number, let's say 300000...

In such a case, the first run would make the memory blow up until LabVIEW runs out of memory.

So I'm hoping there's a better way to implement this.

 

* I'm using LabVIEW 2011 on Windows 10, 64-bit OS.

Message 7 of 13

So the little bit at the end was just a suggestion that despite the use of Replace Array Subset (RAS), you might not get (won't get? I think arrays inside clusters could be allocated as pointers, but I really don't know...) in-placeness, because you need more memory than you allocated with Initialize Array.

 

The 700 MB in my case is, I suspect, not proportional to the cost of the VI, but rather a flat cost of using the LabVIEW development IDE. Assuming that you have enough memory, try changing your 10000 value to perhaps 20000 - this should increase your cost by a factor of 2(ish). If you get to 1.5 GB, you'll know something's weird. If you get 700 MB + 2×90 MB, you'll know there's just a sizeable fixed offset for running LabVIEW.
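
A sketch of that experiment's logic (Python stand-in; psutil assumed installed, and the ~9 KB per outer iteration is taken from the size estimate earlier in the thread):

```python
import psutil  # pip install psutil

proc = psutil.Process()
for n in (10000, 20000):
    data = bytearray(n * 9000)          # stand-in for the VI's output at size n
    rss = proc.memory_info().rss / 2**20
    print(f"n={n}: {rss:.0f} MiB resident")
    del data
# Growth proportional to n points at the data; a large constant offset that
# doesn't move between the two runs points at the IDE's fixed cost.
```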


Message 8 of 13

Well, LabVIEW crashes when I change the 10000 to 200000.

As far as I can see, it happens somewhere around iteration 173000, and the Task Manager shows more than 3 GB at that point. So I think something is fishy.
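
A rough consistency check of that crash point (a sketch; it assumes 32-bit LabVIEW, whose address space tops out around 2-4 GB, and reuses the numbers from earlier in the thread):

```python
per_iter_mb = 87 / 10000               # ~8.7 KB of output per outer iteration
at_crash_gb = 173000 * per_iter_mb / 1000
print(f"output alone at crash: ~{at_crash_gb:.1f} GB")   # ~1.5 GB
# With 2-3 transient copies (wire/transfer/indicator, see message 6), that is
# ~3 GB or more - consistent with Task Manager showing over 3 GB at the crash.
```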

 

* Basically, I'm doing some post-processing on very big files, and that's why these huge numbers are required.

 

 

Message 9 of 13

The previous message said 10000 to 20000, as in 10,000 to 20,000. You changed it from 10,000 to 200,000.

 

 

Message 10 of 13