08-12-2010 11:34 AM
I'm supposed to be working on code for a lab, and they have reported possible problems with LabVIEW eating through memory on long experiments. Someone before me tried to fix the problem, but I am unsure whether it is actually helping. (I'm more familiar with languages like C++ and had not used LabVIEW prior to this summer.)
I believe the problem lies with an array inside a loop. Depending on the experiment, the arrays will be of different sizes, so here is how they handle the array:
-> It is an array of clusters of 2 elements.
-> The array is wired to a shift register.
-> The shift register is initialized before the loop starts by wiring it to a cluster of two 0's.
-> Each loop cycle, they append new data (a new cluster) to the array using "Build Array".
There are multiple such arrays all being plotted, so after "Build Array" they use "Build Cluster Array" and wire the result to the corresponding plot (an XY Graph).
That used to be the whole design, so the arrays would grow large and crash the program. Someone before me added an option to clear the arrays, but I am unsure whether the way she designed it actually releases the memory, since they are still reporting some problems. The user enters a number in a control labeled "Clear After:". On every iteration that is a multiple of that number, the program passes the shift register an array with one element. That one-element array is set up the same as the array used for initialization.
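In case it helps, here is roughly what I understand that logic to be doing, sketched in C++ (the names and the vector stand-in for the shift register are just mine for illustration, not anything from the actual VI):

```cpp
#include <cassert>
#include <vector>

// Rough C++ picture of the existing design: the growing array lives in a
// "shift register" (here, a vector kept across iterations). Every
// clear_after iterations it is replaced by a one-element array of zeros,
// then this cycle's point is appended ("Build Array").
void run_iteration(std::vector<double>& data, long iteration,
                   long clear_after, double new_point) {
    if (clear_after > 0 && iteration % clear_after == 0) {
        data.assign(1, 0.0);        // the "clear": pass back a 1-element array
    }
    data.push_back(new_point);      // Build Array: append this cycle's point
}
```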
My concern is that the code never explicitly deletes the array or releases the memory. It feels very similar to the situation in C++ where the programmer dynamically creates an array (using new) but never deallocates it (using delete), and instead just changes where the pointer points. There, the memory would still be tied up and unusable.
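To make the analogy concrete, this is the kind of C++ pattern I'm worried the VI is equivalent to (made-up function names, purely for illustration):

```cpp
#include <cassert>
#include <cstddef>

// The worrying pattern: the pointer is re-aimed at a fresh small array,
// but the old allocation is never freed, so it leaks.
double* reset_without_delete(double* data) {
    data = new double[1]{0.0};   // just repoints; the old block is leaked
    return data;
}

// What a correct "clear" would look like if memory were manual.
double* reset_with_delete(double* data) {
    delete[] data;               // release the old block first
    return new double[1]{0.0};
}
```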
So I guess my question is: looking at the process above, do I need to use "Delete from Array" to release the memory and allow the program to run faster on long experiments with large datasets, or does LabVIEW automatically deallocate that memory, meaning I should be looking elsewhere in my program for whatever slows everything down on longer experiments?
Thanks,
Val
08-12-2010 11:42 AM
Can you post your code? I think something is very wrong with the program structure.
08-12-2010 11:56 AM
Val,
The generally accepted way to manage large arrays is to preallocate an array as large as the maximum size that needs to be in memory at any one time. The Initialize Array function is good for this. Then pass the array via a shift register (which it seems you are doing). Inside the loop, use Replace Array Subset to change the appropriate element(s). Depending on the requirements, you may need a second shift register to keep track of the index(es) where data is to be replaced. Avoid the use of Build Array in a loop.
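Since you mentioned a C++ background, the same pattern there would look roughly like this (the struct and names are just an illustration, not LabVIEW code): preallocate once, keep a running index in place of the second shift register, and overwrite slots instead of growing the array.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Preallocate once (analogous to Initialize Array), then overwrite
// elements in place (analogous to Replace Array Subset) instead of
// growing the array every iteration with Build Array.
struct PointBuffer {
    std::vector<double> x, y;
    std::size_t next = 0;          // plays the role of the index shift register

    explicit PointBuffer(std::size_t capacity)
        : x(capacity, 0.0), y(capacity, 0.0) {}

    void add(double xv, double yv) {
        x[next % x.size()] = xv;   // wrap to the start when the buffer is full
        y[next % y.size()] = yv;
        ++next;
    }
};
```

The modulo wrap is one option for what to do when the buffer fills; no new memory is ever allocated after construction.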
LabVIEW requires that all the elements of an array be in contiguous memory. So every time Build Array makes an array larger than the currently allocated space, the memory manager must find a new space large enough for the whole array and then move it. It may not take many such moves before a contiguous space large enough no longer exists.
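If it helps to see that cost in C++ terms, here is a rough sketch that counts how often a std::vector's storage moves as it grows element by element versus with preallocation (exact counts are implementation-dependent; this is only an analogy for the LabVIEW memory manager, not a description of it):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count how many times the vector's backing storage moves as it grows
// one element at a time -- the same kind of repeated reallocation that
// growing an array with Build Array forces on the memory manager.
std::size_t count_reallocations(std::size_t n, std::size_t reserve_first) {
    std::vector<int> v;
    if (reserve_first > 0) v.reserve(reserve_first); // preallocation
    std::size_t moves = 0;
    const int* last = v.data();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(0);
        if (v.data() != last) { ++moves; last = v.data(); }
    }
    return moves;
}
```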
As Coq Rouge requested, please post your code.
Lynn
08-12-2010 11:59 AM
I have attached an image of the portion of code I was referring to. It contains two screenshots so you can see all possibilities in the two case structures.
The first picture shows a cycle that adds new data points and does not clear the array.
The second picture shows the program passing the array through (which it does every second cycle) and then "clearing" it (which, as I said above, I am not sure is correct).
(None of this is actually my code; I was hired to upgrade them from LabVIEW 5.1 to LabVIEW 2009, and they just asked me to look at this. It seems to work fine on shorter experiments on the order of a couple of hours.) If you need anything else from me, don't hesitate to ask.
Thanks,
Val
08-12-2010 12:14 PM
Please attach pictures in .png graphic format. .docx requires the user to have a newer version of Microsoft Word installed, which is not always the case on this forum. As previous posters have stated, using Build Array in a loop is a bad idea; it really will cause a performance hit as well as a risk of crashes, because the memory manager has to keep reallocating memory.
08-12-2010 12:15 PM - edited 08-12-2010 12:16 PM
Just a friendly tip, Val: as far as possible, please do not post only pictures of your code. As you may understand, it is quite hard to debug a picture. As a rule of thumb, those who post actual code get the most help.
08-12-2010 12:25 PM
This begs the question of whether a 1 x N array of clusters of two elements is more memory intensive than, say, a 2 x N DBL array. You could try building each variable as its own 1 x N DBL or SGL array, then building the plots by bundling the appropriate arrays into clusters and building those clusters into the arrays that are fed to plots 1 and 2.
Somebody with more experience in the memory allocation for these operations can comment on whether this is more efficient.
08-12-2010 12:30 PM
Ok, I'll put the piece of code up if you give me a few minutes. I'm downloading WinZip.
(There are other problems with the code, I'm sure, since it was written by researchers rather than programmers, so I didn't want to post the whole thing and have people get distracted by other issues.)
So if I use Initialize Array, when I reach the end should I just start writing data at the top again?
Code coming up next post. 🙂
08-12-2010 12:37 PM
Open the CBS_Control_PIDGraph2.vi
The loop I am referring to is at the bottom right.
Thanks,
Val
08-12-2010 12:49 PM
And Thanks for the friendly tip Coq Rouge 🙂