
Ideas to release the memory associated with Dynamically Instantiated Objects

Hi All,

 

Any time a LVOOP class constant is executed on a LV diagram, memory is allocated for that instance.

 

If I have code that instantiates a bunch of instances, I have to be careful how I structure my code to avoid new allocations. I have used a number of different techniques, ranging from running the code in a separate process context (so that I can kill that context and let the LV clean-up release the allocated memory) to recycling object instances by recasting the instance as a different child.

 

I am trying to help out a co-worker who keeps repeating the same mantra over and over: "No Destructors in LV".

 

So...

 

I am curious what techniques others have used to keep memory under control and to avoid the issue that, each time memory is allocated for an instance of a class, it stays allocated until the LV clean-up runs.

 

Thank you in advance,

 

Ben 

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 1 of 17

Ben wrote:

Any time a LVOOP class constant is executed on a LV diagram, memory is allocated for that instance.


As I understand it, that's not quite right. "On a desktop system ... LV shares only one copy of the default value, but on real-time targets, that would be a full allocation (because everything has to be preallocated to avoid jitter in the event that the object gets passed into a deterministic section)." So, at least on a desktop operating system, you can drop as many LVOOP constants as you want without increasing memory substantially, until you write to those objects, at which point a full copy is made (because it's no longer the default value).

 

I have not tested or researched this thoroughly, but I don't remember seeing anything that suggests that LabVIEW objects are treated differently than other LabVIEW data in terms of when the memory they occupy is released for reuse (to other LabVIEW data, not necessarily to the operating system). The same in-place rules apply to classes as they do to other data. When an instance of a class reaches the end of its wire, the instance can be freed or reused, and the compiler should handle this for you. If your coworker is thinking that a destructor is necessary, then he or she hasn't fully understood that LabVIEW classes are by-value and not by-ref.
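To put that in non-LabVIEW terms, purely as an analogy (this is not how LabVIEW is implemented internally): a by-value type in a text language is reclaimed automatically when it goes out of scope, without the programmer ever calling a destructor. A minimal C++ sketch, with a made-up BigObject type standing in for a class whose private data holds a big array:

```cpp
#include <iostream>
#include <vector>

// Hypothetical value type: owns a large buffer, loosely analogous to a
// LabVIEW class whose private data contains a big array.
struct BigObject {
    std::vector<double> data;
    explicit BigObject(std::size_t n) : data(n, 0.0) {}
};

void run_once() {
    BigObject obj(1000000);      // memory for this instance is allocated here
    obj.data[0] = 42.0;          // work with the instance
    std::cout << obj.data[0] << '\n';
}   // obj goes out of scope here and its memory is reclaimed automatically,
    // much as by-value data can be freed or reused once its wire ends

int main() {
    run_once();
    run_once();   // the second call can reuse the freed memory; nothing piles up
    return 0;
}
```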

 

How large are your classes, that you are worried about individual instances? A class shouldn't take up much more memory than an equivalent cluster. Of course, if your class data includes a huge array, that's an issue, but not any more of an issue than dealing with a huge array outside a class.

Message 2 of 17

Thank you for the reply, nathan!

 

Let me attempt to illustrate the challenge a little better.

 

Imagine you have written a completely generic tester in LVOOP. It can test anything, provided a tester class exists for that type of test.

 

When your Generic Tester (GT) starts, the user selects the class for the test at hand. That class in turn does what it has to do to run the test. In the process of doing its work, all of the supporting classes end up allocating ... 1 GB of memory.

 

After the test is done, the user chooses to run a different test that is supported by a sibling class of the first test. It works similarly to the first version and ends up allocating another 1 GB.

 

Three times around the pole and we are all tied up.

 

Another example:

 

I have an app that renders things in 3D. All of the points of interest are represented by spheres in 3-space. Those spheres are each instances of a class and are part of a surface (another class) associated with a single plot. In that case the application has a For Loop containing a "Sphere" class constant that creates one instance of the Sphere class for each data point. When the image is done being rendered, there is 1 GB allocated.

 

Now the user switches to another data file and wants to inspect a different file with more or fewer data points. If I just start out again using the same For Loop to create all of the spheres, new memory is allocated while the memory used for the first viewing is still allocated, and I am now up to 2 GB.

 

I took another approach to work around that issue. The work-around required that I recycle the spheres. If the first plot had 1000 data points and the new plot needs 1001, the For Loop uses a Case Structure to redefine the spheres I had previously used, and then iterates one last time; this time the Case Structure is used to get the Sphere class constant to define the additional point.

 

Recycling the objects in the surface (redefining them) and only creating a new instance when I do not have enough allocated already worked fine, with one obvious exception: if the user goes to a data file with fewer data points, the memory used never goes down until LV exits. In this latter case it would be nice to be able to pass the class wire to a function and ask LV to release the memory associated with that instance. That function does not exist (and yes, I have read what Stephan has written about the danger of deallocating and later trying to access it).
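Roughly, in text-language pseudocode (C++ here, used only as illustration; names like Sphere and load_points are made up and are not the actual VIs), the recycle-or-allocate idea looks like this:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Illustrative stand-in for the "Sphere" class in the 3D viewer.
struct Sphere {
    double x = 0, y = 0, z = 0;
    void redefine(double nx, double ny, double nz) { x = nx; y = ny; z = nz; }
};

// Recycle-or-allocate: redefine an existing Sphere when one is available,
// and create a new one only when the data set has more points than we hold.
// Surplus spheres from an earlier, larger data set stay allocated, which is
// exactly the limitation described above.
void load_points(std::vector<Sphere>& pool,
                 const std::vector<std::array<double, 3>>& points) {
    for (std::size_t i = 0; i < points.size(); ++i) {
        if (i >= pool.size())
            pool.emplace_back();                                    // the "Sphere class constant" case
        pool[i].redefine(points[i][0], points[i][1], points[i][2]); // the reuse case
    }
}

int main() {
    std::vector<Sphere> pool;
    load_points(pool, std::vector<std::array<double, 3>>(1000));  // first plot: 1000 points
    load_points(pool, std::vector<std::array<double, 3>>(1001));  // second plot: one new Sphere
    load_points(pool, std::vector<std::array<double, 3>>(500));   // smaller plot: 1001 still held
    return 0;
}
```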

 

But lacking the ability to tell LV "please release the memory for this object", I am looking for a Design Pattern to handle this LV-unique challenge.

 

Thank you!

 

Ben

 

 

Message 3 of 17

Hi there,

 

I am not an OOP guru myself, but it seems that the problem is the lack of a memory deallocation function in LabVIEW, right? Well, have you checked the Request Deallocation function? This might be what you are looking for.

 

Memory-wise, arrays require a defined space in memory, which auto-indexing on a For Loop will provide. But if you increase the number of iterations, you need more memory than the original space can hold, so you end up creating a new contiguous memory space large enough to hold the data, and this eventually fragments the RAM.

 

Ideally you would reuse the already available memory space, but if you need to auto-increment data, perhaps after each test cycle you can reset all values to default and call the Request Deallocation function at the end, to be ready before the next test.
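A rough text-language analogy of that advice (C++ used only as illustration; this is not LabVIEW internals): reserve the expected space once and reuse it across cycles, and only hand the memory back at the end of a cycle, loosely comparable to resetting values to default and requesting deallocation.

```cpp
#include <vector>

int main() {
    std::vector<double> results;

    // Reserve the expected size once instead of growing element by element,
    // which would force repeated reallocations of contiguous memory.
    results.reserve(100000);

    for (int i = 0; i < 100000; ++i)
        results.push_back(i * 0.5);   // fills the preallocated space, no reallocation

    // Between test runs: reuse the same buffer.
    results.clear();                  // size back to 0, capacity kept

    // At the end of a test cycle: give the memory back, loosely analogous to
    // resetting values to default and calling Request Deallocation.
    results.shrink_to_fit();
    return 0;
}
```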

 

I hope this helps

Alejandro C. | National Instruments
Message 4 of 17

I am aware of memory management and performance techniques in LV, as is illustrated by my collection of tags on that subject going back 7 years or so; see here:

 

http://forums.ni.com/t5/tag/LabVIEW_Performance/tg-p

 

BTW: If you have not read Greg McKaskle's posts, I highly recommend you do so.

 

Re: Deallocate

 

That only works when the VI containing it goes idle, and then the LV clean-up will release that memory.

 

My objective in the post is to point out that LVOOP as currently implemented is lacking, since we cannot control when object instances are destroyed and cannot flag LV to deallocate the resources assigned to those instances.

 

A simple word example to make the point more obvious:

 

Say we decide to develop a game application in LVOOP. Not just one game, but any game. So we have a Game object as the parent, and the different game types we want are set up as children. When the application runs, it prompts asking which game we want to play, instantiates that flavor of game, and we play to the end. When the game ends we can choose to play again, exit, or play a different game.

 

So we start out playing Tetris: we use the Factory Pattern to spit out a Tetris object, and we use that until the game is done.

 

Then we decide to play Jutland (which would be an absolute memory pig to track all of the ships).

 

At this point we have memory allocated for the Tetris instances and the Jutland instance, and that memory ain't coming back.

 

We can be clever and recycle these instances to let the gamer switch back and forth between the games, until they elect to choose another game and we hope there is enough memory to handle the next game.

 

LVOOP simply cannot realize this fictional game application.

 

But...

 

If LVOOP would let us destroy a class instance, we could destroy the reference at the end of the game, free up the memory, and move on to the next game.

 

So, lacking a destroy, how do we develop a trivial application like a game app without running out of memory?

 

Ben

 

 

Message 5 of 17

@Ben wrote:

So, lacking a destroy, how do we develop a trivial application like a game app without running out of memory?

 


DVRs?  
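For readers unfamiliar with the suggestion: a DVR (Data Value Reference) holds data by reference; it is created with New Data Value Reference and explicitly destroyed with Delete Data Value Reference, which gives the developer a defined point after which the wrapped data is no longer held by the reference. A loose C++ analogy (an analogy only, with made-up names):

```cpp
#include <memory>
#include <vector>

// Illustrative payload standing in for a game's or test's large private data.
struct GameData {
    std::vector<double> state = std::vector<double>(1000000);
};

int main() {
    // Roughly "New Data Value Reference": allocate once, hold by reference.
    auto ref = std::make_unique<GameData>();

    ref->state[0] = 1.0;   // work through the reference

    // Roughly "Delete Data Value Reference": end the lifetime explicitly,
    // at a point the developer chooses, instead of waiting for clean-up.
    ref.reset();
    return 0;
}
```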

Message 6 of 17

@Ben wrote:
[..]

My objective in the post is to point out that LVOOP as currently implemented is lacking, since we cannot control when object instances are destroyed and cannot flag LV to deallocate the resources assigned to those instances.

[..]

Ben


I concur with you that having more influence on memory management would be nice, especially when working with LVOOP. Having no way to "terminate" an object is hard to work with under certain circumstances.

 

On the other hand, providing a feature like this for LVOOP would be against the very basics of LV itself, so I understand LV R&D's decision NOT to provide methods of object destruction.

 

That being said, I am not sure if I understand your underlying issue correctly. Is it possible for you to provide an example which shows in a simple way what you are doing and where we can see the increasing memory consumption?

 

From what I understand, working with plug-ins could be a solution in order to reduce memory consumption; still, memory has to be re-organized if you load a different child class for consecutive executions of your application. So you would see some impact on execution times, but (if objects of all child classes consume a similar amount of data) no severe increase in memory allocation. At least... once the application hits a size where the OS is not easily persuaded to provide more memory, that is. In that case, the LV memory manager would re-evaluate its already allocated memory and re-use memory which is currently not used otherwise....

 

BTW: What version of LV is it?

 

Norbert

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 7 of 17

" What version of LV is it?"

 

8.6 through 2013

 

The app that prompted me to post is being developed by a coworker, and of course I cannot post it. It is a "Universal Tester" that can test any of the widgets the customer wants and can switch from one to the other in any order.

 

All I can do is use words but that may be enough...

 

Consider the application as a While Loop that iterates once for each game played. The first thing that happens is a prompt to choose a game to play. All of the games inherit from the same parent, which has three methods: Init, Play, and Close. Those three methods are called in that order in the loop as each game is selected, played, and then ended.

 

A Factory Pattern is used to spit out the right flavor of child depending on which game is played.
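Sketched in a text language (C++ as pseudocode for the block diagram; class and function names are illustrative, not the actual code), the structure being described is roughly:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Parent class with the three methods every game overrides.
struct Game {
    virtual void Init()  {}
    virtual void Play()  {}
    virtual void Close() {}
    virtual ~Game() = default;
};

struct Tetris : Game {
    std::vector<int> board = std::vector<int>(200);
    void Play() override { std::cout << "playing Tetris\n"; }
};

struct Jutland : Game {
    std::vector<double> ships = std::vector<double>(1000000);  // the "memory pig"
    void Play() override { std::cout << "playing Jutland\n"; }
};

// Factory: spits out the right flavor of child for the selected game.
std::unique_ptr<Game> MakeGame(const std::string& name) {
    if (name == "Tetris")  return std::make_unique<Tetris>();
    if (name == "Jutland") return std::make_unique<Jutland>();
    return nullptr;
}

int main() {
    // Stand-in for the While Loop: one iteration per game played.
    for (const std::string choice : {"Tetris", "Jutland", "Tetris"}) {
        auto game = MakeGame(choice);
        game->Init();
        game->Play();
        game->Close();
    }   // in C++ each game's memory is released here before the next game is
        // created -- the step the by-value LVOOP design cannot express
    return 0;
}
```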

 

If we play three different games, then LV allocates resources for the private data of each instance, and the resources for the first game stay allocated.

 

I have worked around this in other applications by bending over backwards.

 

Example:

I have a 3D viewer that gobbles up memory due to the large data sets. Each point in 3-space is an instance of a "Point" object and is part of a surface. When switching between data sets I need to load different surfaces, since each model can have different shapes.

 

Loading one data set can consume just short of 3 GB of memory.

 

Switching to the second data set, which uses a different surface with a different set of Points, means a whole new block of memory is required. In the initial version of that app, the second data set would crash LV.

 

The "Bend over backwards" thing required that I "recycle" the "Points" used by the first surface for use with the second. I only allocate more Points if the new surface has more points that the previous surface. That approach worked but I had to adjust my design to accomodate the inability to deaalocate resources when I was done with them.

 

I understand "buffer copy on wire branch" thing and how it complicates memory management and LV's prefromance favoring choice to allocate once and only give it up when we are done.

 

But if you look back at my word model of that simple While Loop game app, I am not branching wires and I have a single instance that I want to kill.

 

LVOOP needs one of these that works.

 

 

Ben

 

Message 8 of 17

@Ben wrote:
[...] the second data set would crash LV.

 


So what is that crash? An out-of-memory dialog?

Do the games share resources in the form of containment (aggregation pattern)? Or are all games completely independent from one another?

 

Norbert

Message 9 of 17

Norbert_B wrote:

 


So what is that crash? An out-of-memory dialog?

Do the games share resources in the form of containment (aggregation pattern)? Or are all games completely independent from one another?

 

Norbert


Out of Memory

 

In the case of the 3D graph, I fixed it by recycling the Points.

 

Re: Game

For the sake of this discussion, completely independent.

 

And to be completely clear, I did not write the game app.

 

OOP, to the extent of my limited understanding, is completely focused on the problem we are trying to solve and is completely devoid of any concept of the environment in which it runs.

 

But since we cannot release the resources allocated to a class instance by destroying it, we are forced in LVOOP to keep the environment in mind. LabVIEW as designed by Jeff K freed the developer from the mundane tasks of malloc etc., but due to LVOOP missing a "Destroy", we are forced to think about the environment again.

 

We have a tail wagging the dog situation.

 

I realize that the decision was made years ago not to implement the destroy. I believe that decision should be reconsidered now that we have taken the car around the block a couple of times.

 

Ben

Message 10 of 17