
Actor Framework Discussions


Actor Memory Usage and Release

Solved!
Go to solution

I have an Actor that analyzes a fairly large data set, on the order of about 1 GB. What I'm running into is that after my Analysis Actor completes execution, it doesn't seem to release the memory back to the system, as seen through the Windows Resource and Performance Monitors. It has been my understanding that LabVIEW releases memory back to the system after the top-level VI finishes. Since the AF uses the asynchronous call nodes, making each actor its own top-level VI, I would have expected LabVIEW to release the memory once the actor completes.

Am I correct in my thinking? Or is the Actor remaining in memory and if so how can I release the memory used back to the system?

Ryan Podsim, CLA
0 Kudos
Message 1 of 10
(7,450 Views)

LVOOP objects are never released once loaded into memory. Some constraint they couldn't get around in the design. So any AF actor you load and use will hang around until you exit the application.

I recommend using SQLite or TDMS to analyze your data set in chunks or in a stream, leaving the remainder on disk when you aren't using it.
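To make the chunked idea concrete outside of LabVIEW, here's a minimal Python analogy (the database path and the `samples` table are hypothetical): the analysis visits a large table in fixed-size chunks, so only one chunk is ever resident in memory at a time.

```python
import sqlite3

def chunked_sum(db_path, chunk_rows=100_000):
    """Stream a large table in fixed-size chunks, keeping only one
    chunk in memory at a time -- the same idea as reading a TDMS or
    SQLite data set piecewise instead of loading the whole 1 GB."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute("SELECT value FROM samples")  # hypothetical table
        total = 0.0
        while True:
            chunk = cur.fetchmany(chunk_rows)  # only this chunk is resident
            if not chunk:
                break
            total += sum(v for (v,) in chunk)
        return total
    finally:
        conn.close()
```

Any reduction (sum, histogram, min/max) that can be computed chunk-by-chunk fits this pattern; only analyses that need random access to the whole set at once force a full in-memory load.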

Message 2 of 10
(4,518 Views)

Is that true for any VI within a class, or just the object itself? I'm not storing any of the data in LVOOP objects. I am using classes to define the analysis, but the data is read and analyzed within a single Dynamic Dispatch VI (plus associated subVIs).

I'm guessing that since the object isn't getting released then the class VIs aren't either?

Ryan Podsim, CLA
0 Kudos
Message 3 of 10
(4,518 Views)

An LVOOP class is a Project Library, and (except in a binary distribution where it's been explicitly disabled) all of a Project Library's VIs are loaded into memory when any of them is. It stands to reason, then, that all of a class's method VIs will stay loaded as long as an object of the class is in memory (which is forever, once it's been used).

If your object holds the data set as a private member, then the object is taking up that 1 GB allocation. If any of your analysis VIs/subVIs create or store copies of the data by splitting the wire or using SRs/FNs, then (I think) those copies stay in the VI's memory allocation. I should warn, though, that I'm starting to tax the bounds of my knowledge of the LV memory manager and execution system.

0 Kudos
Message 4 of 10
(4,518 Views)

Try replacing the big data set with something small before closing the Actor.  The LabVIEW memory manager might notice that and clean up.  If the data is in a queue, close the queue reference.

Best solution is avoidance - do that analysis point-by-point, for data that big point-by-point could be much faster than filling up all that memory.
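The point-by-point idea can be sketched in Python as a running-statistics accumulator (Welford's algorithm, used here as an illustrative stand-in for LabVIEW's point-by-point VIs): each sample is folded into constant-size state, so the full data set never has to exist in memory.

```python
class RunningStats:
    """Point-by-point mean/variance via Welford's algorithm: each new
    sample updates O(1) state, so a 1 GB stream needs only a few bytes
    of working memory."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Sample variance; undefined for fewer than two points.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0
```

Feed it one point per loop iteration as the data is read from disk, and the memory footprint stays flat regardless of how long the acquisition runs.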

Disclaimer:  I have not yet used Actor Framework, but I have been LabVIEW-ing since LabVIEW 2.5.

Message 5 of 10
(4,518 Views)

RMThebert wrote:

Try replacing the big data set with something small before closing the Actor.  The LabVIEW memory manager might notice that and clean up.  If the data is in a queue, close the queue reference.

Hey, good idea! Just call "Reshape Array" with an argument of '0' to release any huge arrays held in the object's data.

0 Kudos
Message 6 of 10
(4,518 Views)

David_Staab wrote:

LVOOP objects are never released once loaded into memory.


FALSE.

LabVIEW *classes* are never released. LabVIEW *objects* are released all the time.

I'll write up a more complete answer to the original question in a bit, but I wanted to quash that rumor right now.

0 Kudos
Message 7 of 10
(4,518 Views)

rpodsim wrote:

I have an Actor that analyzes a fairly large data set, on the order of about 1 GB. What I'm running into is that after my Analysis Actor completes execution, it doesn't seem to release the memory back to the system, as seen through the Windows Resource and Performance Monitors. It has been my understanding that LabVIEW releases memory back to the system after the top-level VI finishes. Since the AF uses the asynchronous call nodes, making each actor its own top-level VI, I would have expected LabVIEW to release the memory once the actor completes.

Am I correct in my thinking? Or is the Actor remaining in memory and if so how can I release the memory used back to the system?

Memory should be released when the VI is unloaded, not when the top-level VI finishes.  The Actor and Actor Core VIs are shared clones, and I don't think they will be unloaded until all running code stops.  Deliberately setting any large arrays in shift registers to empty before exiting, as has been suggested, is the way to go.

Note that Windows Resource and Performance Monitors may not reflect the freed memory, as that may be retained for reuse by LabVIEW.

0 Kudos
Message 8 of 10
(4,518 Views)
Solution
Accepted by topic author rpodsim

This issue has very little to do with LabVIEW classes (or with LabVIEW objects, to continue emphasizing that distinction from my previous post). It has everything to do with reentrant VIs and LabVIEW's overall rules for allocation of memory on terminals.

For the purposes of this discussion, let's assume that every wire in LabVIEW is an independent memory allocation. In actuality, the LV compiler optimizes and shares memory heavily, but for this discussion, this simplification works. I'm also assuming that none of the constant folding or other optimizations happen.

First, let's talk about a non-reentrant VI.

When a LabVIEW VI loads into memory, it allocates space for its data as a big block, which we call the VI's data space. An int32 wire adds 4 bytes. A double adds 8 bytes. A string adds either 4 or 8 bytes depending upon whether the system is 32-bit or 64-bit, because a string's data value is a pointer to a block of the actual string text. That memory is initially all zeros -- the numbers have the value of zero, the strings have a null pointer. Then we go through and set values for things that are constants and default values. So the integer may be set to 4, the double to 5.5, and the string gets allocated to be a pointer to a block of the text. But most of the data block is still all zeros because most of your wires are not constants or default values.

Now, you run your VI. Suppose an Add primitive adds that 4 and that 5.5 together. The output wire of the Add primitive is zero at the start. After the add executes, it is now 9.5. The original input wires are still 4 and 5.5. It is important that we not overwrite those values because the VI might be run a second time and we'll need those values to still be around.

A Concatenate Strings does much the same thing... two strings, each with its own allocation, come in, and LabVIEW outputs a new string by allocating a new block of memory, copying the contents of both strings into the new block, and outputting the pointer to the new block. (Just a reminder that I've simplified out lots of optimizations that minimize how often this sort of full reallocation actually happens.)

So, now you have three string addresses in memory, the two original input strings and the output string.

The VI finishes executing its code. Maybe it is a subVI of another VI, maybe it is a top-level VI itself. Doesn't matter... the behavior is the same. LabVIEW does NOT go through and deallocate every terminal. Instead, it leaves them allocated. Why? Because if the VI is executed a second time, the memory is already allocated. For a top-level VI, this is a minor benefit. For a deep subVI, it can be a major improvement. What we found through lots of experimentation is that the probability of the allocation being the same size on the next execution as on this execution is pretty high, so we do not bother to deallocate terminals until the next iteration comes around and we find out we need a different size. If the new requirement is bigger or smaller, we deallocate the current block and allocate a new block (I think there's a couple special points in the code where we reuse an already-allocated block for a wire if we need a smaller block, but that's pretty rare).
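As a toy analogy (Python, not LabVIEW internals), the "leave terminals allocated, reallocate only when the next call needs a different size" policy looks something like this:

```python
class TerminalBuffer:
    """Toy model of LabVIEW's terminal allocation policy: the buffer from
    the previous execution is kept around, and is only thrown away and
    reallocated when the next execution needs a different size."""
    def __init__(self):
        self.buf = None
        self.reallocations = 0  # counts how often we actually (re)allocate

    def acquire(self, size):
        if self.buf is None or len(self.buf) != size:
            self.buf = bytearray(size)  # deallocate old block, allocate new
            self.reallocations += 1
        return self.buf  # same size as last time: reuse, no allocation
```

Repeated calls with the same size pay the allocation cost once, which is exactly the bet described above; the downside, as in the original question, is that the last allocation lingers after the final call.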

Ok, now let's look at how this impacts reentrant VIs.

Full reentrant VIs are essentially unchanged -- their allocations can be thought of as simply an extension to the caller's allocations since each caller gets its own copy (not actually true in practice, but good enough for this discussion).

Shared reentrant VIs... ah. Each clone has its own separate data space allocation. When you make a call to the subVI, we grab the next available clone or allocate a new one if all of them are currently in use. And they follow the same rules as the non-reentrant VIs -- their terminals get deallocated only when they are called again.
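A minimal sketch of that clone-pool behavior, again as a Python analogy rather than LabVIEW's actual implementation:

```python
class Clone:
    """A clone's data space; `buf` persists between uses, like the
    terminals of a shared-reentrant VI between calls."""
    def __init__(self):
        self.buf = None

class ClonePool:
    """Grab an idle clone, or allocate a new one when all are busy.
    A released clone keeps its data space until its next use."""
    def __init__(self):
        self.idle, self.busy = [], []

    def grab(self):
        clone = self.idle.pop() if self.idle else Clone()
        self.busy.append(clone)
        return clone

    def release(self, clone):
        self.busy.remove(clone)
        self.idle.append(clone)  # note: clone.buf stays allocated here
```

The key consequence for the original question is visible in `release`: giving the clone back to the pool does not clear its data space, so whatever large array it last held stays allocated until that particular clone is grabbed and run again.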

So in your case, with the actors, there's data still sitting in the wires when the Actor Core.vi finishes running (and in the front panel controls if you've got the panel open). And all that data will stay allocated until that particular clone of Actor Core.vi runs again and LV decides it no longer needs the allocation.

The Anxious Deallocate primitive exists to force LabVIEW to deallocate all those terminals on that VI (not on any of its subVIs, just that one VI) when it finishes each execution. Use that primitive *very sparingly*. You will save a giant amount of memory, but you will burn a giant amount of performance as LabVIEW deallocates and reallocates repeatedly.

Still, you bring up a reasonable point that the Actor Core.vi might be a valid place to use the Anxious Deallocate primitive. Sure it is possible that the clone might be used for the same type of actor the next time it is called, but it is equally possible that a completely different class of object will be assigned to that clone. We're not generally launching and killing actors in a tight loop such that the pre-allocations would save us much. It might be worth trying to put that primitive on the diagram of Actor.lvclass:Actor Core.vi and see what impact it has on your code (and on your overrides of Actor Core.vi).

But the short answer is that you would see this behavior with or without LV classes if you just get a large array passing through subVIs... it stays allocated until the next time a smaller array comes through. And with clones, it could be a while until that clone ever gets exercised again if the parallelism is rare.

And the earlier suggestion about replacing the large data set inside your object before you exit is also helpful.

Message 9 of 10
(4,518 Views)

BTW, what kind of analysis are you doing?  I, personally, would not try to handle an entire GB of data in memory at one time.  My favorite shiny tool for data analysis at the moment is SQLite, which allows one to keep data on disk and indexed for quick lookup.

0 Kudos
Message 10 of 10
(4,518 Views)