10-29-2008 10:25 AM
My RT application has a subvi that executes a rather complicated test sequence; this sequence includes a lot of Build Array functions and other memory hogs. Rewriting the sequence to handle memory allocation more efficiently would be a pretty serious undertaking, so for now, I'd just like to deallocate the memory each time the subvi finishes executing.
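To illustrate what I mean by "memory hog" (this is just a rough C analogy I'm using to reason about it, not how LabVIEW actually works internally), each Build Array inside a loop behaves something like growing a buffer one element at a time:

```c
#include <stdlib.h>

/* Illustrative only: Build Array in a loop behaves roughly like
   growing a buffer one element per iteration, which can force a
   reallocation (and a full copy) of the whole buffer each time. */
double *build_array_style(int n)
{
    double *data = NULL;
    for (int i = 0; i < n; i++) {
        /* each iteration may reallocate and copy everything so far */
        double *tmp = realloc(data, (i + 1) * sizeof(double));
        if (tmp == NULL) { free(data); return NULL; }
        data = tmp;
        data[i] = (double)i; /* stand-in for the real measurement */
    }
    return data;
}
```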
I tried placing the Request Memory Deallocation function at the end of the sequence, but it appears to have caused the RTOS to crash--I would lose all communication with the RT controller (PXI-8106), and it would eventually "Reboot due to System Error". Is there any reason why I wouldn't be able to deallocate the memory programmatically on the RTOS? According to NI's "Memory management in LabVIEW Realtime" tutorial (http://zone.ni.com/devzone/cda/tut/p/id/4537), "Automatic memory handling is one of the chief benefits of LabVIEW Real-Time. However, because it is automatic, you have less control over it. For example, functions that generate data allocate storage for that data. When data is no longer needed, LabVIEW Real-Time deallocates the associated memory." So, am I to understand that the RTOS will automatically deallocate the memory at the end of the subvi call? (that does not appear to be the case)
If LabVIEW does not automatically deallocate the memory, is there anything else that I could do to release the memory programmatically?
Thank you for your assistance.
10-30-2008 07:38 AM
TurboPhil wrote: My RT application ...
includes a lot of Build Array functions and other memory hogs. Rewriting the sequence to handle memory allocation more efficiently would be a pretty serious undertaking, so for now, I'd just like to deallocate the memory each time the subvi finishes executing.
... is there anything else that I could do to release the memory programmatically?
Thank you for your assistance.
Yes, but it will be easier to just do it right rather than trying to patch it up.
The issue is those two letters "RT" = Real-Time.
RT operating systems are intended to be deterministic, so anything and everything that threatens determinism is "right out" (see the instructions for the Holy Hand Grenade in "Monty Python and the Holy Grail").
The following represents my understanding and therefore should be taken with a grain of salt.
In the context of your query, this means that stopping to allocate memory has to be minimized or eliminated. To run an app, the code has to be put in physical memory, which requires reading from disk, setting up memory mapping, etc. Since the disk I/O is asynchronous, the OS can't stay deterministic while this is going on. Since it can't be avoided (if you want to run code), memory management is minimized. Once allocated, memory stays allocated... well, sorta. The memory remains allocated to LV, and then LV handles which code uses which buffers.

So behind the scenes, LV is tracking how much space is required for each buffer, and it will mark as available any buffers that are no longer used. An example: if you pass a large array to a sub-VI, a buffer is allocated for that transfer. The buffer will remain set aside for a similar operation at a later time, so it does not have to be allocated again. This illustrates why, when you first start up an RT app, you may experience some jitter while all of the data paths are being laid out.

If you call the sub-VI again, but this time only pass a small array, LV will reduce the size of the buffer set aside for passing the array. The memory still belongs to LV but can be used for other purposes, provided they are appropriate. By this I am referring to the requirement that all buffers used by LV must be contiguous. If buffers originally allocated for your sub-VI are reduced and the "extra" memory is a large enough contiguous block to be used in another call, it will be re-used.
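If it helps to see that idea outside of G, here is my own rough C sketch of the "once allocated, it stays allocated" behavior (purely illustrative; this is NOT how LV's memory manager is actually written):

```c
#include <stdlib.h>

/* Sketch of "once allocated, it stays allocated": the buffer grows
   when a call needs more room, but it is never returned to the OS
   between calls, so later calls of a similar size cause no
   allocation (and therefore no jitter). */
static double *buffer = NULL;
static size_t  capacity = 0;   /* elements currently set aside */

double *get_transfer_buffer(size_t needed)
{
    if (needed > capacity) {
        /* the first call (or a bigger request) pays the allocation cost */
        double *tmp = realloc(buffer, needed * sizeof(double));
        if (tmp == NULL) return NULL;
        buffer = tmp;
        capacity = needed;
    }
    /* a smaller request reuses the existing block; the "extra"
       space still belongs to us and costs nothing to keep around */
    return buffer;
}
```

The point is that a second call of a similar size never touches the allocator, which is exactly what keeps the jitter down after the first iteration.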
So....
One approach you could use to try to patch up what you have is to re-run your sub-VI and have it write empty arrays and strings into all of its wires, thereby freeing up those buffers for use elsewhere. But to be successful with this approach, you have to be able to spot where buffers are used and allocated, and then bend over backwards to clean up the memory mess. Some would call this approach hit-and-miss.
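In C clothing (again, just my own sketch with hypothetical names), that patch-up amounts to explicitly shrinking every buffer you can find down to nothing:

```c
#include <stdlib.h>

/* The patch-up approach: explicitly empty each buffer so its memory
   can be reused elsewhere. The hard part, exactly as in the LabVIEW
   case, is knowing every place a buffer actually lives. */
void release_buffer(double **buf, size_t *capacity)
{
    free(*buf);      /* hand the block back */
    *buf = NULL;     /* the "write an empty array into the wire" step */
    *capacity = 0;
}
```

Miss even one buffer and the memory stays tied up, which is why I call it hit-and-miss.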
I would like to strongly urge you to bite the bullet, analyze your memory demands, and concentrate on the big ones. Develop an Action Engine that performs all of the work "in place" so that YOU can manage where and when buffers are being used.
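In text form, an Action Engine (a VI with an uninitialized shift register plus an action input) boils down to something like this C sketch. MAX_SAMPLES and the action names are my assumptions; size the storage for your worst case:

```c
#include <stddef.h>
#include <string.h>

/* Rough C equivalent of an Action Engine / functional global:
   one worst-case buffer allocated up front, manipulated in place
   through a small set of actions, so nothing is allocated at
   run time. MAX_SAMPLES is an assumed worst case you choose. */
#define MAX_SAMPLES 100000

typedef enum { INIT, APPEND, READ, CLEAR } action_t;

size_t action_engine(action_t action, const double *in, size_t n_in,
                     double *out)
{
    static double store[MAX_SAMPLES]; /* the "shift register" */
    static size_t count = 0;

    switch (action) {
    case INIT:
    case CLEAR:
        count = 0;                        /* logical reset, no free() */
        break;
    case APPEND:
        if (count + n_in > MAX_SAMPLES)
            n_in = MAX_SAMPLES - count;   /* clip to the worst case */
        memcpy(&store[count], in, n_in * sizeof(double));
        count += n_in;
        break;
    case READ:
        if (out != NULL)
            memcpy(out, store, count * sizeof(double));
        return count;
    }
    return count;
}
```

Because the storage is allocated once and reset logically (count = 0) instead of being freed, no call ever stops to allocate memory, and YOU decide where every byte lives.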
Just trying to help,
Ben