LabVIEW


Strange Memory Leak

I'm working with an ActiveX API that uses events to capture the status of all of the I/O on a small PLC.  Essentially, the PLC contains 8 analog I/O and 16 digital I/O, and the status of all of these arrives via a single event, which requires a loop of API calls to decode the event data.  Part of this process involves querying the PLC for its list of sources (all of the I/O names).  The API returns the sources as a variant data type, which is converted to a String Array.  Performing this conversion every time the event fired was causing a memory leak proportional to the rate at which the events occur.

My question is, "Is this expected behavior?"  It seems as though a buffer is created to do the conversion and that buffer is never freed, so every time the event occurs, another buffer is created.
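To put the pattern into text form, here's a rough C sketch of what I suspect is going on (purely illustrative; the names are invented and I obviously can't see inside LabVIEW's memory manager):

```c
#include <stdlib.h>
#include <string.h>

/* Per-event decode as I suspect it works: a fresh buffer is allocated
   for the converted string array on every event and never released,
   so memory use grows with the event rate. */
void on_plc_event(const char *event_data, size_t n_sources)
{
    /* a new array allocation each time the event fires... */
    char **sources = malloc(n_sources * sizeof *sources);
    for (size_t i = 0; i < n_sources; i++) {
        /* ...plus one allocation per decoded source name */
        sources[i] = strdup(event_data); /* stand-in for the real decode */
    }
    /* the names get used, then the callback returns WITHOUT freeing
       'sources' -- another buffer leaks on every event */
}
```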

 

Now, in my working version of the attached block diagram, I pass a class datatype in for the User Data, but I stripped it down for illustration.  Part of that class is a queue reference for passing the data out of the callback VI.  To work around the memory leak, I now pass the decoded String Array in as part of the class datatype, so the event doesn't have to decode it every time.
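In rough C terms, the workaround amounts to something like this hypothetical sketch (the struct and names are invented, not my actual class):

```c
#include <stddef.h>

/* The source names are decoded once, up front, and passed in through
   the user data, so the callback never converts the variant again. */
typedef struct {
    char  **sources;    /* decoded once, before events are registered */
    size_t  n_sources;
    void   *out_queue;  /* stand-in for the queue reference in my class */
} CallbackData;

void on_plc_event(const char *event_data, CallbackData *ud)
{
    /* read the cached names only; nothing is allocated per event */
    for (size_t i = 0; i < ud->n_sources; i++) {
        /* match event_data against ud->sources[i] and push the decoded
           status into ud->out_queue */
    }
}
```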

Does anyone care to comment on the Variant to String Array question?

 

Thanks,
Bruce

 



Bruce K

Not as smart as many other peepz on here....
Message 1 of 5

It does create a buffer allocation. Have you tried initializing the array beforehand and using an In Place Element structure with that array as the type input to the Variant to String Array call?
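In text form, the idea is to reuse one preallocated buffer instead of allocating a fresh one per event. A hypothetical C sketch of the concept (on a block diagram, the In Place Element structure is how you would express the reuse):

```c
/* 8 analog + 16 digital = 24 sources in this case */
enum { MAX_SOURCES = 24, MAX_NAME_LEN = 64 };

/* one buffer, allocated once for the life of the program */
static char sources[MAX_SOURCES][MAX_NAME_LEN];

void on_plc_event(const char *event_data)
{
    for (int i = 0; i < MAX_SOURCES; i++) {
        /* decode source i from event_data into sources[i], overwriting
           the previous contents -- no new allocation per event */
        sources[i][0] = '\0'; /* placeholder for the real decode */
    }
}
```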



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 2 of 5

Mark,

Thank you for the quick response.  I don't think initializing the array is a valid option: since the customer has different versions of these PLCs, I can't know for sure how many entries there would be from system to system.  And it wouldn't be future-proof in the event that they migrate to a different version of the PLC down the road.

 

I think you did answer my question, though.  I'm curious why LabVIEW doesn't eventually free that buffer once the event callback is complete.  Is this a bug in LabVIEW that I'm having to work around?

Bruce K

Not as smart as many other peepz on here....
Message 3 of 5

That is one of the mysteries of how LabVIEW manages memory for you. You could try adding a call to the "Request Deallocation" VI; however, this is just a suggestion to the memory manager. There is no guarantee any memory will actually be freed after making the call, but it does give the memory manager a hint that it could do some cleanup at that point.

 

As for preallocating, you could have a parameter that you pass in, with some reasonable default value, that provides enough space to contain the PLC data. At startup the application could read the number of parameters you are getting from this particular PLC and then preallocate the array using that value. This would help future-proof the code somewhat. Though I'm not sure whether you would see the same issue with the preallocated array; it may have the same result.
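As a hypothetical C sketch of that startup-time sizing idea (names invented, error handling omitted):

```c
#include <stdlib.h>

/* Ask the PLC for its source count once at startup, then size the
   reusable buffer from that value instead of a hard-coded constant. */
char **alloc_source_buffer(size_t n_sources, size_t max_name_len)
{
    char **buf = malloc(n_sources * sizeof *buf);
    for (size_t i = 0; i < n_sources; i++)
        buf[i] = calloc(max_name_len, 1);
    return buf; /* allocated once, reused by every event, freed at shutdown */
}
```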



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 4 of 5

You might consider trying the Desktop Execution Trace Toolkit.  It provides valuable information on memory allocations, and I've used it more than once in the past to find memory leaks in Windows applications.

http://sine.ni.com/nips/cds/view/p/lang/en/nid/209044

Message 5 of 5