01-20-2016 11:45 AM
Hi,
I've got a relatively large LabVIEW application that I'm planning to run for lifetime testing - i.e. it will need to stay up for several months.
The VI runs some setup code, then loops over some capture code at a regular interval (currently every 20 minutes while I'm debugging).
Unfortunately, I seem to run out of memory after a week or so, and LabVIEW crashes 😞
When I initially load LabVIEW and start the application, Task Manager shows that I'm using about 300 MB of RAM. Within a few hours, I'm using about 600 MB.
If I look at the profiler, there is nothing in my internal application that is using more than about 10 MB, and none of these appear to be growing.
If I use the DETT, I find no leaking references, and when I include memory allocations in the trace it generates too much data - I can only capture for a few hours before it fills the 100 GB of disk I have available. I've not found anything useful from looking at that either.
I've added "Request Deallocation" to the looped VI that runs every 20 minutes, but this makes no difference.
Any suggestions on how to find what is taking this memory? Any idea how to fix it? Is there any VI that can be run once a week to purge all VIs of unused memory?
If I stop the application, the memory remains high. If I close the project, there is also no change in memory (as reported by Task Manager).
I'm running on Windows Server 2012, if it makes a difference.
Thanks for any suggestions,
Will
--
01-20-2016 11:53 AM
Desktop Execution Trace Toolkit can help you find memory leaks.
VI Analyzer can help too, by flagging poor LabVIEW practices. Can you provide details about the code that runs out of memory, as well as the OS, OS bitness, and dependencies?
01-20-2016 12:05 PM
These types of issues are usually caused by growing arrays and/or strings, so keep an eye out for those and see if you can keep the sizes constant and/or perform operations in place.
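Since LabVIEW block diagrams can't be pasted inline, here is the same idea sketched in Python purely for illustration: growing a buffer on every iteration versus preallocating it once and replacing elements in place. The function names are made up for the example.

```python
# Growing pattern: the buffer is reallocated as it grows, which in
# LabVIEW terms can fragment memory over a long-running loop.
def growing_buffer(n):
    data = []
    for i in range(n):
        data.append(i * 2)      # buffer grows on every iteration
    return data

# Preallocated pattern: allocate once up front, then replace elements
# in place, so the memory footprint stays constant.
def preallocated_buffer(n):
    data = [0] * n              # allocate once, up front
    for i in range(n):
        data[i] = i * 2         # overwrite in place; size never changes
    return data
```

In LabVIEW the second pattern corresponds to Initialize Array followed by Replace Array Subset inside the loop.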
01-20-2016 01:05 PM - edited 01-20-2016 01:06 PM
Feel free to post a simplified version of your VI so we can look out for typical scenarios. It is not just growing arrays - general array resizing can also cause memory fragmentation, until suddenly there is insufficient contiguous memory (arrays are stored in contiguous memory).
Since you are streaming the data to disk, you should make sure that there isn't another copy of the data left in memory. Are you building arrays from scratch before occasionally writing them to disk? Can you operate in place instead? Are you writing to a binary file, or are you using extensive formatting? Is the file kept open, or are you constantly opening and closing it? What file IO tools are you using?
"Request Deallocation" is often a bad idea, especially if the same subVI is called again with the same data size. It is much cheaper to hold on to the allocated data than to deallocate it and then allocate again on the next call.
01-21-2016 04:10 AM
Hi again, and thanks for the questions.
I'll try to make a simplified VI to post, but for now some answers:
I'm running Windows Server 2012, 64 bit. LabVIEW 2014, 32 bit.
As I mentioned, the DETT doesn't find any reference leaks, and it generates too much information to be useful if I select memory allocations.
I'll run VI Analyzer to see what it says.
In general, though, I would expect the profiler to list all memory, so if any block were using a large amount, it should be visible there. Or does the profiler only list the memory currently required, not the memory allocated and forgotten about (e.g. by resizing arrays)? Is there any way to find the total allocated memory size of each VI? (Then it would be easy to find where the problem is.)
My comment about disk usage was in reference to the DETT's logging. However, yes, you're right, I do write my results to disk and don't keep them in memory - at least not beyond the end of the "core measurement VI" (which requests deallocation afterwards anyway).
There should be minimal array resizing going on, although there is some array filtering, which may amount to the same thing? I attach a VI where I do an in-place update - is this done properly, or do I need to do it a different way?
The "Result" is a single float value that is created within a loop, so no, it's not stored after it's written. It's written to a CSV file, and I append to the CSV file (along with the input configuration) after each measurement is made. Each line is generated with a string concatenation.
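For illustration (since I can't paste the block diagram as text), my logging step is roughly equivalent to this Python sketch; the path and field names are just placeholders:

```python
import csv

# Rough analogue of the logging step: each measurement produces a
# single float, which is appended to the CSV file together with its
# input configuration, one line per measurement. Nothing accumulates
# in memory between calls.
def append_result(path, config, result):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([config, result])  # one line per measurement
```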
Agreed - in C, I'd definitely pre-allocate the memory and then work within it. I'm a bit of a LabVIEW novice, so I've tried to do in-place edits of arrays (as per the example below), but maybe that's not the way to go..
01-21-2016 04:26 AM
willholl wrote:
Agreed - in C, I'd definitely pre-allocate the memory and then work within it. I'm a bit of a LabVIEW novice, so I've tried to do in-place edits of arrays (as per the example below), but maybe that's not the way to go..
In your example, there is no need for the Index Array nor the In Place Element Structure. You can just use the Replace Array Subset directly.
01-21-2016 04:47 AM
After looking at the code a little more, I made this update.
1. Use the FOR loop to do your search.
You can enable a Conditional Terminal on a FOR loop to abort the loop before it runs through all of the elements, so you can use that to stop your FOR loop when you find the desired element. This actually eliminates the need to build up an array and use Search 1D Array. Not building that array will help your memory issue some.
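For anyone more comfortable with text-based code, the same early-exit idea looks roughly like this in Python (an illustrative analogue, not LabVIEW code):

```python
# Stop searching as soon as the element is found, instead of first
# building a full array of names and then running Search 1D Array
# over it. The early return plays the role of the FOR loop's
# conditional terminal.
def find_index(items, target):
    for i, item in enumerate(items):
        if item == target:
            return i            # early exit: stop as soon as we find it
    return -1                   # mirrors Search 1D Array's "not found"
```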
2. Use the In Place Element Structure to index the item you want changed. Not exactly needed (the LabVIEW compiler does a lot of optimizations), but it makes things a little cleaner.
3. The Error Ring allows inputs for the source, so you can add a "%s" to the description in the Error Ring and you will see an extra string input where you can wire up the search name when it is not found.
01-21-2016 04:50 AM
Thanks for the suggestion; however, I don't understand how to use the "Replace Array Subset" function.
In my example, I start with an array of clusters, a name ("bitfield"), and a value to update ("value").
I make a new array of the "bfname" values so I can look up the index of the cluster I want to work with. I don't see how I can avoid this with "Replace Array Subset".
Perhaps you mean this (attached) would be a more efficient method?
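As a text-based analogue (Python used purely for illustration; the field names are placeholders matching my cluster), the operation I'm trying to do is roughly:

```python
# Locate the record whose name matches, and update its value in
# place - without first building a separate array of names to search.
def update_bitfield(records, name, value):
    for rec in records:
        if rec["bfname"] == name:    # field names are illustrative
            rec["value"] = value     # modify in place; no copies made
            return True
    return False                     # name not found
```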
01-21-2016 04:56 AM
Thanks. That's (unsurprisingly) much better than my second attempt! I'll add this, and see how that affects my memory usage.
01-21-2016 06:19 AM
After doing a little playing, the previous solution seems elegant, but it's not a complete fix for my memory issue.
I've found there are lots of VIs that read in files, pass them through the VI attached, then perform some functions before writing out.
It will be a huge amount of work to go through all the VIs and make sure that they are all behaving nicely.
What would be very nice, though, is a "Request Deallocation Now" that goes and defrags all the VIs in memory. I don't see it as a particular performance hit: when I'm done, I'll run a test that takes about 2 hours, then wait 12 hours before running it again. I reckon 12 hours should be more than enough for LabVIEW to tidy itself up... Unless I'm mistaken, Request Deallocation only deallocates the memory associated with the VI it's attached to, and not that of its subVIs. I guess I'll have to take a performance hit (instead of spending weeks reprogramming this) and just add a Request Deallocation to all subVIs.
It seems that the problem is long-standing, and as mentioned by others, whenever I create strings, resize arrays, etc., I'm potentially leaving some small non-contiguous space in LabVIEW's allocated memory that it will not use again, and will not release to the OS.
Thus, unless I write my VIs with the rigour of C (and I'm not sure how to do that), I'll never get a VI to run indefinitely, and we will always run out of memory at some point.