LabVIEW memory management changes in 2009-2011?

I'm upgrading a project that was running in LV8.6.  As part of this, I need to import a customer database and fix it.  The old DB has no relationships in it and the new software does, so I import the old DB, create the relationships, fix any broken ones, and write to the new DB.


I started getting memory crashes in the program, so I started watching Task Manager.  The LabVIEW 8.6 code on my machine peaks at 630 MB of memory when the database is fully loaded.  In LabVIEW 2011, it varies: the lowest I have seen is 1.2 GB, but it will climb to 1.5 GB and crash.  I tried LV 2010 and LV 2009 and see the same behavior.


I thought it might be the DB toolkit, since it looks like it was changed after 8.6, but that wasn't it (I copied the LV8.6 version into 2011 and saw the same problems).  I'm now fairly sure it is a difference in how LabVIEW handles memory in these subVIs.  I modified the code to still do the DB SELECTs but do nothing with the data, and there is still a huge difference in memory usage.


I have started dropping memory deallocation VIs into the subVIs and that is helping, but I still cannot get back to the LV 8.6 numbers.  The biggest savings came from dropping one into the DB toolkit's fetch subVI.


What changed in LabVIEW 2009 to cause this change in memory handling?  Is there a way to address it?

Message 1 of 22

I created a couple of VIs which will demonstrate the issue.


For Memory Test 1, here's the memory (according to Task Manager):



               Pre-run    Run 1      Run 2      Run 3
LabVIEW 8.6      55504   246060     248900     248900
LabVIEW 2011     93120   705408    1101260    1101260


This gives me the relative memory increase of:



               Delta Run 1   Delta Run 2   Delta Run 3
LabVIEW 8.6         190556        193396        193396
LabVIEW 2011        612288       1008140       1008140
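The deltas are just each run's Task Manager figure minus the pre-run baseline; as a quick sketch (values in KB, copied from the tables above):

```python
# Deltas = memory after each run minus the pre-run baseline (KB, from
# the Task Manager figures reported above).
measurements = {
    "LabVIEW 8.6":  {"pre": 55504, "runs": [246060, 248900, 248900]},
    "LabVIEW 2011": {"pre": 93120, "runs": [705408, 1101260, 1101260]},
}

deltas = {
    version: [run - data["pre"] for run in data["runs"]]
    for version, data in measurements.items()
}

print(deltas)
```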


For Memory Test 2, it's the same except that the array of variants is dropped:



               Pre-run    Run 1      Run 2      Run 3
LabVIEW 8.6      57244    89864      92060      92060
LabVIEW 2011     90432   612348     617872     621852


This gives us deltas of:




               Delta Run 1   Delta Run 2   Delta Run 3
LabVIEW 8.6          32620         34816         34816
LabVIEW 2011        521916        527440        531420



What I found interesting in Memory Test #1 was that LabVIEW used more memory on the second run in LV2011 before it leveled off.  I started with Test 1 because it more closely resembles what the DB toolkit does, since it passes out variants that I then convert.  I thought maybe LabVIEW no longer stores variants internally the same way.  For Memory Test 2 I dropped the indicator, expecting that to make a huge difference, but it barely changed the result.


So what is happening?  I see similar behavior in LV2009 and LV2010.  LV2009 was significantly the worst; LV2010 was slightly better than 2011, but still significantly worse than 8.6.


               Pre-run    Run 1      Run 2      Run 3
LabVIEW 8.6      55504   246060     248900     248900
LabVIEW 2011     93120   705408    1101260    1101260
Message 2 of 22

Don't know how that last table snuck in there, but you can ignore it.

Message 3 of 22

Hi Matthew,


I opened up your VIs and I don't see where you connect to a database in them, so I'm a little confused.  Try looking at a couple of the examples available in the NI Example Finder.  You can access the Example Finder by clicking Find Examples... on the Getting Started window.  From there, navigate to Toolkits and Modules » Database Connectivity, look at the Database Connection VI and Database Fetching VI, compare them to the way you are accessing your database, and try modeling your code after them.  If you have done that and are still seeing this issue, let us know and we can look deeper into what is causing it.

Message 4 of 22

It doesn't connect to a DB, but it demonstrates the memory issue I am seeing.  I am not sure it has anything to do with the database toolkit, except that the DB toolkit generates a large set of variants that I convert into data.


Since I would have had to post a database along with a VI, I simplified it to generating an array of variants and converting it back.  You should be able to open the VI in 8.6 and in 2011 and, using Task Manager, see a huge difference in the amount of memory used for the same amount of data.  I think this is truly the issue; it just shows up easily in the DB toolkit.
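Not LabVIEW, of course, but the general effect is easy to illustrate in a textual language: storing numbers behind a generic "any type" wrapper (a variant) costs far more per element than a flat numeric array, because each element carries its own type information and allocation overhead.  A rough Python analogy, where each boxed float plays the role of a variant-wrapped DBL:

```python
import sys
from array import array

N = 100_000

# Flat numeric storage: one contiguous block of 8-byte doubles,
# roughly analogous to a plain DBL array in LabVIEW.
flat = array("d", (float(i) for i in range(N)))
flat_bytes = sys.getsizeof(flat)

# "Variant-like" storage: a container of individually boxed objects,
# each with its own header and allocation, analogous to variants.
boxed = [float(i) for i in range(N)]
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)

print(f"flat : {flat_bytes:>9} bytes")
print(f"boxed: {boxed_bytes:>9} bytes")
print(f"ratio: {boxed_bytes / flat_bytes:.1f}x")
```

The multiple-times blowup for the boxed version is the same shape of overhead the tables above show, though the LabVIEW numbers suggest the variant representation got heavier between 8.6 and 2009+.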

Message 5 of 22

Can anyone else at least confirm they see the same behavior?

Message 6 of 22

I see the same thing.


LabVIEW's memory profiler thinks it's only using 160 MB of memory.  It could just be a case of LabVIEW requesting a lot more memory than it's actually using.  I have no idea how memory management works in LabVIEW, but it makes sense that a garbage collector would ask for a lot more memory than it actually allocates, so that it doesn't have to request an even larger contiguous block later and then move all the data into that new block.
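That guess is at least plausible in general: many runtimes deliberately reserve more than they currently need so that growth doesn't force repeated reallocate-and-copy cycles.  CPython's list does exactly this, which makes for a small illustration (an analogy only; nothing here is documented LabVIEW behavior):

```python
import sys

# Append items one at a time and record the container's reported size.
# The size only jumps occasionally: the list grows its backing buffer
# geometrically, reserving more memory than the elements currently need.
sizes = []
lst = []
for _ in range(1000):
    lst.append(None)
    sizes.append(sys.getsizeof(lst))

resizes = len(set(sizes))
print(f"1000 appends produced only {resizes} distinct buffer sizes")
```

If LabVIEW's allocator behaves similarly, Task Manager would report the reserved high-water mark, not the live data the profiler sees.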


I don't have 8.6, so I can't use the profiler to see the memory usage as reported by LabVIEW in that version.

Message 7 of 22

So, I have done a little more testing, and the memory issue lies in the variant.  This would explain why it showed up so easily in the DB toolkit since that is all it uses.


In the two VIs below, I hardcoded some variant data into a constant.  Memory Test 6 converts it back to DBLs, and Memory Test 7 does nothing with it (I just deleted the Variant To Data).


For Memory Test 6:



        Open LV   Open VI   Run VI
8.6       61932     74600    90872
2011      97596    336312   344944

Looking at relative values:


        Open VI   Run VI
8.6       12668    28940
2011     238716   247348

For Memory Test 7:


        Open LV   Open VI   Run VI
8.6       59328     75324    75324
2011      97884    283472   283472

and the relative values:


        Open VI   Run VI
8.6       15996    15996
2011     185588   185588

It should also be noted that LabVIEW 8.6 releases the memory when the VI is closed.  LV2011 does not drop the memory when I close the VI.


It should also be noted that LV8.6 says the VIs hold 18 MB of data (via Memory Usage in VI Properties), while LV2011 reports 32 MB for #6 but 10k for #7.


So, it appears to be an issue (bug?) in how LV is handling variant data now.

Message 8 of 22

Thanks for your posts Matthew!


I don't have the PC to test and confirm but it sure sounds like a bug.


I have requested an App Engineer to...


1) Confirm your observations.


2) Log a CAR


3) Wait


4) Get an explanation from R&D.


Yes, there were major changes to memory usage when NI shifted to a new optimized compiler, and you are possibly seeing the effect of this change.



Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 9 of 22



Thanks for your help.  I certainly hope they don't try to justify variants using up to 5x the memory they used to in the name of compiler optimization!  In other testing, I did not see the same behavior when I used plain doubles.  It does appear to be specific to the variant data type.

Message 10 of 22