We have an application in which we have recently started to implement new features using classes. The features are not massively more complex and don't require a great deal of configuration data compared with the pre-existing functionality. However, as each of the new features (and classes) has been added and new builds generated, I have noticed that RAM usage when the exe is running has increased a lot from where we were prior to using them.
As a result, I've been trying to understand why this would be the case. During this investigation I've been using the VI Profiler and the Desktop Execution Trace Toolkit to try to see how memory is being allocated.
Prior to using classes, any data the application used was contained in a monolithic cluster in a Functional Global, and I've noticed that on the rare occasions when more complex clusters (arrays of clusters within arrays, etc.) are used, the memory footprint is huge compared with the 'actual' data we are attempting to store in them. For example, one data set that occupies around 400 KB in a configuration file requires approximately 10 MB (and around 60,000 blocks!?) when generated or read into the complex cluster. Watching this cluster in the Desktop Execution Trace Toolkit also shows a huge amount of memory manager activity.
I've since read that, because of the way LabVIEW allocates memory for complex clusters, they can lead to a significant amount of excess memory allocation (I'm guessing particularly for arrays nested inside clusters within clusters, etc.), and that using complex clusters can be very inefficient. However, I've also read that the LabVIEW OOP implementation treats class data the same way it treats clusters, so for complex classes (lots of data, inheritance, etc.) wouldn't this result in a similar situation, and could this be the reason for the increased memory usage?
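To see how a modest payload can turn into tens of thousands of blocks, here is a back-of-envelope sketch (in Python, purely as arithmetic — this is not LabVIEW's actual allocator, and the element counts and 64-byte per-block overhead are hypothetical numbers chosen only to illustrate the shape of the problem): every array or string nested inside a cluster element needs its own heap block, so nesting multiplies allocations.

```python
# Back-of-envelope model (NOT LabVIEW internals): each array/string nested
# inside a cluster element gets its own heap block, so an array of N
# clusters, each holding M sub-arrays, costs roughly N*M allocations
# before any payload bytes are counted. All numbers are hypothetical.

def estimate_blocks(outer_elements, arrays_per_element,
                    per_block_overhead=64, payload_bytes=400_000):
    # one block for the outer array handle, plus one per nested array
    blocks = 1 + outer_elements * arrays_per_element
    total_bytes = payload_bytes + blocks * per_block_overhead
    return blocks, total_bytes

blocks, total = estimate_blocks(outer_elements=5000, arrays_per_element=12)
print(blocks)   # 60001 separate blocks for a structure of this shape
print(total)    # 400 KB of payload plus ~3.8 MB of per-block overhead
```

The point is only that block count scales with the *number of nested containers*, not the payload size, which is consistent with a 400 KB file ballooning once it is unflattened into a deeply nested cluster.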
I've been struggling a bit to make sense of the Memory Profiler and the Desktop Execution Trace Toolkit when analysing the new classes, as the dynamic memory allocation is not even on the same scale. However, when I load a VI that simply contains all of the classes in the 2014 IDE, the memory footprint is practically the same as that of the entire application before adding these classes!
Does anybody have any idea why this would be the case? It seems completely crazy!
I'm afraid I can't provide the actual class libraries, so apologies for that, but to put things into perspective, here are a few numbers:
Prior to Classes:
Number of VIs - 2000
Memory on load in 2014 IDE - 280M
Peak RAM Usage in exe - 400M
With Classes:
Number of VIs - 2300
Memory usage on load in 2014 IDE - 480M
Peak RAM Usage in exe - 650M
I would appreciate any advice on using classes while keeping memory usage as low as possible (assuming we haven't done anything stupid?), and also any convenient and efficient way of grouping hierarchies of data sets, either within classes or by any other means. Thanks very much in advance!
LabVIEW classes are clusters with extended functionality. This extended functionality adds overhead.
That being said, using classes is not in itself a disadvantage compared with using clusters. Equally, the performance impact of large clusters applies just as much to a class representing the same data set (cluster).
The most obvious difference between clusters and classes reveals itself when the application, or its ongoing development, requires a lot of flexibility: classes are far more flexible and (if done properly) "secure".
However, the very same rules for using data types in LV apply to both of them, the first being to avoid creating unnecessary copies of large data (e.g. by branching wires). I would expect your application to have some instances where that first rule is not implemented in the best way. That means you not only multiply objects, but also multiply the overhead relative to clusters. Classes also require more files to be in memory (e.g. the .lvclass file).
Whether all of these items sum up to the huge difference you are stating, I don't know. It seems rather high, at a guess...
However, I recommend not using LV classes purely for the sake of using classes. Use classes if you plan to use their advantages, most obviously the great developer flexibility!
Hi, thanks for the reply and the clarification regarding classes and clusters.
I take your points regarding classes/clusters and their use with respect to branching wires etc. I must admit that prior to investigating this issue I wasn't really aware of (or concerned with) memory usage, unlike my embedded colleagues! That has changed now, and I've been doing some more reading in this area (In Place Element structures, DVRs, etc.).
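For anyone following along, the by-value vs by-reference distinction behind DVRs can be sketched in a few lines of Python (an analogy only — LabVIEW wires are by-value and branching one can force a full copy of the data, whereas a DVR shares a single in-memory instance; Python's normal assignment already behaves like the reference case):

```python
import copy

# A stand-in for a large LabVIEW data structure.
big = {"samples": list(range(1000))}

# Like a branched wire: a second, independent full copy of the data.
branch = copy.deepcopy(big)
branch["samples"][0] = -1
print(big["samples"][0])   # 0 -> the original buffer is untouched

# Like a DVR: both names refer to the same single buffer in memory.
dvr_like = big
dvr_like["samples"][0] = -1
print(big["samples"][0])   # -1 -> the change is visible through both names
```

The copy path doubles the memory for the data; the reference path keeps one buffer but, as with DVRs, requires you to think about who is allowed to write to it and when.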
However, when I simply load the classes (in my dummy VI, which just has them all sat there doing nothing), without even attempting to run the VI, populate or manipulate the class data, etc., I get a huge memory hit which seems to roughly correspond to the 'extra' memory I incur when running the application that contains the classes for real in the exe.
Do you think this memory hit is simply the cost of having the classes in memory, and if so, do you know what LabVIEW does when it loads a class? The constituent VIs on disk (although quite numerous because of the way classes are implemented, with get/set methods etc.) should be nowhere near accounting for this overhead. Does the class data get populated in some way, or reserved in memory?
I'd really like to continue developing using OOP as it's incredibly elegant and suits our application well, but until I can understand more about this issue I'm in serious trouble: I'm reluctant to add any more functionality to the software (regardless of its implementation) due to the operating constraints of our clients' (target) machines, which are often legacy XP panel PCs.
[...] and if so do you know what LabVIEW does when it loads a class? [...]
I cannot speak for the internals, as I do not know them, but I can give you some indicators.
When loading a VI file, LV has to allocate memory for that VI. Depending on which components are required, a different amount of memory is needed. One important question, for instance: is the front panel loaded?
Obviously, for a dialog VI it will load. But what about subVIs?
The memory usage of a VI can be reviewed under VI Properties >> Memory Usage. Please note that "Data" is the part which changes greatly during VI execution.
Of course, a VI will very often include subVIs, so-called "dependencies". All of these dependencies have to be loaded, right down to the most basic functions.
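That transitive loading is why opening one top-level VI can pull a surprising amount into memory. A toy model of it (hypothetical file names and sizes — nothing here reflects real LabVIEW figures) is just a graph traversal that visits every dependency once:

```python
# Toy model of "loading a VI loads its whole dependency tree".
# File names and sizes are made up purely for illustration.
deps = {
    "Main.vi":        ["Config.lvclass", "Log.vi"],
    "Config.lvclass": ["Get.vi", "Set.vi"],
    "Log.vi":         ["Format.vi"],
    "Get.vi": [], "Set.vi": [], "Format.vi": [],
}
size_kb = {"Main.vi": 120, "Config.lvclass": 40, "Log.vi": 30,
           "Get.vi": 15, "Set.vi": 15, "Format.vi": 10}

def loaded_set(top):
    # Depth-first walk: every dependency is loaded exactly once.
    seen, stack = set(), [top]
    while stack:
        item = stack.pop()
        if item not in seen:
            seen.add(item)
            stack.extend(deps[item])
    return seen

loaded = loaded_set("Main.vi")
print(len(loaded))                        # 6 files end up in memory
print(sum(size_kb[f] for f in loaded))    # 230 KB, not just Main.vi's 120
```

With classes, each accessor VI and each .lvclass management file is another node in this graph, which is one reason a "dummy VI that just contains the classes" can already cost so much on load.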
Memory consumption also depends on VI settings such as "reentrant execution".
Additionally, for all lvlib-based management structures (the lvlib itself, lvclass, xctrl, ...), the VI loads the corresponding management file. This can lead to loading additional files on top of the VIs (or at least a presence check, which still requires a small amount of memory!).
That being said, a cluster is "way smaller" than a class when viewing memory consumption. What a reasonable ratio is, I do not know; I even doubt there is a good estimate around, as it depends on so many project-specific things...
However, as already indicated, memory footprint is definitely not the focus when going for OOP. So, as long as performance (run speed) and flexibility meet or exceed expectations while keeping code complexity as low as possible, OOP is a very good tool to get things done.
Thanks for further replies.
I think my first job is going to be picking apart the class structures and accompanying code to see what can be done.
As you say Mike, large data sets present problems that can often be safely overlooked for small data sets and I think this is probably where things have gone awry. Thanks for the link to your blog!
I do think there should probably be more emphasis on memory usage and efficiency in LabVIEW documentation/examples/training etc., as these are often treated as 'advanced topics' when really the principles are pretty fundamental.
Great to have some advice from the experts!
"...there should probably be more emphasis on memory usage and efficiency in Labview documentation/examples/training etc"
You may find this tag cloud useful. It is a collection of threads discussing performance in general, which often comes down to managing memory, CPU and threads.
Personally, I think that relegating memory management to being an "advanced" topic is the last vestige of the NI marketing meme: "...look at all you can do, and no programming required..."
PS: in my office somewhere I still have some old marketing literature containing that exact quote!
There is a difference between interpolation and extrapolation.
"Pay no attention the man behind the curtain."