LabVIEW


Differences between VI Properties data memory and profiled memory

I am interested in observing the memory usage of a SubVI. 

 

I was wondering if someone could point me in the right direction as to why the data memory displayed in:

File, VI Properties, Memory Usage

is different from that measured by the profiling tool:

Tools, Profile, Performance and Memory

 

I am profiling a top-level VI and then, once it has completed, opening the SubVI and viewing its properties.

 

The data memory measured through profiling is always larger.

 

This discrepancy also occurs with the top-level VI. For example, Properties states the top-level VI's data memory is 317 kB, but profiling reports it as 489 kB (where min = max).

 

The documentation that I have found (http://zone.ni.com/reference/en-XX/help/371361K-01/lvdialog/profile/ and http://zone.ni.com/reference/en-XX/help/371361J-01/lvdialog/memory_usage_properties/) suggests that both values should be the data space of the VI. Is there extra memory overhead generated by using the profiler? Or is one simply not measured to the same accuracy?


Cheers

 

Message 1 of 5

The Profiler injects extra code into the VIs it is testing to gain access to data values and timing. As a result, the measured values differ from the VI's behavior without it.

 

I find the Profiler useful for relative measurements of time or memory compared to other VIs and subVIs. If I am trying to speed up a program or reduce its memory footprint, the Profiler is helpful in identifying the subVIs where the biggest bottlenecks occur.

 

Making accurate and reproducible measurements of a VI's performance requires both science and art. The compiler is very good at optimizing some parts of the code (constant folding is a simple example), so a block diagram may look quite complicated yet compile down to a constant. Similarly, debugging code can be removed to increase speed; this can be a significant improvement in high-iteration-count loops.
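Constant folding has direct analogues in text-based languages, which makes it easy to demonstrate outside LabVIEW. A minimal sketch in Python (used here only as an analogy, since a LabVIEW diagram can't be pasted as text): the bytecode compiler folds an all-constant expression into a single value at compile time, so no arithmetic happens when the code actually runs.

```python
# A complicated-looking expression made entirely of constants...
code = compile("2 * 3 * 60 + 10", "<example>", "eval")

# ...is folded into the single value 370 at compile time, so no
# multiplication or addition happens at run time. The folded result
# appears in the code object's constants table.
print(370 in code.co_consts)  # → True
```

The same idea applies to a LabVIEW diagram whose inputs are all constants: the compiled code may simply return a precomputed value, which is one reason measured performance can differ sharply from what the diagram's visual complexity suggests.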

 

Lynn

Message 2 of 5

Thanks for the reply,

 

Though if I were to run a VI only once, with the profiler active, shouldn't the properties data also be generated while the extra code is present? So would it not be reasonable to expect the two figures to be the same?

 

Ultimately I'll probably go with the profiler data, since it's a lot more convenient to gather, but I just wanted to be confident about the discrepancy. 🙂

Message 3 of 5

Perhaps someone from NI with knowledge of the internal processes can answer those questions.

 

Lynn

Message 4 of 5

Hi eee14,

 

The difference comes from the fact that the VI Properties > Memory Usage figures show static memory usage, i.e., the size your VI takes on disk. This static size is broken down into front panel objects, block diagram objects, code, and data. As the name suggests, this is the memory needed to store all elements of the VI: the objects on the front panel and block diagram, as well as the compiled code. It does not account for any dynamically allocated arrays that the VI might use during execution.
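To make the static/dynamic distinction concrete outside LabVIEW's graphical environment, here is a rough analogy in Python (the function name and sizes are illustrative only, not LabVIEW figures): a compiled code object has a fixed size before any call, while the data the function builds exists only once it executes.

```python
import sys

def accumulate(n):
    # Runtime data: this list is allocated only when the function executes.
    return list(range(n))

# "Static" footprint: the size of the compiled code object, fixed before
# the function is ever called (analogous to the VI Properties figure).
static_size = sys.getsizeof(accumulate.__code__)

# "Dynamic" footprint: data created during execution, invisible to any
# static inspection (analogous to what the profiler sees on top).
dynamic_size = sys.getsizeof(accumulate(10_000))

print(static_size, dynamic_size)
```

The dynamic figure here dwarfs the static one, and no amount of inspecting the code object in advance would have revealed it.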

 

On the other hand, the VI Profiler catalogues the dynamic memory usage of the VI. Let's say your VI continuously appends elements to the end of an array until a stop button is pressed. The array grows bigger and bigger and more memory is taken by the VI, but that allocation happens at runtime: there is no way of knowing beforehand when the loop will be stopped, and therefore how much memory needs to be allocated for the array. The VI Profiler shows all the memory taken by the VI, i.e. the static portion plus the arrays and other data used at runtime, which is why it always reports a larger figure than VI Properties.
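The append-until-stop pattern described above can be sketched in Python (again only an analogy; the fixed loop bound stands in for the stop button, which in a real program would make the final size unknowable at compile time): the container's memory footprint grows as the loop runs.

```python
import sys

# Sketch of the append-until-stop pattern: the container's footprint
# grows at run time, element by element.
data = []
sizes = []
for i in range(1000):          # stands in for "until stop is pressed"
    data.append(i)
    sizes.append(sys.getsizeof(data))

# The container's reported size at the end is larger than at the start;
# only a runtime tool (like the profiler) can observe this growth.
print(sizes[0], sizes[-1])
```

A static inspection of the code sees only an empty list; the growth is purely a runtime phenomenon, which is exactly the part the profiler captures and VI Properties does not.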

 

In conclusion, if you are looking to improve the runtime performance of the VI, the VI Profiler is the way to go. I hope this clears the matter up a bit.

Message 5 of 5