
Real-Time Measurement and Control


RT memory management: strings & Arrays

Hi all,

 

I am in the middle of writing my first RT control and DAQ system and have suddenly become very worried about memory management!

 

I have avoided using the Build Array function within loops. However, as can be seen in the attached image, I have used it to convert data to an array in order to write to a TDMS file.

Then I also realised I have used strings in my 'event' data (user interactions, e.g. changing set points, which I want written to file). These strings are predefined constants describing the event.

I also use queues rather than RT FIFOs to pass data (flattened to variant in some cases) between non-Time-Critical loops.

 

The queues are fixed length, which I believe pre-allocates memory on an RT OS, but what happens with variable-length data types such as variants, strings, or arrays?

 

I decided to use queues rather than RT FIFOs so I could pass more complex data clusters around... Have I dug myself into a big hole here?

 

Many thanks,

Steve.

 

 

Message 1 of 22

Hi St3ve,

 

What exactly is the problem? Has the RT target run out of memory? Did you try it without the TDMS nodes, and what was the result?

 

Thanks.

Message 2 of 22

Hi Steve,

 

The key to programming your control system in LabVIEW Real-Time will be to think carefully about which parts of your code need to execute with critical timing (deterministically) and which parts have looser requirements.

 

I would recommend using one timed loop (which you set to the highest priority in your application) to handle the critical input/process/output cycle. You do need to be careful in this loop, and avoid memory allocations, blocking VIs, etc.

 

Next, use a real-time FIFO inside of this critical loop to deterministically transfer data to less important loops in your application. For example, disk access (e.g. writing to a TDMS file) will not be deterministic, so you do not want to put this code inside your critical control loop. You can use whatever data structures or communication elements make sense for you outside of your critical loop, as long as you think carefully about the implications of non-determinism. Is your code set up to handle a case where writing to disk takes longer than usual?

 

Carefully thinking about these timing issues and writing your code in a robust way is the best way to ensure that your application will be reliable. Using non-deterministic functions (network communication, disk access, etc.) inside of a real-time VI usually can't be avoided, but your code should be set up so that it is OK if these functions encounter a delay.
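
In rough text form, that structure looks something like the C sketch below. This is only an analogy (LabVIEW G has no text equivalent, and none of these names are NI APIs): a fixed-capacity ring buffer is allocated once, the critical loop only pushes into it, and a separate non-critical loop drains it and does the slow disk write.

/* Two-loop pattern, sketched in C (illustrative only). A fixed-capacity
 * ring buffer is allocated once, so the critical loop never touches the
 * heap; all slow, non-deterministic work happens in the logging loop.   */
#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define RING_CAPACITY 256          /* fixed at compile time               */

typedef struct {
    double         samples[RING_CAPACITY];
    _Atomic size_t head;           /* written only by the producer        */
    _Atomic size_t tail;           /* written only by the consumer        */
} sample_ring;

/* Producer side: called from the time-critical loop. Never allocates;
 * if the ring is full the sample is dropped rather than blocking.        */
static int ring_push(sample_ring *r, double value)
{
    size_t head = atomic_load(&r->head);
    size_t next = (head + 1) % RING_CAPACITY;
    if (next == atomic_load(&r->tail))
        return 0;                  /* full: drop, don't wait              */
    r->samples[head] = value;
    atomic_store(&r->head, next);
    return 1;
}

/* Consumer side: called from the non-critical logging loop.              */
static int ring_pop(sample_ring *r, double *value)
{
    size_t tail = atomic_load(&r->tail);
    if (tail == atomic_load(&r->head))
        return 0;                  /* empty                               */
    *value = r->samples[tail];
    atomic_store(&r->tail, (tail + 1) % RING_CAPACITY);
    return 1;
}

int main(void)
{
    static sample_ring ring;       /* allocated once, before any loop     */
    double v;

    /* Critical loop body (real code would read I/O and run the control). */
    for (int i = 0; i < 10; ++i)
        ring_push(&ring, (double)i);

    /* Logging loop body: drain the ring; the slow TDMS/disk write would
     * go here, where its jitter cannot disturb the control loop.         */
    while (ring_pop(&ring, &v))
        printf("log %f\n", v);
    return 0;
}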

 

I realize this is somewhat of a general answer, so please let me know if you have any more specific questions as well. Have a great day Steve!

 

Best Regards,

 

Casey Weltzin

Product Manager, LabVIEW Real-Time

National Instruments 

 

Message 3 of 22

Hi Casey,

 

I have tried to do that. I have written the code using timed loops so I can control the priority levels a bit better than just using one time-critical VI.

 

My question is quite general as well I suppose! I'd like to understand the memory allocation a bit better in RT.

 

For example, I have used queues to pass data between non-deterministic parts of my code. The help documentation for the create queue function says that on RT, memory is pre-allocated for a fixed-length queue. The queue element is a cluster of an enum (identifying the data) and a variant (the data). I know the largest piece of data the queue will ever contain is an array of 200 doubles, so when I created the queue I initialised an array of that size, flattened it to a variant, and set that as the queue data type.

 

I assumed that I could then in fact flatten anything to a variant and put it into that queue, and that as long as it isn't larger than the space allocated for the queue, the memory manager shouldn't be called. Is this the case?
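
To put my assumption into text form, it is roughly the C analogy below (nothing to do with how LabVIEW actually stores variants, and the names are made up): the queue element is a tagged record whose payload area is sized for the worst case, so anything smaller fits without asking for more memory.

/* C analogy of the queue element: an enum tag plus a payload buffer
 * sized for the worst case (an array of 200 doubles). Illustrative
 * only; LabVIEW's internal storage of variants may differ.            */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_PAYLOAD_BYTES (200 * sizeof(double))   /* worst-case size  */

typedef enum { EVT_SETPOINT_CHANGE, EVT_WAVEFORM_BLOCK } event_tag;

typedef struct {
    event_tag tag;                         /* identifies the data       */
    size_t    len;                         /* bytes actually used       */
    uint8_t   payload[MAX_PAYLOAD_BYTES];  /* pre-sized, never grows    */
} queue_element;

/* "Flatten" any blob into the element; fail (rather than grow) if the
 * data is larger than the space reserved up front.                     */
static int element_pack(queue_element *e, event_tag tag,
                        const void *data, size_t len)
{
    if (len > MAX_PAYLOAD_BYTES)
        return 0;                          /* would need a reallocation */
    e->tag = tag;
    e->len = len;
    memcpy(e->payload, data, len);
    return 1;
}

int main(void)
{
    static queue_element e;                /* allocated once            */
    double block[150] = {0};               /* smaller than the maximum  */

    if (element_pack(&e, EVT_WAVEFORM_BLOCK, block, sizeof block))
        printf("packed %zu of %zu bytes\n", e.len,
               (size_t)MAX_PAYLOAD_BYTES);
    return 0;
}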

 

Also, I have often used queues for multiplexing data on Windows (multiple writers, single reader). Now that I'm using RT, I was wondering whether each reader/writer mutexes the queue, and whether the same is true for an RT FIFO?

 

Many thanks for your help,

Steve.

Message 4 of 22

Hi Steve,

 

The short answer to your question is that it is possible to use queues deterministically in LabVIEW Real-Time with careful programming. Real-Time FIFOs are typically recommended for this communication, however, as they are specifically designed for this purpose and don't require as much programming consideration.

 

When you create a LabVIEW queue and specify a maximum number of elements, the full queue is not yet allocated. To preallocate queue elements (a good idea for real-time applications), it is recommended that you enqueue the maximum number of "dummy" elements, and then flush the queue at the beginning of your application.
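
The reason that trick works can be pictured with the small free-list sketch in C below (an analogy only, not how LabVIEW queues are actually implemented): filling the queue once forces all of the element storage to be allocated, and flushing parks that storage for reuse rather than returning it, so later enqueues don't need fresh allocations.

/* Free-list analogy (in C) for the enqueue-dummies-then-flush trick.
 * Not LabVIEW's real implementation; it only illustrates why pre-
 * filling a queue can remove allocations from later enqueues.          */
#include <stdio.h>
#include <stdlib.h>

#define QUEUE_DEPTH 16

typedef struct node {
    double       value;
    struct node *next;
} node;

static node *free_list = NULL;       /* flushed elements parked here    */

/* "Enqueue dummy elements, then flush": allocate every node once, up
 * front, and put them all on the free list.                            */
static void preallocate(void)
{
    for (int i = 0; i < QUEUE_DEPTH; ++i) {
        node *n = malloc(sizeof *n); /* the only place malloc is called */
        if (n == NULL)
            break;
        n->next = free_list;
        free_list = n;
    }
}

/* Later enqueues grab a node from the free list: no heap activity.     */
static node *acquire(double value)
{
    node *n = free_list;
    if (n == NULL)
        return NULL;                 /* "queue full": caller decides    */
    free_list = n->next;
    n->value  = value;
    n->next   = NULL;
    return n;
}

/* Dequeueing returns the node to the free list for reuse.              */
static void release(node *n)
{
    n->next   = free_list;
    free_list = n;
}

int main(void)
{
    preallocate();                   /* before the real loops start     */
    node *n = acquire(42.0);         /* deterministic: no malloc here   */
    if (n != NULL) {
        printf("got %g without calling malloc\n", n->value);
        release(n);
    }
    return 0;
}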

 

After a queue has been preallocated, only references will be passed through the queue for data types that do not have fixed sizes. Therefore, queues will not force copies of data inside of your time-critical loop.

 

In response to your question about blocking, the Enqueue Element function will block if the queue is full (there is also automatic low-level protection so that two writes don't corrupt each other). If your application has a chance of filling the queue, then it is recommended that you use the Lossy Enqueue Element function to avoid blocking in your time critical loop.
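
The behavioral difference can be summed up in a few lines of C (again only an analogy for Enqueue Element versus Lossy Enqueue Element, not the primitives themselves): the blocking flavour waits for space, while the lossy flavour makes space by discarding the oldest element, so the time-critical writer never stalls.

/* Blocking vs. lossy enqueue on a tiny fixed buffer (a C analogy).
 * The lossy version never waits: when the buffer is full it discards
 * the oldest value to make room for the new one.                      */
#include <stdio.h>

#define DEPTH 4

typedef struct {
    double data[DEPTH];
    int    count;       /* number of valid elements, oldest at index 0 */
} tiny_queue;

/* Blocking flavour: reports "would block" instead of actually waiting. */
static int enqueue_blocking(tiny_queue *q, double v)
{
    if (q->count == DEPTH)
        return 0;                        /* caller would stall here     */
    q->data[q->count++] = v;
    return 1;
}

/* Lossy flavour: drop the oldest element, then append the new one.     */
static void enqueue_lossy(tiny_queue *q, double v)
{
    if (q->count == DEPTH) {
        for (int i = 1; i < DEPTH; ++i)  /* shift out the oldest value  */
            q->data[i - 1] = q->data[i];
        q->count--;
    }
    q->data[q->count++] = v;
}

int main(void)
{
    tiny_queue q = {0};
    for (double v = 1.0; v <= 6.0; v += 1.0) {
        if (!enqueue_blocking(&q, v))
            enqueue_lossy(&q, v);        /* never block the writer      */
    }
    for (int i = 0; i < q.count; ++i)
        printf("%g ", q.data[i]);        /* prints: 3 4 5 6             */
    printf("\n");
    return 0;
}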

 

To take a look at potential allocations in your LabVIEW applications, I would recommend using the Show Buffer Allocations tool (Tools >> Profile >> Show Buffer Allocations). To go a step further and look for allocations during runtime, you can use the Desktop Execution Trace Toolkit, or the Real-Time Execution Trace Toolkit as well.

 

Please let me know if you have any additional questions, and I am glad to help. Have a great day!

 

Best Regards,

 

Casey Weltzin

Product Manager, LabVIEW Real-Time

National Instruments 

Message 5 of 22

Casey,

 

That's very useful; I've now initialised the queues as you suggested.

 

I have been using the Show Buffer Allocations tool, which is what got me worried in the first place 🙂.

 

I have used variants to pass data to the logging loop so that a few different data types can be handled in the same queue. In the producer loop this requires flattening to a variant, and in the logging loop it requires unflattening from the variant. Both of these conversion operations require buffer allocations, which seem unavoidable.

 

In the producer loops the same size of data is always enqueued, so I was thinking that once the memory has been allocated the same block can always be reused - or is this not true, and will the memory manager in fact be called every iteration to allocate a new buffer, possibly eating all the memory?

 

In the consumer loop (the logging loop) the different data types are handled in different cases of a case structure, where the data is unflattened from the variant. Again, I was thinking that once the buffer was allocated at the Unflatten From Variant in each case, it would be reused - am I wrong again?

 

As always, many thanks!

 

Steve.

Message 6 of 22

Hi Steve,

 

I will attempt to answer your remaining questions here.

 

While we can make educated guesses at what the OS memory manager will do during the execution of any given program, there are a lot of rules that determine its behavior. A compiled LabVIEW VI will reuse memory buffers whenever possible between pieces of data of the same type. If different sized data is used for a variable size data type such as a string or variant (perhaps in different iterations of an application), then the memory manager will need to decide whether or not to allocate additional memory, free up some memory, etc.

 

I would recommend optimizing your applications to minimize data buffers and ensure they are not resized, but not making assumptions about what the memory manager itself will do in any given instance. If you have a lot of spare time in your critical loops and soft real-time behavior is acceptable, then using variable-size data types may be OK. For hard real-time performance, however, I would recommend using only fixed-size data types in your critical loops and not counting on the memory manager to exhibit specific behavior.

 

To be more specific to your use case, if your producer loops are passing pre-created variants into a pre-allocated queue, then the memory manager should not be called. However, if you are doing any conversion from variable size data types to variant on the fly, then the memory manager will be called (and may or may not reuse memory). In your consumer loop, converting from variant to another type means that the same memory location may or may not be used, but the memory manager will need to be called to decide this. However, if your consumer loop is non-critical, then the jitter associated with calling the memory manager may be acceptable. 
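
In C terms, the distinction looks roughly like this (an analogy only; the LabVIEW memory manager follows its own rules): the first routine reuses a single worst-case buffer and never involves the allocator, while the second asks the allocator for a differently sized block on every iteration.

/* Fixed-size reuse vs. per-iteration allocation (C analogy only).      */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SAMPLES 200

/* Deterministic style: one buffer, sized for the worst case, reused.   */
static void fixed_size_iteration(double *scratch, const double *src,
                                 size_t n)
{
    /* n may vary, but never exceeds MAX_SAMPLES, so no allocation.     */
    memcpy(scratch, src, n * sizeof *src);
}

/* Non-deterministic style: the allocator runs every iteration, and
 * whether it reuses the previous block is entirely up to it.           */
static double *variable_size_iteration(const double *src, size_t n)
{
    double *copy = malloc(n * sizeof *src);    /* jitter source         */
    if (copy != NULL)
        memcpy(copy, src, n * sizeof *src);
    return copy;                               /* caller must free()    */
}

int main(void)
{
    static double scratch[MAX_SAMPLES];        /* allocated once        */
    double samples[MAX_SAMPLES] = {0};

    for (size_t n = 50; n <= MAX_SAMPLES; n += 50) {
        fixed_size_iteration(scratch, samples, n);   /* allocator idle  */
        double *copy = variable_size_iteration(samples, n);
        free(copy);                            /* allocator called      */
    }
    printf("done\n");
    return 0;
}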

 

The bottom line is that any time buffers are created or resized, the memory manager will need to be called. Even if memory isn't actually allocated when that happens, it is a non-deterministic operation and should be avoided in a time critical loop. This is a tough topic, and I definitely want to invite others with more insight on the specifics of the memory manager, etc to chime in as well. Thanks so much for asking the tough questions Steve - it is through this kind of exploration that we can all improve our understanding of what is going on behind the scenes.

  

 

Best Regards,

 

Casey Weltzin

Product Manager, LabVIEW Real-Time  

National Instruments 

Message 7 of 22

Hi Casey,

 

Thanks for your help; I hope you had a good Christmas/New Year.

 

It does seem a difficult topic to really get to grips with. I have found it very hard to find examples of RT code showing how to avoid memory allocation etc.

 

I have tried to use the queues deterministically as you suggested, by pre-allocating them. When you said that queues pass variable-length data (arrays) by reference, does that just mean that no memory allocation occurs on a write, but that a copy is made when it is read? Otherwise the queue would just fill up?

 

I have attached an example VI showing how I am dealing with the queues and memory allocation, and I have written a few questions on it about how I have handled the memory allocation.

 

Many thanks,

Steve.

 

 

Message 8 of 22

Steve,

 

I found this thread looking for something else, and I've got a few suggestions for you.

 

First, you have your data buffer on a shift register. I would just use a tunnel on the input side of the loop; this eliminates the wire branch that you have a question about. By using the tunnel, LabVIEW allocates the memory for that array and then reuses that memory location on each loop iteration.

 

Second, you've stated that you are using queues over RT FIFOs because you wish to pass more complicated data types (i.e. a cluster). In the example you've given, the cluster is simple enough that you could represent each element of the cluster as a double, and then use an RT FIFO to pass an array in which specific elements correspond to the individual elements of the cluster. See the attached VI snippet.
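
As a text-only analogy of the same idea (this is C rather than G, and the field names here are invented for illustration): a simple cluster can be packed into a fixed-size array of doubles with one agreed index per field, which gives the RT FIFO a fixed-size element to carry.

/* Packing a simple "cluster" into a fixed array of doubles (C analogy;
 * the field names are invented). The fixed array is what the RT FIFO
 * would carry.                                                          */
#include <stdio.h>

enum {                        /* one well-known index per cluster field  */
    FIELD_TIMESTAMP = 0,
    FIELD_SETPOINT,
    FIELD_MEASUREMENT,
    FIELD_COUNT               /* fixed element size of the RT FIFO       */
};

typedef struct {              /* the original "cluster"                  */
    double timestamp;
    double setpoint;
    double measurement;
} sample_cluster;

static void cluster_to_array(const sample_cluster *c,
                             double out[FIELD_COUNT])
{
    out[FIELD_TIMESTAMP]   = c->timestamp;
    out[FIELD_SETPOINT]    = c->setpoint;
    out[FIELD_MEASUREMENT] = c->measurement;
}

static void array_to_cluster(const double in[FIELD_COUNT],
                             sample_cluster *c)
{
    c->timestamp   = in[FIELD_TIMESTAMP];
    c->setpoint    = in[FIELD_SETPOINT];
    c->measurement = in[FIELD_MEASUREMENT];
}

int main(void)
{
    sample_cluster c = { 1.25, 10.0, 9.8 };
    double wire[FIELD_COUNT];            /* travels through the RT FIFO   */
    sample_cluster back;

    cluster_to_array(&c, wire);          /* writer side of the FIFO       */
    array_to_cluster(wire, &back);       /* reader side of the FIFO       */
    printf("setpoint = %g\n", back.setpoint);
    return 0;
}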

 

Finally, I agree with Casey about buffer allocations. My experience with LabVIEW has been that as long as a memory location does not need to be resized, the memory manager doesn't get involved. Basically, the memory manager is very busy during the first iteration of a loop, but is much less busy (and may not even be needed) on subsequent iterations.

 

Hope this helps,

Dave

Message 9 of 22

Level 13 Cleric casts Resurrect on dead thread!

 

I'm working in LVRT 2011 SP1 and need to sample data from an sbRIO's FPGA that has 4 channels, then push the samples into a queue for consumption elsewhere in the program. That's easy. But now I need to be able to configure how many channels of data I want to "listen to", so the arrays I'm pushing into the queue (of type 2D array) might have 4 columns, or 3, or 2, or only 1. If I obtain the queue with a data type of a 2D array with dimensions 100x4, then later push arrays of size 100x3 into that queue, does a reallocation occur on every Enqueue?

 

Confused yet? I just read back through that, and I am! 🙂 Here's a picture to (hopefully) help:

 

rt_queue_example.png

 

Message 10 of 22