
Determinism in RT application

Hi,

I have an RT application with many parallel processes, and over the years just about every type of communication method has been used at one point or another to transfer data between the various processes.

 

  • Notifiers
  • Queues
  • RT FIFOs (single element and multi element)
  • Globals
  • Single Element Single-process Shared variables with RT FIFO enabled

I understand that Queues and Notifiers are not good for determinism, but most of these use clusters as the data type, so I can't simply swap them for RT FIFOs or Shared variables with RT FIFO enabled.

 

So then, what is better for determinism and processing time: Globals or Notifiers? Are there any other options? Are FGVs any help in RT situations (I've never used one)?

I'd also like to know whether there's any difference between having one global VI with many variables in it and many global VIs with only one variable in each.

 

Sorry for the slightly waffling nature of this post!

Thanks,

 

Andrew

  

Message 1 of 9

I noticed no one has tried to answer this yet, possibly because the "correct", or maybe "best", answer would be "It depends critically on the situation".  It also depends on the "goal", or "what do you mean by Determinism?".

 

Is the goal "minimum latency" (a.k.a. "speed"), "constant latency" (or "reproducibility"), or synchrony?  Are there parallel processes that can run at different priorities and are governed by different timing considerations (for example, a DAQ loop that acquires 1000 samples at 1 kHz, then needs to TCP the data to the Host, something that seems appropriate for a Producer/Consumer design)?

 

The question about Globals is an interesting one, a question that might have a "situation-dependent" answer (i.e. are we talking cRIO, PXI, or PC platform?).  Rather than getting an opinion from one of us, write yourself a little test routine and "do the Experiment" (what I call "being a Scientist").  Actually, if you do this in a thorough manner, you can write an article or make a presentation at NI Week the way Crossrulz did a few years ago and "educate the Community".
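(Not LabVIEW, but as a rough text-based stand-in for the kind of little test routine Bob is describing, here is a minimal Python sketch; the two access mechanisms, the iteration count, and all the names are invented purely for illustration.)

# Rough sketch (Python stand-in for a small LabVIEW benchmark VI): time many
# reads of a shared value through two mechanisms and compare worst-case latency.
import queue
import threading
import time

ITERATIONS = 100_000

# Mechanism 1: a module-level value guarded by a lock (global-like access).
shared_value = 0.0
lock = threading.Lock()

def read_global():
    with lock:
        return shared_value

# Mechanism 2: a single-element queue that always holds the latest value.
latest = queue.Queue(maxsize=1)
latest.put(0.0)

def read_queue():
    value = latest.get()      # take the element out...
    latest.put(value)         # ...and put it straight back
    return value

def benchmark(read_fn, label):
    worst = 0.0
    total = 0.0
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        read_fn()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
        total += elapsed
    print(f"{label}: mean {total / ITERATIONS * 1e6:.2f} us, worst {worst * 1e6:.2f} us")

benchmark(read_global, "lock-protected global")
benchmark(read_queue, "single-element queue")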

 

Bob Schor

Message 2 of 9

Let me try to hit as many of these questions as I can.

 

Is it better to have one global VI or many?  I've never seen anyone use multiple global VIs.  But why would you want this?  Creating a VI for each variable would start to litter your project rather quickly.  If you have so many variables that you're having a hard time maintaining your global VI, it might be a good time to question your design decisions.

 

With respect to determinism, Notifiers will have the same problems you run into with Queues.  Queues aren't very deterministic because they'll block operation when they're empty.  If it takes longer than the time you have to run the loop, you'll miss your timing and break your determinism.  The same is true for a Notifier.  The difference between Notifiers and Queues: a Notifier only holds a single element at a time, and when a reader retrieves that element it isn't removed, so other processes can also retrieve it.
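(As a rough illustration of that blocking behaviour -- a Python stand-in, not LabVIEW; the 1 ms budget and the names are invented -- a loop with a fixed time budget blows its deadline the moment it does a blocking dequeue on an empty queue.)

# Sketch: a loop with a 1 ms budget misses its deadline as soon as it does a
# blocking read on an empty queue (Python stand-in; the numbers are illustrative).
import queue
import time

PERIOD_S = 0.001          # 1 ms loop budget
data_q = queue.Queue()    # empty: nothing has been produced yet

deadline = time.perf_counter() + PERIOD_S
try:
    # Blocking get: waits until data arrives or the timeout expires.
    sample = data_q.get(timeout=0.1)
except queue.Empty:
    sample = None

late_by = time.perf_counter() - deadline
print(f"Missed the 1 ms deadline by roughly {late_by * 1000:.1f} ms")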

 

While a lot of what Bob posted comes from not working with RT all that much, he did put in a question that really affects the answer: "It depends."  If you can guarantee you're filling and emptying the queue at the same cadence, you do a bit better on determinism.  It's still not great.  But it's something you can test out.  Globals aren't terrible.  But they're also not one-to-one.  If you need one-to-one behavior, you're looking at a bad option.

 

You've mentioned you run into issues because your data is in clusters.  Why not fix that?  There are a variety of ways to change this:

1) unbundle the pieces you care about and send them separately

2) cast to a type you can send (string/variant) and cast back when you get the data (the sketch after this list shows the general idea)

3) acquire data in a deterministic loop and process it in a non-deterministic loop (application dependent)
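(A rough text-based sketch of option 2 -- Python standing in for "flatten the cluster, send it, and unflatten it on the other side"; the field layout is invented for illustration.)

# Sketch of option 2: flatten a "cluster" of fixed-size fields to bytes before
# sending, and unflatten on the receiving side (Python stand-in; layout invented).
import struct

# A "cluster" of a u32 counter, a double, and a boolean.
def flatten(counter, value, valid):
    return struct.pack("<Id?", counter, value, valid)

def unflatten(payload):
    return struct.unpack("<Id?", payload)

packed = flatten(42, 3.14, True)           # this is what goes on the FIFO/queue
counter, value, valid = unflatten(packed)  # and this is the receiving side
print(counter, value, valid)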

Message 3 of 9

@natasftw wrote:

Queues aren't very deterministic because they'll block operation when they're empty.  If it takes longer than the time you have to run the loop, you'll miss your timing and break your determinism.  

 

 


So I have a question about Queues blocking operations when they are empty.  I'm thinking about a Producer/Consumer pattern, where the Producer is the DAQ "Acquire" loop with continuous sampling, and you need to get all of the processing in that loop "done" before the next set of data comes in, so you "export" the data using one of several mechanisms (RT FIFOs being the fastest and "most deterministic", though in a number of discussions the FIFO receiver loop simply enqueues the data so that it can be moved "elsewhere" safely).

 

I'm intrigued by your statement that queues aren't deterministic because they block if empty (which is definitely true for the receiver doing the dequeue -- the sender will also block if the Queue fills up, and I can certainly understand that as an issue).  But so what?  If nothing is coming in, who cares if the loop that "would process the data if there were any data to process" is blocked?  Of course, one answer is "what if you want to exit, but the Consumer is waiting on the Producer, so it gets hung and you can't exit".  Two solutions -- use a Timeout on the Dequeue and, if it occurs, check an "Exit?" signal, or (better) use a Sentinel whereby the Producer enqueues "This is your signal to exit" (such as enqueuing an empty array).  Note that Asynchronous Stream Channel Wires have a "Last Element?" input that, together with "Element Valid?", forms a Sentinel mechanism.
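(A minimal sketch of those two shutdown options -- the dequeue Timeout plus an "Exit?" check, and the Sentinel -- written as a Python stand-in; all the names are invented for illustration.)

# Sketch of the two shutdown options described above (Python stand-in):
# option 1 is a dequeue timeout plus an exit flag, option 2 is a sentinel element.
import queue
import threading

data_q = queue.Queue()
SENTINEL = None                     # the Producer enqueues this to say "no more data"
exit_requested = threading.Event()  # the "Exit?" signal for the timeout path

def process(item):
    print("processing", item)       # placeholder for the real Consumer work

def consumer():
    while True:
        try:
            item = data_q.get(timeout=0.5)   # option 1: timeout, then check the flag
        except queue.Empty:
            if exit_requested.is_set():
                break
            continue
        if item is SENTINEL:                 # option 2: the sentinel ends the loop
            break
        process(item)

worker = threading.Thread(target=consumer)
worker.start()
for i in range(3):
    data_q.put(i)
data_q.put(SENTINEL)                # tell the Consumer it can exit
worker.join()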

 

My recollection from the RT Project I finished a few years ago was that the RT FIFO was a special "Queue-like" method for transferring data to another loop.  As I recall, the major "lack of determinism" in Queues lay in allocating memory, which can be solved by pre-allocating the Queue.  It is also likely that the Queue is a bit "more general" and the RT FIFO can transfer slightly faster, implying testing may be a Good Idea.  [I did use the three-loop "Timed-loop to Transfer Loop via RT FIFO, Transfer Loop to Processing Loop via Queue, Processing Loop does the work, including sending from RT Target to Host via Network Streams", so it's really a 4-loop transfer, but who's counting ...?]
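(A rough sketch of the pre-allocation idea, as a Python stand-in -- a real RT FIFO is implemented very differently -- where every slot is allocated once up front so the transfer itself never allocates.)

# Sketch of the pre-allocation idea (Python stand-in): allocate every slot once,
# before the time-critical loop starts, then only copy into existing slots.
class FixedRing:
    def __init__(self, capacity, element_size):
        # All memory is allocated here, up front.
        self.slots = [bytearray(element_size) for _ in range(capacity)]
        self.capacity = capacity
        self.head = 0          # next slot to write
        self.tail = 0          # next slot to read
        self.count = 0

    def write(self, payload):
        if self.count == self.capacity:
            return False       # full: the caller decides whether to drop or wait
        self.slots[self.head][:len(payload)] = payload   # copy into an existing slot
        self.head = (self.head + 1) % self.capacity
        self.count += 1
        return True

    def read(self):
        if self.count == 0:
            return None        # empty: no blocking, the caller keeps its loop timing
        payload = bytes(self.slots[self.tail])
        self.tail = (self.tail + 1) % self.capacity
        self.count -= 1
        return payload

ring = FixedRing(capacity=128, element_size=64)
ring.write(b"sample 0")
print(ring.read())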

 

Bob Schor

Message 4 of 9

@Bob_Schor wrote:

I'm intrigued by your statement that queues aren't deterministic because they block if empty (which is definitely true for the receiver doing the dequeue -- the sender will also block if the Queue fills up, and I can certainly understand that as an issue).  But so what?  If nothing is coming in, who cares if the loop that "would process the data if there were any data to process" is blocked? 

 

As I recall, the major "lack of determinism" in Queues lay in allocating memory, which can be solved by pre-allocating the Queue.


The memory allocation is certainly another concern.  That said, one of two things tends to be true here:

1) It's a concern early and goes away

2) Your queue continues to grow, and then you've got bigger problems than the queue itself that need to be addressed

 

The "So what?" comes into question if you care about determinism.  If you want that handled in a deterministic loop, for whatever reason, you can't have a blocking operation.  Determinism is simply the guarantee that a loop will operate within a set period.  If you block until you get data, you won't finish the loop in that period and you break the determinism.  Could you get around that with the ideas you mentioned?  Sure.  But, let's take a look at what we're requiring so far just to get the queue into a working situation for a deterministic loop: 
1) pre-allocate the queue to the maximum size

2) add error handling and a timeout to ensure it quits

3) add error handling to whatever is using the data you were guaranteeing would finish on time

4) work with the acquisition rate to ensure you're not overfilling your queue (pulled together in the rough sketch below)
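(Pulled together, that checklist looks roughly like this -- a Python stand-in, with the period, queue size, and handling policy all invented for illustration.)

# Rough sketch of the checklist above in one place (Python stand-in).
import queue
import time

PERIOD_S = 0.001
data_q = queue.Queue(maxsize=1000)   # 1) bounded, pre-sized for the worst case

def timed_iteration():
    try:
        item = data_q.get_nowait()   # 2) never block inside the timed loop
    except queue.Empty:
        item = None                  # 3) downstream code must tolerate "no data"
    # ... the deterministic work with `item` goes here ...
    return item

next_wakeup = time.perf_counter()
for _ in range(10):
    timed_iteration()
    next_wakeup += PERIOD_S          # 4) the producer rate must match this consumption
    sleep_for = next_wakeup - time.perf_counter()
    if sleep_for > 0:
        time.sleep(sleep_for)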

 

This gets very complicated in a hurry.  It's not ideal.

 

The real question comes down to this: do we NEED the "consumer" loop to be deterministic?  If not, we can run some things in a real-time application outside of a deterministic loop.  That's the beauty of having Timed Loops and While Loops in the same VI for RT.  One holds your deterministic task.  The other handles the things where we have a bit of luxury with respect to time (for example, sending data to the Host over TCP, which is non-deterministic).
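(A rough sketch of that split, as a Python stand-in -- a real Timed Loop gives far stronger guarantees than this, and the names and 1 ms period are invented: one fixed-period loop does the timing-critical work and hands its results to a background loop that takes care of the slow, non-deterministic sending.)

# Sketch of the Timed Loop / While Loop split (Python stand-in).
import queue
import threading
import time

PERIOD_S = 0.001
handoff = queue.Queue(maxsize=1000)   # the bridge between the two loops
done = threading.Event()

def deterministic_loop():
    next_wakeup = time.perf_counter()
    for i in range(100):
        result = i * 2                      # placeholder for the timing-critical work
        try:
            handoff.put_nowait(result)      # never block inside the critical loop
        except queue.Full:
            pass                            # drop (or count) rather than stall
        next_wakeup += PERIOD_S
        time.sleep(max(0.0, next_wakeup - time.perf_counter()))
    done.set()

def background_loop():
    while not (done.is_set() and handoff.empty()):
        try:
            result = handoff.get(timeout=0.1)
        except queue.Empty:
            continue
        # The slow, non-deterministic work (e.g. TCP to the Host) would go here.

threads = [threading.Thread(target=deterministic_loop),
           threading.Thread(target=background_loop)]
for t in threads:
    t.start()
for t in threads:
    t.join()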

Message 5 of 9

Thanks.  I do, in fact, use such tri-level logic in my RT DAQ loop -- the DAQ code is in a Timed-Loop, the data is "exported" using an RT-FIFO, the FIFO is read in a While loop, put on a Producer Queue, dequeued in a Consumer loop where (among other things) it is put on a Network Stream to the Host for display and streaming to disk.  Works like a charm.  So I may have been "doing it correctly" even if I didn't necessarily understand all of the ramifications (though I had read enough to know that RT FIFOs were the preferred Export route from Timed Loops ...)

 

Bob Schor

Message 6 of 9

Thanks for the replies guys.

The particular application I'm running is for hardware-in-the-loop simulation. I acquire analogue data at 1 kHz in one timed loop, use this data in a simulation timed loop which also runs at 1 kHz, and have a TCP data while loop which sends the data to the UI, where it is logged/displayed.

The simulation loop uses the analogue data along with a whole bunch of configuration data to decide what the outputs (both physical voltage-style outputs and internal simulation variables) should be. It sends outputs both to physical AO/DO (via a notifier) and to the TCP data loop.

There are also various other processes going on including CAN & UDP communications and messaging between UI and RT in both directions.

The problem I have is that the simulation timed loop, which usually (~90% of the time) completes its work in 200 µs, sometimes takes ~2 ms, and I'm not sure why.

Message 7 of 9

Your two most critical loops are the 1 kHz AI Timed Loop and the 1 kHz AO Timed Loop, which need to run "in synch" (with the AI acting as Producer and the AO acting as Consumer).  I'd use an RT FIFO to send the AI data to the AO Loop, as it is the "most deterministic" method (least jitter).  In the AO Loop, as soon as the data comes off the RT FIFO, I'd put it on a pre-allocated finite-length Queue to "export" it to a second Consumer Loop that gets the data ready to send to the UI (make sure the Queue has a reasonable length, maybe 100 points, just for safety).  When I did this, I didn't want to send single data points to the UI, so the "Send-to-UI" scheme was also a Producer/Consumer -- single data points were accumulated in an array of size 50, and when the Array was full, it was put on (yet another) Queue to the Consumer that Network-Streamed the 50 points to the UI.

 

In my case, my loops were also running at 1 kHz, which meant that the UI got batches of 50 points 20 times per second.  On the UI side, I streamed all the points to disk, but also plotted the mean of each batch on a Chart, making the Chart update at 20 points/second, a very comfortable speed for viewing the incoming data.
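(A rough sketch of the accumulate-and-forward stage, as a Python stand-in -- the batch size of 50 comes from the description above; everything else is invented for illustration.)

# Sketch of the accumulate-and-forward stage (Python stand-in).
import queue

BATCH_SIZE = 50
single_points = queue.Queue(maxsize=256)   # finite, pre-sized queue fed by the AO loop
batches_to_ui = queue.Queue()              # feeds the loop that Network-Streams to the UI

for i in range(200):                       # stand-in for points arriving at 1 kHz
    single_points.put(i)

batch = []
while not single_points.empty():
    batch.append(single_points.get())
    if len(batch) == BATCH_SIZE:
        batches_to_ui.put(batch)           # at 1 kHz this happens 20 times per second
        batch = []

print(batches_to_ui.qsize(), "batches of", BATCH_SIZE, "points ready for the UI")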

 

Bob Schor

Message 8 of 9

@toddy wrote:

Thanks for the replies guys.

The particular application I'm running is for hardware-in-the-loop simulation. I acquire analogue data at 1 kHz in one timed loop, use this data in a simulation timed loop which also runs at 1 kHz, and have a TCP data while loop which sends the data to the UI, where it is logged/displayed.

The simulation loop uses the analogue data along with a whole bunch of configuration data to decide what the outputs (both physical voltage-style outputs and internal simulation variables) should be. It sends outputs both to physical AO/DO (via a notifier) and to the TCP data loop.

There are also various other processes going on including CAN & UDP communications and messaging between UI and RT in both directions.

The problem I have is that the simulation timed loop, which usually (~90% of the time) completes its work in 200 µs, sometimes takes ~2 ms, and I'm not sure why.


Please show us the code in the simulation timed loop.

 

From the limited symptoms you have shared it could be...

 

Memory allocations invoked in the loop

Loop not getting CPU time

or many other things

 

The RT Execution Trace Toolkit will let you see what is happening and why the code seems to be going to sleep or is being delayed.

 

Too many other things to mention while still just guessing.

 

Show us some code so that we can help you without a thousand questions.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion. Knight of NI and Prepper.
Message 9 of 9