LabVIEW

Producer Consumer Parallelism Failing!

I'm controlling multiple automation stations with a single cRIO-9074. The stations all follow the same logic but should be controlled independently. I'd like to accept all digital inputs (from all stations) in a producer loop, queue them up, and then consume the data in 3 consumer loops that actually perform the logic (take a digital input and decide what to output at each station).

 

The actual logic is functional, but the three consumer loops output their data with a time delay relative to one another.

 

So, for example, if I want my first step to be "Event 1", once I start the program the state machines decide that:

 

Station 1: Event 1

Station 2: Event 1

Station 3: Event 1

 

And when a decision is made to switch to Event 2, the 3 stations take turns switching (the action is not simultaneous), so you may get:

 

Station 1: Event 2

Station 2: Event 1

Station 3: Event 1

 

and a moment later,

 

Station 1: Event 2

Station 2: Event 2

Station 3: Event 1

 

etc...

 

Message 1 of 8

Queues are meant for a one-to-one or many-to-one relationship.  Here you have one-to-many, which is wrong.  When something is sent, only one of the three loops will get the message, depending on which is ready to grab the message first.

 

You should have a different queue for each consumer loop.
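Since LabVIEW is graphical, here is a rough text-language sketch of the same behavior (Python, purely illustrative; the station numbering and prints are made up): with one queue and three consumer threads, each element is dequeued by exactly one consumer, so the data gets divided up rather than broadcast.

```python
# Illustrative Python sketch (not LabVIEW): one queue feeding three consumers.
# Each element is dequeued by exactly ONE consumer, so the work is divided
# among the loops instead of being broadcast to all of them.
import queue
import threading

q = queue.Queue()

def consumer(station_id):
    while True:
        item = q.get()              # only one consumer receives each item
        if item is None:            # sentinel used here to stop the loop
            break
        print(f"Station {station_id} got {item}")

threads = [threading.Thread(target=consumer, args=(i,)) for i in range(1, 4)]
for t in threads:
    t.start()

for event in ["Event 1", "Event 2", "Event 3"]:
    q.put(event)                    # each event reaches just one of the loops

for _ in threads:
    q.put(None)
for t in threads:
    t.join()
```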

Message 2 of 8

@RavensFan wrote:

Queues are meant for a one-to-one or many-to-one relationship.  Here you have one-to-many, which is wrong.  When something is sent, only one of the three loops will get the message, depending on which is ready to grab the message first.

 

You should have a different queue for each consumer loop.


Or a User Event, and have all the consumer loops register for the single event so they can all act on it when one event is generated.

 

Attached is a quick example of what I was thinking.  You can make this more complicated and wrap some functionality into subVIs to help scale it for larger applications.  User Events do work in LabVIEW Real-Time, but I believe you would need some kind of FIFO-to-User-Event bridge to go from a host PC to the RT target to have work be done.
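The attachment itself is a VI, so here is only a loose Python analogy of the User Event idea (the names and helper functions below are my own, not a LabVIEW API): each consumer loop registers and gets its own delivery queue, and generating one event pushes a copy into every registration so all loops act on it.

```python
# Illustrative Python analogy (not LabVIEW) of a User Event with multiple
# registered consumers: generating one event delivers a copy to every
# registration, so every station acts on the same event.
import queue
import threading

registrations = []                  # one delivery queue per registered loop

def register_for_event():
    q = queue.Queue()
    registrations.append(q)
    return q

def generate_user_event(event):
    for q in registrations:         # broadcast: every registration gets a copy
        q.put(event)

def consumer(station_id, my_events):
    while True:
        event = my_events.get()
        if event is None:           # sentinel used here to stop the loop
            break
        print(f"Station {station_id} acting on {event}")

threads = []
for i in range(1, 4):
    reg = register_for_event()
    t = threading.Thread(target=consumer, args=(i, reg))
    t.start()
    threads.append(t)

generate_user_event("Event 1")      # all three stations see Event 1
generate_user_event("Event 2")      # then all three switch to Event 2
generate_user_event(None)           # shut all loops down
for t in threads:
    t.join()
```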

Message 3 of 8

@Hooovahh wrote:

@RavensFan wrote:

Queues are meant for a one-to-one or many-to-one relationship.  Here you have one-to-many, which is wrong.  When something is sent, only one of the three loops will get the message, depending on which is ready to grab the message first.

 

You should have a different queue for each consumer loop.


Or a User Event, and have all the consumer loops register for the single event so they can all act on it when one event is generated.


You have to be careful how you do that as well.  You need a separate event registration for each consumer loop.  Search the forums for details on that.
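In the same Python analogy used above (my own sketch, not from the forum posts being referenced): a single registration shared by all the loops behaves like the single shared queue again, while one registration per loop lets a broadcast reach every station.

```python
# Illustrative Python analogy (not LabVIEW): sharing one "registration" among
# all the loops recreates the original problem -- each event is taken by only
# one loop. Each consumer loop should hold its own registration.
import queue

shared_registration = queue.Queue()                          # one queue for all loops (problematic)
per_loop_registrations = [queue.Queue() for _ in range(3)]   # one queue per loop (intended)

# Broadcasting an event only works in the second case:
for reg in per_loop_registrations:
    reg.put("Event 1")
```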

Message 4 of 8

It of course makes sense that if you're going to load data into one queue and then have 3 consumers grabbing from it, each element is grabbed and removed (first in, first out). The other consumers look like they're lagging, but in reality they're just grabbing the next elements in the queue (the left-overs).

 

It does, however, seem a bit wasteful to make several more queues. I wish I could "obtain" a value from a queue with the option of removing that value or keeping it for another "obtain queue" node. That way I could:

 

1. Generate/Fill Queue

2. Obtain queue but not remove it (consumer 1)

3. Obtain queue but not remove it (consumer 2)

4. Obtain queue but remove it (consumer 3)

 

But in a truly parallel process, this obviously doesn't work.

 

I will therefore just create 1 queue per consumer in my producer loop.
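For completeness, the same idea in the Python analogy used above (illustrative only): the producer pushes a copy of each update into every station's own queue, so no consumer ever sees left-overs meant for another station.

```python
# Illustrative Python sketch (not LabVIEW) of "1 queue per consumer": the
# producer enqueues a copy of every update into each station's own queue.
import queue

station_queues = [queue.Queue() for _ in range(3)]

def produce(update):
    for q in station_queues:        # one enqueue per consumer queue
        q.put(update)

produce("Event 1")
produce("Event 2")

# Every station's queue now independently holds Event 1 followed by Event 2.
for i, q in enumerate(station_queues, start=1):
    print(f"Station {i}:", q.get(), "->", q.get())
```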

Message 5 of 8

A notifier would do the job nicely.  I was posting that info when I noticed the OP is using the RT queue.  Looking at it deeper, though, I question whether the RT target needs access to the queues/notifiers.
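A notifier can't be shown as text either, so here is a rough Python analogy of what it does (the class below is my own sketch, not a LabVIEW API): it broadcasts the latest value to every waiting reader, and unlike a queue it is lossy, so a reader that is busy when two values are sent only sees the newest one.

```python
# Illustrative Python analogy (not LabVIEW) of a Notifier: a single "latest
# value" broadcast to every waiting reader. It is lossy: a slow reader only
# ever sees the most recent value, not a backlog of them.
import threading

class Notifier:
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._version = 0

    def send(self, value):
        with self._cond:
            self._value = value
            self._version += 1
            self._cond.notify_all()         # wake every waiting reader

    def wait(self, last_seen_version):
        with self._cond:
            while self._version == last_seen_version:
                self._cond.wait()
            return self._value, self._version

# Each consumer loop would keep its own last_seen_version and call
# notifier.wait(last_seen_version) to block until a new value is sent.
```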


"Should be" isn't "Is" -Jay
0 Kudos
Message 6 of 8
(2,219 Views)

It looks like the problem has been fixed now. In addition to the inappropriate use of queues, it looks like I had internal VIs with "reentrant" turned OFF, which was forcing the consumer loops to take turns calling them.

 

In an application such as this, is there much of a disadvantage to preallocated clone reentrant execution? I'm not sure whether it would serve me better than "shared clone reentrant".
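For anyone else hitting this, a rough Python analogy of the reentrancy part (my own sketch; the timings are arbitrary): a non-reentrant subVI behaves like one shared routine guarded by a lock, so the three station loops take turns inside it, while a reentrant subVI is like each station having its own copy.

```python
# Illustrative Python analogy (not LabVIEW): a non-reentrant subVI acts like a
# single shared routine behind a lock (callers take turns); a reentrant subVI
# acts like an independent copy per caller (callers run in parallel).
import threading
import time

non_reentrant_lock = threading.Lock()

def non_reentrant_step(station_id):
    with non_reentrant_lock:        # only one station inside at a time
        time.sleep(0.1)             # stand-in for the station logic
        print(f"Station {station_id} done (serialized)")

def reentrant_step(station_id):
    time.sleep(0.1)                 # no shared lock, runs alongside the others
    print(f"Station {station_id} done (parallel)")

for step in (non_reentrant_step, reentrant_step):
    threads = [threading.Thread(target=step, args=(i,)) for i in range(1, 4)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{step.__name__}: {time.monotonic() - start:.2f} s total")
```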

Message 7 of 8

ap8888 wrote: In an application such as this, is there much of a disadvantage to preallocated clone reentrant execution? I'm not sure whether it would serve me better than "shared clone reentrant".

Preallocated clones will usually have more memory overhead, while shared clones usually have a little more overhead when being called.  If your processes are truly parallel and you need the speed, go with preallocated clones.  If memory is your limiting factor, then go with shared clones.
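To make the trade-off concrete, a loose Python analogy (my own sketch, not how LabVIEW actually implements clones): preallocated clones are like one dedicated instance per caller created up front, while shared clones are like a pool that each call borrows from and returns to.

```python
# Illustrative Python analogy (not LabVIEW internals): preallocated clones keep
# a dedicated instance per station (more memory, no per-call cost); shared
# clones borrow an instance from a common pool on every call (less memory,
# a little overhead, and a possible wait if the pool is busy).
import queue

class StationLogic:
    def run(self, value):
        return value * 2            # stand-in for the real per-station work

# "Preallocated": one instance per station, created up front.
preallocated = {station: StationLogic() for station in (1, 2, 3)}
print(preallocated[1].run(21))

# "Shared": a small pool; each call borrows an instance and returns it.
pool = queue.Queue()
for _ in range(2):
    pool.put(StationLogic())

def call_shared(value):
    instance = pool.get()           # per-call overhead; may block if pool is empty
    try:
        return instance.run(value)
    finally:
        pool.put(instance)

print(call_shared(21))
```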


Message 8 of 8