LabVIEW


queue efficiency

Solved!
Go to solution
So I'm working on improving the efficiency/speed of my producer-consumer system. I have two producers and one consumer (which, among other things, writes files to disk). Currently both producers feed the one queue, which is then processed by my single consumer. Each queue item is a cluster of multiple data types and contains everything needed to "consume" it. In certain cases, more complex queue items get enqueued; these take longer to process and slow down the consumer.

I was thinking of adding a second consumer loop running in parallel with the first, to take the weight off the single loop, so to speak. My question: would it be more efficient to have both consumers dequeuing from the same queue, or to have each producer feed its own queue? For the sake of the exercise, assume I can guarantee that the complex queue elements will only come from a specific producer.
Message 1 of 26

@Hornless.Rhino wrote:
So I'm working on improving the efficiency/speed of my producer-consumer system. […]

Interesting.  I don't know whether either way is more efficient.  That is assuming the two consumer loops are identical and you don't care which dequeue gets which element.  (I assume you know that whichever dequeue is ready first will get the next element, making that element unavailable to the other dequeue.)
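LabVIEW diagrams can't be pasted as text, so here is a minimal Python sketch of that point, with `queue.Queue` standing in for a LabVIEW queue and two threads standing in for the two consumer loops; the item count and processing are invented:

```python
# Two consumer loops sharing one queue: whichever consumer is waiting
# gets the next item, and that item is then unavailable to the other.
import queue
import threading

work = queue.Queue()
processed = []           # (consumer_id, item) pairs
lock = threading.Lock()  # protect the shared results list

def consumer(consumer_id):
    while True:
        item = work.get()
        if item is None:         # sentinel: shut this consumer down
            break
        with lock:
            processed.append((consumer_id, item))

threads = [threading.Thread(target=consumer, args=(i,)) for i in range(2)]
for t in threads:
    t.start()

for item in range(100):          # the producers would put onto this queue
    work.put(item)

work.put(None)                   # one sentinel per consumer
work.put(None)
for t in threads:
    t.join()

# Every item was dequeued by exactly one of the two consumers.
assert sorted(i for _, i in processed) == list(range(100))
```

Note that while every item is consumed exactly once, the order in which the two consumers *finish* items is not guaranteed, which is the caveat raised below.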

 

Any gurus want to help out?  I'm very curious to hear what everyone has to say...

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 2 of 26

Hi Rhino,

 

dequeuing from a queue in two places is only an option when you don't rely on the queue element order. If order isn't critical in your application, you can dequeue in as many places as you want…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 26

Hi,

 

If it's not too hard, you could set up both scenarios and find out which one works better.

 

Anyway,

  • Have you profiled your code to find where the bottleneck lies? Is it in the processing, or the disk-writing?
  • What is the "production rate" of your producers?

 

Note: If you go down the 2-consumers path, make sure your consumers use reentrant VIs only. Otherwise, your 2 loops will block each other.
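The reentrancy point can be illustrated by analogy in Python: a non-reentrant subVI behaves roughly like a function guarded by a single shared lock, so two consumer loops calling it serialize instead of running in parallel. (The lock here is an analogy, not LabVIEW's actual mechanism, and the sleep times are invented.)

```python
# Analogy: non-reentrant subVI == one shared dataspace == one global lock.
import threading
import time

subvi_lock = threading.Lock()   # stands in for the single shared dataspace

def non_reentrant_subvi():
    with subvi_lock:            # only one caller at a time
        time.sleep(0.05)        # simulated work

def reentrant_subvi():
    time.sleep(0.05)            # each caller effectively gets its own clone

def timed(fn):
    start = time.perf_counter()
    threads = [threading.Thread(target=fn) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

serial_time = timed(non_reentrant_subvi)   # calls serialize (~2x the work)
parallel_time = timed(reentrant_subvi)     # calls overlap
assert serial_time > parallel_time
```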

Certified LabVIEW Developer
Message 4 of 26

Hi Rhino,

 

what about a 2nd consumer which only handles the "hard and heavy" data packets? Your 1st consumer would hand those packets over to your 2nd consumer via its own queue…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 5 of 26
This sounds like a worker pool. The consumers are clones listening to the same queue. You can launch more clones as the load increases, and you can watch the queue size to monitor the load. On a multi-core machine (or multiple PCs) this is a very scalable architecture.
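A Python sketch of that worker-pool idea, with threads as the "clones" and a queue-size threshold triggering new launches; the thresholds and the placeholder processing are invented, not taken from any real system:

```python
# Worker pool: identical consumer clones on one queue, with more clones
# launched when the backlog (queue size) grows past a threshold.
import queue
import threading

work = queue.Queue()
results = queue.Queue()

def clone():
    while True:
        item = work.get()
        if item is None:        # shutdown sentinel
            break
        results.put(item * 2)   # placeholder processing
        work.task_done()

workers = []

def launch_clone():
    t = threading.Thread(target=clone)
    t.start()
    workers.append(t)

launch_clone()                  # start with one consumer
for item in range(1000):
    work.put(item)
    if work.qsize() > 50 and len(workers) < 4:
        launch_clone()          # backlog growing: add another clone

work.join()                     # wait until every item is processed
for _ in workers:
    work.put(None)              # one shutdown sentinel per clone
for t in workers:
    t.join()

assert results.qsize() == 1000
```

Monitoring `qsize()` like this is how the "watch the queue size to monitor load" suggestion translates; in LabVIEW the Get Queue Status primitive plays the same role.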
Message 6 of 26

@Hornless.Rhino wrote:
So I'm working on improving the efficiency/speed of my producer-consumer system. […]

From a performance perspective, the Queue primitives won't care if you create two queues or use one queue in two consumers. To have multiple slaves (consumers) processing from one queue is perfectly acceptable, so long as the order of consumption is not important.

 

 

If you create management code to hand off the 'complex' jobs to a dedicated secondary consumer then you are not taking full advantage of the multiple-slave framework. It all depends on the ratio of simple-to-complex jobs, and also the time it takes to complete the jobs. For example, if the complex jobs come in once every 10,000 simple jobs, then a dedicated second consumer for the complex jobs will be largely idle and therefore under-utilised. If however, the ratio is more like 1 complex job for every two or three simple jobs, then you could find the primary consumer is largely under-utilised.

 

The best balance is to allow both consumers to dequeue all job types and therefore both be working at maximum capacity.
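That ratio argument can be checked with a toy calculation. Below, a Python sketch compares the finishing time of a dedicated split (one consumer takes only complex jobs) against two consumers greedily sharing one queue; the job costs and the 1-complex-per-3-simple ratio are invented numbers for illustration:

```python
# Toy comparison: dedicated complex/simple consumers vs. one shared queue.
import heapq

SIMPLE, COMPLEX = 1.0, 10.0

def makespan_shared(jobs, n=2):
    # Shared queue: the next job goes to whichever consumer frees up first.
    free = [0.0] * n
    heapq.heapify(free)
    for cost in jobs:
        t = heapq.heappop(free)
        heapq.heappush(free, t + cost)
    return max(free)

def makespan_dedicated(jobs):
    # One consumer takes only complex jobs, the other only simple ones.
    return max(sum(c for c in jobs if c == COMPLEX),
               sum(c for c in jobs if c == SIMPLE))

# 1 complex job per 3 simple jobs, 40 jobs total
jobs = [COMPLEX if i % 4 == 0 else SIMPLE for i in range(40)]
print(makespan_dedicated(jobs))  # 100.0: the complex consumer is the bottleneck
print(makespan_shared(jobs))     # 65.0:  total work (130) split evenly
```

With these made-up numbers the shared queue finishes in 65 time units against 100 for the dedicated split, matching the argument that letting both consumers take all job types keeps both at maximum capacity.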

Thoric (CLA, CLED, CTD and LabVIEW Champion)


Message 7 of 26
Solution
Accepted by topic author Hornless.Rhino

It is hard to give a good answer without really knowing what is happening in your consumer loop.  Does order matter?  Can some of these processes actually be done in parallel?

 

My first suggestion would be to make another loop (and corresponding queue) just for writing to disk.  Since that tends to be a "slow" process anyway, it should help relieve your consumer loop at least somewhat.  But, again, I do not know what your "complex" commands require of you.
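A Python sketch of that pipeline: the consumer processes items and hands the results to a dedicated disk-writer loop over a second queue, so slow file I/O never blocks the dequeue of new work. (The file write is simulated with an in-memory list, and the record format is invented.)

```python
# Consumer loop -> writer queue -> dedicated disk-writer loop.
import queue
import threading

work = queue.Queue()
to_disk = queue.Queue()
written = []

def consumer():
    while True:
        item = work.get()
        if item is None:
            to_disk.put(None)            # propagate shutdown to the writer
            break
        to_disk.put(f"record {item}")    # processing result -> writer queue

def disk_writer():
    while True:
        line = to_disk.get()
        if line is None:
            break
        written.append(line)             # stand-in for writing to a file

threads = [threading.Thread(target=consumer),
           threading.Thread(target=disk_writer)]
for t in threads:
    t.start()
for i in range(10):
    work.put(i)
work.put(None)
for t in threads:
    t.join()

assert written == [f"record {i}" for i in range(10)]
```

Because both queues are FIFO and each loop is single-threaded, this variant also preserves processing order end to end.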


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 8 of 26

I like Gerd's idea; then the producers only need to worry about one queue.

Another solution is spawning an asynchronous VI to reduce the queue execution time (the work is offloaded to a separate process). It's basically the same idea, but without a 2nd consumer/queue.

As mentioned, which is better is a matter of style and use case.
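A Python sketch of that asynchronous-spawn alternative, with a short-lived thread standing in for a dynamically launched VI; the "heavy"/"light" split and the 1-in-5 ratio are invented:

```python
# Single consumer loop that offloads heavy items to spawned workers,
# so the dequeue loop itself keeps moving.
import queue
import threading

work = queue.Queue()
done = []
lock = threading.Lock()
spawned = []

def heavy_handler(item):
    with lock:
        done.append(("heavy", item))     # simulated long-running processing

def consumer():
    while True:
        kind, item = work.get()
        if kind == "stop":
            break
        if kind == "heavy":
            t = threading.Thread(target=heavy_handler, args=(item,))
            t.start()                    # offload; don't wait for it here
            spawned.append(t)
        else:
            with lock:
                done.append(("light", item))

c = threading.Thread(target=consumer)
c.start()
for i in range(20):
    work.put(("heavy" if i % 5 == 0 else "light", i))
work.put(("stop", None))
c.join()
for t in spawned:
    t.join()

assert len(done) == 20
```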

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 26

If the queue is simply there to make sure you don't lose items as you pass them between producer and consumer, and you don't actually care about processing order, I'll echo Yamaeda's idea of using an asynchronous VI to process the data.  You can just launch them from the consumer loop.  Another possibility would be to launch an asynchronous VI only for the data that takes a long time to process.

 

That and/or moving your writing to disk to another loop will probably give you a good, scalable architecture.

Message 10 of 26