Queues - Enqueue / Dequeue Element Scheduling


@AristosQueue wrote:

CAR 305402 notes the existence of a queue starvation issue when you have multiple concurrent writers.


Hi there,

 

How can I access further information on this? General starvation due to lack of exclusive access is one thing, but this seems to be a problem with my implementation code described above. The example I made in my previous post to attempt to replicate the issue in a simpler form doesn't expose it, but my original code does.

Message 11 of 28

@for(imstuck) wrote:

@AristosQueue wrote:

CAR 305402 notes the existence of a queue starvation issue when you have multiple concurrent writers.


Hm... sounds like AQ put a bug in LabVIEW :D


I was going to say that "this wasn't the first time and it won't be the last", but the queues were the very first feature I wrote for LabVIEW and this bug goes all the way back to LabVIEW 6.1, so it might actually have been the first time.

Message 12 of 28

@tyk007 wrote:
How can I access further information on this? General starvation due to lack of exclusive access is one thing, but this seems to be a problem with my implementation code described above. The example I made in my previous post to attempt to replicate the issue in a simpler form doesn't expose it, but my original code does.

There's not much more information to convey... if you have multiple writers to a queue and a single dequeue, the writers are serviced in the order they put in their requests to write, so there's never starvation. If you have multiple readers from a queue and a single enqueue, the readers are serviced in random order, so it is entirely possible for one reader to go long stretches without getting service as others jump to the front of the line. Based on the rest of the information in this thread, that seemed to be the issue being highlighted. If I've misunderstood something, please ignore my CAR reference as irrelevant.
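AQ's writer-vs-reader distinction can be illustrated outside LabVIEW. Here is a toy Python simulation (no real queue primitives involved, and the reader count, item count, and seed are arbitrary choices) comparing FIFO servicing of blocked readers against random servicing; FIFO bounds how long any reader waits, while random servicing lets one reader go unserviced for long stretches:

```python
import random

def max_wait(service_policy, n_readers=5, n_items=1000, seed=42):
    """Simulate one producer and several blocked readers; return the
    longest run of consecutive items any single reader missed."""
    rng = random.Random(seed)
    waiting = list(range(n_readers))   # all readers blocked on an empty queue
    since_served = [0] * n_readers     # items since each reader was last served
    worst = 0
    for _ in range(n_items):
        if service_policy == "fifo":
            reader = waiting.pop(0)    # longest-waiting reader serviced first
        else:
            reader = waiting.pop(rng.randrange(len(waiting)))  # random reader wins
        for r in waiting:
            since_served[r] += 1
            worst = max(worst, since_served[r])
        since_served[reader] = 0
        waiting.append(reader)         # reader re-blocks on the queue
    return worst

fifo_worst = max_wait("fifo")          # bounded: each reader waits n_readers - 1
random_worst = max_wait("random")      # unbounded in principle
print(fifo_worst, random_worst)
```

With FIFO servicing the worst wait is exactly `n_readers - 1` items; with random servicing it grows much larger, which is the starvation AQ describes.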

0 Kudos
Message 13 of 28

@AristosQueue wrote:
If you have multiple readers from a queue and a single enqueue, the readers are serviced in random order

Really? I would have expected them to work in the order in which they started executing. It would seem to make more sense, precisely to avoid a case like this.

 

In any case, that's not the issue shown in this thread - the problem here is with the enqueue part - it can take longer to execute than its timeout input (basically, as if it had an infinite timeout). See my previous image, which shows this clearly.
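For comparison, this is the timeout contract being described, shown with Python's bounded `queue.Queue` standing in for a single-element LabVIEW queue (an analogy only, not LabVIEW's implementation): a blocking enqueue on a full queue either succeeds or gives up within roughly the timeout.

```python
import queue
import time

q = queue.Queue(maxsize=1)   # analogue of a single-element queue
q.put("lock")                # queue is now full

start = time.monotonic()
try:
    q.put("second", timeout=1.0)   # should give up after ~1 second
    timed_out = False
except queue.Full:
    timed_out = True
elapsed = time.monotonic() - start

print(timed_out, round(elapsed, 2))
```

Here the timeout acts as an upper bound on the blocking time, which is the behavior the LabVIEW Enqueue Element was expected to show.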

 

I actually did create a simple single VI example based on the concept in this thread (a SEQ where you enqueue to lock instead of dequeue), but surprisingly the example behaved differently than what is seen here. In that case it looked like the function wasn't waiting at all and the element in the queue was wrong, but I'm willing to accept that I may have a mistake in that example. I don't have access to it at the moment, but I'll post it tomorrow.


___________________
Try to take over the world!
Message 14 of 28

@tst wrote:

@AristosQueue wrote:
If you have multiple readers from a queue and a single enqueue, the readers are serviced in random order

Really? I would have expected them to work in the order in which they started executing. It would seem to make more sense, precisely to avoid a case like this.

 



I concur with AQ that the behavior would be nondeterministic for multiple readers. They would execute and dequeue data whenever they were ready to run. Over time this would become a random order. At any rate, having multiple readers is not a good idea unless you are deliberately implementing parallel processing on tasks that can take time and you want multiple processing engines. Of course, your processing would need to be self-contained and not rely on the results of previous data sets.
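Mark's "multiple processing engines" pattern can be sketched in Python for illustration (the squaring task is an arbitrary stand-in for any self-contained piece of processing):

```python
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    # Each engine pulls whatever task is available; which worker gets
    # which task is up to the scheduler, so tasks must not depend on
    # one another or on arriving in order.
    while True:
        item = tasks.get()
        if item is None:        # sentinel: shut this worker down
            break
        results.put(item * item)

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)             # one sentinel per worker
for t in threads:
    t.join()

squares = sorted(results.get() for _ in range(10))
print(squares)
```

All ten results arrive, but in nondeterministic order, which is why the sort is needed before comparing.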



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 15 of 28

@tst wrote:

@AristosQueue wrote:
If you have multiple readers from a queue and a single enqueue, the readers are serviced in random order

Really? I would have expected them to work in the order in which they started executing. It would seem to make more sense, precisely to avoid a case like this.


And as a senior software engineer talking to my new hire self, I express the same shocked expectations. And my new hire self looks back at me and asks, "And where were you 13 years ago?" No one caught it until this year.

Message 16 of 28

@tst wrote:
I would have expected them to work in the order in which they started executing. It would seem to make more sense, precisely to avoid a case like this.

Mark, just to make it clear, "them" = the dequeue primitives which start executing when the queue is empty, not the VIs.


___________________
Try to take over the world!
Message 17 of 28

I fully expected that dequeue access to the queue would be effectively random, since this is a lot like other synchronising constructs in other languages (lock, I'm looking at you), so it would potentially lead to starvation and therefore enqueue time-outs from the starved client. But as tst points out, the issue here is not that starvation exists but that the enqueue time-out doesn't appear to work in my particular scenario.

 

For those who missed the original attachment, I've re-added it here, along with a test project that *doesn't* expose the issue, indicating either that there is something fundamentally wrong with my code or that the example is too simple to expose the problem. I suspect it might have something to do with obtaining a queue reference through a property via the DVR; that's occurring in Reserve network.vi. If I get time I might elaborate on the example to demonstrate this.

Message 18 of 28

OK, so I found what was wrong with my example - it was basically the same as this, but I didn't have a case around the dequeue, so both loops were probably unlocking the SEQ. Now that I modified it, it behaves similarly to the original (although not identically, for some reason).

 

In the original, every loop took too long and didn't return the timed-out output. Here it happens only in some of the loops and the timed-out output is T, so you should look at the original too (and modify it as I've shown above to see the issue), because there it happens every time.

 

Here's what's happening in the example:

 

  1. A single element queue is created.
  2. One of the loops enqueues to it, which should lock it. It waits and then dequeues.
  3. If the first loop waits too long, the enqueue on the other loop should time out after 1000 ms.

What happens in practice (LV 2011, in my case) is that for some of the runs the time indicator shows a value greater than 1000, which as far as I can tell should never ever happen, because the timeout should set an upper limit on how long the function takes to execute.
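For anyone who wants to see the intended behavior in text form, here is a rough Python analogue of the three steps above, with threads standing in for the two loops and `queue.Queue(maxsize=1)` standing in for the SEQ (the 2.5 s hold time is an arbitrary choice); in this analogue the enqueue timeout does bound the blocking time:

```python
import queue
import threading
import time

seq = queue.Queue(maxsize=1)          # stands in for the single-element queue

def loop_a():
    seq.put("lock")                   # enqueue to lock the SEQ
    time.sleep(2.5)                   # hold the lock longer than B's timeout
    seq.get()                         # dequeue to unlock

a = threading.Thread(target=loop_a)
a.start()
time.sleep(0.2)                       # make sure loop A has the lock first

start = time.monotonic()
try:
    seq.put("lock", timeout=1.0)      # loop B's enqueue: should time out
    timed_out = False
except queue.Full:
    timed_out = True
elapsed = time.monotonic() - start
a.join()
print(timed_out, round(elapsed, 1))
```

Loop B's enqueue gives up after about a second even though loop A holds the lock for 2.5 s, which is the upper-limit behavior the LabVIEW example was expected to show.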

 

See this to see the times:

 

http://www.screencast.com/t/i6oXs1o5

 


___________________
Try to take over the world!
Message 19 of 28

Looking at tst's latest posted VI, I have an explanation for its behavior. I'm pretty confident of this... though I haven't put it under a debugger to make it definitive. If you have evidence that contradicts this theory, please share it.

 

Remember that LabVIEW does cooperative multitasking between loops. We divide a block diagram into clumps. Each clump runs atomically and then provides an opportunity to switch to another clump. The Enqueue primitive always creates a clump division because of its asynchronous nature. So it is entirely possible that the very first call to Tick Count executes and then a thread swap occurs, allowing some other clump (like that third while loop, or a VI somewhere inside LV itself) some time to execute, and then the Enqueue happens. That interim adds time to the final calculation between the two Tick Counts. After the Enqueue executes, there is then *another* opportunity for the clumps to swap out, possibly adding more time before the second Tick Count is read. The reason you don't see this hit every iteration of the loop is that sometimes there isn't another clump ready to run, so the loop ends up running as a block.
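AQ's point about the Tick Counts generalizes to any language: the two timestamps bracket not just the blocking call but also any scheduling gaps around it, so the measured interval can only exceed the call's timeout, never undercut it. A Python sketch of the same effect (the 1.5 s busy thread is an arbitrary stand-in for "another clump ready to run"):

```python
import queue
import threading
import time

q = queue.Queue(maxsize=1)
q.put("x")                            # keep the queue full so put() must wait

def busy():
    # Competing runnable work the scheduler may interleave between our
    # timestamps and the blocking call.
    end = time.monotonic() + 1.5
    while time.monotonic() < end:
        pass

t = threading.Thread(target=busy)
t.start()

t0 = time.monotonic()                 # first "Tick Count"
try:
    q.put("y", timeout=1.0)           # the blocking call honors its timeout...
except queue.Full:
    pass
t1 = time.monotonic()                 # second "Tick Count"
t.join()

# ...but the measured interval also includes any scheduling gaps
# around the call, so it is at least the timeout and often more.
print(round(t1 - t0, 2))
```

The timeout bounds only the blocking call itself; what the two timestamps measure includes everything the scheduler slipped in around it.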

 

Now, the info I've presented above would result in some amount of skew. But exactly 965 milliseconds every time on both your machine and mine? That's odd. However, I note that if I change from "2533" as the delay on the bottom loop to be "2333", then that reproducible time changes to "1765" -- drop one by 200 ms, the other drops by 200ms. On the other hand, if you go up to 2733, the reproducible time changes to 1133. So there's some amount of interaction here, having to do with timing of thread swaps.

 

You also have to remember that you're on Windows. Windows thread swapping happens when it happens, and the OS is capricious. These kinds of time slips are more than reasonable. If it bothers you, you'll need a real-time operating system.

Message 20 of 28