LabVIEW


How long does the dequeue wait time need to be? Missing data if too short

I have a data acquisition loop running at 1000 samples/second.  I am using a queue to pass a single command into the loop.  Unless I set the dequeue wait time to about 30 ms, the message just disappears.  I know it is being put in the queue in the message loop, or at least I know it is being presented to the Enqueue Element function.

 

I want the dequeue wait time to be as short as possible, so as not to slow down my acquisition.  If it is set long enough to catch the message, my DAQ card's buffer overflows.

 

Where is my message going?  If it's on the queue, why does the dequeue need to wait a long time to get it?  Can't it just pick it up the next time around?

 

Thanks

 

Message 1 of 9

I suggest attaching a VI so we can SEE what you are trying to describe in words.  What is the purpose of the message you are sending?

 

Note that you can set the timeout value on the Dequeue Element function to 0 if you want.  Just monitor the Timed Out? Boolean to see whether it timed out or actually received an element from the queue.  With a value of 0, the Dequeue Element function will not "wait" at all.
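
Since I can't paste a block diagram here, here's a rough text equivalent of that pattern (Python's queue module standing in for the LabVIEW queue primitives; all the names below are made up):

import queue

cmd_queue = queue.Queue()                  # the command queue into the loop

def handle_command(command):
    print("handling", command)             # placeholder for real command handling

def read_daq_samples():
    pass                                   # placeholder for the 1 kHz DAQ read

def acquisition_iteration():
    try:
        command = cmd_queue.get_nowait()   # Dequeue Element with timeout = 0
    except queue.Empty:
        command = None                     # the equivalent of Timed Out? = True
    if command is not None:
        handle_command(command)
    read_daq_samples()                     # the loop rate is never held up

The acquisition loop checks the queue on every iteration but never stalls; a command posted by the other loop gets picked up on the very next pass.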

 

 

Message 2 of 9

I second the motion.  We need the code.  I could speculate about several things, but the code would most likely quickly eliminate most of them as suspects.

 

FWIW, I've had schemes with calculated timeouts that were allowed to go as low as 0 msec.  Short timeouts didn't cause a loss of queue elements.  That isn't a limitation built into the queue primitives.  There's gonna be some other explanation.

 

 

-Kevin P

Message 3 of 9

I don't even know why your DAQ buffer overflows.  A queue is designed specifically to gather info as fast as possible so this doesn't happen.  Dequeuing has absolutely nothing to do with why your DAQ buffer overflows.  (On the other hand, an improperly managed queue could eat up all available memory on your PC if you let it.)

 

edit: nvm, I misunderstood the post.

Bill

Message 4 of 9

Guys, I thank you for your interest, but I changed direction.  The message was a request for a small bit of data being produced in the main loop.  I got fed up and did what I should have done, which is an LV2-style global.  Works like a champ.  KISS.  

 

I don't want to copy the code from my stand-alone machine, but FWIW, the request was sent via TCP.  The TCP listener was in a separate message-handling loop.  The queue was used to send the request into the main loop and the data back out to the message handler, and from there back over the TCP connection.  The request would make it out of the dequeue only if the timeout was set to more than 100 ms.  Then the whole Rube Goldberg system worked, except that my main loop was slowed by the long timeout.
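
Roughly, in Python-ish pseudo-code (since I'm not posting the VI, the TCP parts are omitted and every name below is made up):

import queue
import threading

stop = threading.Event()    # shared shutdown flag
request_q = queue.Queue()   # message-handling loop -> main loop
reply_q = queue.Queue()     # main loop -> message-handling loop

def acquire_sample():
    return 0.0              # placeholder for the 1000 S/s DAQ read

def main_loop():
    latest = None
    while not stop.is_set():
        latest = acquire_sample()
        try:
            request_q.get_nowait()   # poll for a request without blocking
            reply_q.put(latest)      # answer with the most recent value
        except queue.Empty:
            pass                     # no request this time around

def message_loop():
    while not stop.is_set():
        # ...wait for a request to arrive over TCP (omitted)...
        request_q.put("get latest")
        value = reply_q.get(timeout=1.0)   # block here, not in the main loop
        # ...send value back over the TCP connection (omitted)...

In principle, the zero-timeout poll on request_q should have kept the main loop at full speed; that's the part that wasn't behaving for me.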

 

I know you want to see the code, but I'm moving on with my life.  Thanks again.

Message 5 of 9

Sorry, I meant 30 ms, as in the original post.  But whatever.

Message 6 of 9

@AndyTailored wrote:

I got fed up and did what I should have done, which is an LV2-style global.  Works like a champ.  KISS.


If it is a simple Get/Set, you might as well just use an actual Global Variable.  See A Look At Race Conditions.
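
The gap that article is about is the one between the Get and the Set.  In rough Python terms (made-up names, a lock standing in for the FGV's non-reentrancy):

import threading

counter = 0                      # a bare global: nothing protects Get-then-Set

def unsafe_increment():
    global counter
    value = counter              # Get
    value = value + 1            # modify
    counter = value              # Set -- another loop may have written in between

class ActionEngine:
    # An LV2-style global grown into an action engine: the whole
    # read-modify-write happens inside one protected call.
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:         # one atomic "action"
            self._value += 1
            return self._value

A plain Get/Set FGV leaves that gap open just like the global does; the FGV only earns its keep once the operation itself moves inside it.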


Message 7 of 9

Do I really want to chime in on this????

 


@AndyTailored wrote:

Sorry, I meant 30ms as in the original post.  But whatever.  


Yes, I think that I do.  For everyone else... this is a good BAD example of a post!  The OP gave up on some code and won't share the information that would help you!


"Should be" isn't "Is" -Jay
0 Kudos
Message 8 of 9
(1,215 Views)

@AndyTailored wrote:

Guys, I thank you for your interest, but I changed direction.  The message was a request for a small bit of data being produced in the main loop.  I got fed up and did what I should have done, which is an LV2-style global.  Works like a champ.  KISS.  

 

I don't want to copy the code from my stand-alone machine, but FWIW, the request was sent via TCP.  The TCP listener was in a separate message-handling loop.  The queue was used to send the request into the main loop and the data back out to the message handler, and from there back over the TCP connection.  The request would make it out of the dequeue only if the timeout was set to more than 100 ms.  Then the whole Rube Goldberg system worked, except that my main loop was slowed by the long timeout.

 

I know you want to see the code, but I'm moving on with my life.  Thanks again.


I think even an FGV is Rube-ish.  Do you know why it's called an "LV2-style global"?  Because in LabVIEW 2, you didn't have real globals.  You may as well just use a "regular" global and dispense with all that overhead.

Bill

Message 9 of 9