
How accurate is the wait function when using really long waits?


@crossrulz wrote:

You really can't trust the waits being exact in a Windows environment. What you should really be doing is using a While loop that iterates every 100 ms or so and using the system time to see how much time has passed. When the difference reaches your 30 minutes, stop the loop.


What crossrulz said.

The benefit of doing this is that you can put it in a parallel process and still carry on doing all the other things you might want to do...including interrupting your wait to stop the VI running.
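A minimal sketch of that pattern, written in Python only because a LabVIEW block diagram can't be shown as text: poll every ~100 ms, compare the current system time against a start timestamp, and allow the loop to stop early. The names timed_wait and stop_requested are hypothetical stand-ins (stop_requested would be a front-panel stop button or a notifier in a real VI).

    import time

    def timed_wait(duration_s, stop_requested=lambda: False):
        """Wait roughly duration_s seconds without trusting one long wait call."""
        start = time.time()                      # read the system time once at the start
        while time.time() - start < duration_s:  # exit on elapsed time, not iteration count
            if stop_requested():                 # the wait can be interrupted on any iteration
                return False                     # aborted before the full duration elapsed
            time.sleep(0.1)                      # short ~100 ms wait keeps CPU usage low
        return True                              # full duration elapsed

    # Example: a 30-minute dwell that can only overshoot by roughly one polling
    # interval plus scheduling jitter, instead of accumulating error.
    # done = timed_wait(30 * 60)

Because the exit condition is based on the clock rather than on counting iterations, a slow iteration only delays that one check; it does not shorten or stretch the total wait.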

---
CLA
Message 11 of 19

Thanks for the tips guys. I'll try using a time stamp comparison (if I understood your tip correctly) and see if that fixes the bug. The reason I didn't do that to begin with is that I figured there would be more overhead than with a simple wait function, which would ADD time to the loop...but in this case, if I'm over by a couple percent, it's not a big deal. Much better than being under by 50% every so often!

 

Can anyone explain why the wait(ms) function is unreliable in Windows?

Message 12 of 19

R.Gibson wrote:

Can anyone explain why the wait(ms) function is unreliable in Windows?


Simply put, Windows is not a deterministic (real-time) OS. Its scheduler can preempt or delay your code at any moment, so it cannot guarantee your timing to any real accuracy.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 13 of 19

You can also use the Tick Count function. I do this just in case something silly like daylight saving time happens during a wait. You do need to watch for the tick count rolling over if you use it, though.

Message 14 of 19

Just use the Elapsed Time Express VI.

 

It's one of the good, low-overhead Express VIs. If you dig into it, it's just doing what you are doing; you simply don't have to worry about testing it, because NI did that.


"Should be" isn't "Is" -Jay
Message 15 of 19

@BowenM wrote:

You can also use the Tick Count function. I do this just in case something silly like daylight saving time happens during a wait. You do need to watch for the tick count rolling over if you use it, though.


The rollover is no problem at all. The output is an unsigned integer, and unsigned integer mathematics is defined to do the right thing even if you subtract a bigger number from a smaller one (which is what happens after a rollover); the result of the subtraction is still correct. The only thing to watch out for is that you cannot measure intervals this way that are longer than the full range of the unsigned integer you are using. In the case of the Tick Count output (a U32) this is 2^32 ms, or about 1193 hours, or roughly 49 days.
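To make the wraparound arithmetic concrete, here is a small illustration (in Python, since the thread's diagrams are graphical): masking the subtraction to 32 bits, as unsigned U32 math does implicitly, gives the correct elapsed time even when the tick counter rolls over between the two readings. The function name elapsed_ms and the sample tick values are made up for the example.

    MASK = 0xFFFFFFFF  # constrain results to an unsigned 32-bit range, like a U32

    def elapsed_ms(start_tick, now_tick):
        # (now - start) modulo 2^32 is the true elapsed time, provided the
        # interval is shorter than 2^32 ms (about 49 days)
        return (now_tick - start_tick) & MASK

    # Example where the tick count rolled over between the two readings:
    start = 0xFFFFFF00             # 256 ms before the counter wraps
    now = 0x00000100               # 256 ms after the counter wraps
    print(elapsed_ms(start, now))  # prints 512 -- correct despite the rollover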

 

The only problem with the timer tick is that it is derived from one of the crystal oscillators on the mainboard and can therefore drift out of sync with the real-time clock, which nowadays is normally synchronized to an internet time service with atomic-clock accuracy. So over longer periods there is a small deviation between the timer tick and the real-time clock in the computer. This is, however, typically a few hundred ppm at most; even at 300 ppm, a 30-minute wait would be off by only about half a second.

 

And yes, all the wait functions use the timer tick to calculate their wait time, not the real-time clock value, which has somewhat more overhead to read. That overhead is not usually important, but in tight loops trying to time at the millisecond level it can add up.

Rolf Kalbermatter
My Blog
Message 16 of 19

Windows keeps civil time measured in seconds elapsed since the epoch, ignoring leap seconds. The Veep's house (the US Naval Observatory) offers a service to provide your PC with the correct civil time; see NIST-Time32.exe. The USNO is an interesting Google search.


"Should be" isn't "Is" -Jay
Message 17 of 19

@JÞB wrote:

Just use the Elapsed Time Express VI.

 

It's one of the good, low-overhead Express VIs. If you dig into it, it's just doing what you are doing; you simply don't have to worry about testing it, because NI did that.


I don't know why anyone would use a Wait (ms) primitive for delays of more than a minute or two. Timer VIs are a lot more convenient and less prone to goofy stuff happening when the time zone changes or there's a rollover.

PaulG.

LabVIEW versions 5.0 - 2020

“All programmers are optimists”
― Frederick P. Brooks Jr.
Message 18 of 19

So I implemented a timestamp approach rather than a wait approach, which I agree is a more robust and accurate methodology for long periods. This approach possibly introduces a little more overhead, but it does not accumulate error on each iteration the way my previous Wait (ms)-based code did. Here is what I ultimately ended up using:

 

[Attached block diagram: Capture1.PNG]

 

However, I realized that this wasn't the main bug I was experiencing... there was a tiny little Boolean in an error-handling sub-VI that was set incorrectly, breaking out of the configure-equipment state machine rather than continuing to monitor the device until it had stabilized and soaked. This was in turn causing the dwell timer to start prematurely... hence the randomly short dwell times every time there was a communication error with the device.

 

...now I just need to figure out why there are so many communication errors... 😉

 

Thanks for your help guys! 🙂

Message 19 of 19