Windows 2000 and later only!
What? You say it's not possible?
I too was of that opinion. It's commonly held that we are limited in LabVIEW to the mSec timer and the resolution it offers for benchmarking our VIs and creating delays. Recently, on a related thread, I was even involved in exploding a suggestion that LabVIEW could be taught this nifty trick. But I went back to school, because I hate saying "You can't do that with LabVIEW."
The attachment contains a project of VIs that use some kernel32.dll precision timer functions to access the precision OS timer that exists on modern processors. There are a few caveats:
These VIs use the basic query precision timer functions, so DO NOT use them in cases where you don't have a spare core to burn. There appears to be a method to create a waitable timer as well, but that is outside the scope of this post.
Also, this is not a replacement for a real-time OS! There are sources of error, and the OS can (and does) interrupt the process.
There are inherent flaws in the basic input/output system that contribute to jitter in calls through kernel32.dll.
Some coercion errors may be introduced due to the necessity of mixing U64s and DBLs in the math. (Hey, if anyone can solve that I'd take a lesson; I hate saying "You can't do that with LabVIEW.")
The Simple Approach:
Precision Timer Wait.vi:
Is a basic stand-alone delay with a 100 nSec-resolution "uSec to wait" input. Negative values are coerced; resolution is coerced to the next higher 100 nSec. The actual resolution of the delay depends on HARDWARE, i.e., how fast your precision timer is updated. This VI queries the PT frequency and the current count, calculates what the count will be in x uSec, and enters a greedy loop until the PT counter is equal to or greater than the target.
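For readers who want the technique outside LabVIEW, here is a minimal Python sketch of the same greedy-loop wait. This is an illustration only: the VI itself queries the kernel32 precision counter, whereas this sketch uses Python's cross-platform `time.perf_counter_ns` as the high-resolution counter, and the coerce-negative-to-zero behavior is my assumption about what "negative values are coerced" means.

```python
import time

def precision_wait(usec: float) -> float:
    """Busy-wait roughly `usec` microseconds against a high-resolution
    counter; returns the actual elapsed time in microseconds.
    Negative inputs are coerced to zero (assumed behavior)."""
    usec = max(usec, 0.0)
    start = time.perf_counter_ns()        # current counter value
    target = start + int(usec * 1000)     # target count (counter ticks in ns)
    now = start
    while now < target:                   # greedy loop: burns a whole core
        now = time.perf_counter_ns()
    return (now - start) / 1000.0

elapsed = precision_wait(50)  # ask for a 50 uSec delay
```

As with the VI, the loop exits at or after the target count, never before, and the actual delay still includes call overhead and whatever interruptions the OS inflicts.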
This VI DOES test whether the hardware supports a precision timer, and it has standard error-in functionality.
The more optimized approach:
The case structures in Precision Timer.vi require a bit of undesired overhead, so, for advanced users:
PT Init Freq.vi queries the timer frequency and preloads the global variable Counter Frequency.vi. (Globals are not evil, and this is one case where their blindingly fast speed is useful.)
PT Lightning Wait.vi reads the global instead of the actual timer parameter and functions similarly to Precision Timer Wait.vi, except it does not even waste the FLOPs to calculate how long we were in the loop, and it has no error case.
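The init-once optimization can be sketched the same way: cache the counts-per-microsecond conversion at startup (the analogue of PT Init Freq.vi preloading the Counter Frequency global), so the wait itself does no frequency query, no elapsed-time math, and has no error case. The constant here reflects that `perf_counter_ns` ticks are nanoseconds; on Windows the value would come from the counter frequency instead.

```python
import time

# One-time "init": cache the conversion factor rather than querying the
# timer frequency on every call (analogous to PT Init Freq.vi filling
# the Counter Frequency global).
_COUNTS_PER_USEC = 1000  # perf_counter_ns ticks are nanoseconds

def lightning_wait(usec: float) -> None:
    """Leanest possible wait: no frequency query, no error case, no
    post-loop bookkeeping, in the spirit of PT Lightning Wait.vi."""
    target = time.perf_counter_ns() + int(usec * _COUNTS_PER_USEC)
    while time.perf_counter_ns() < target:
        pass
```

The trade-off is the usual one with cached globals: if init never ran (or the frequency changed), the wait is silently wrong, which is exactly why the simple VI carries the error case this one omits.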
Benchmark.vi demonstrates the optimized approach and explores some of the precision timer's sources of error.
All VIs are fairly well documented, along with their execution settings (obviously the default settings were undesired).
For further reading on Precision Timers I recommend starting HERE and google your hearts out.
If anyone wants to play with a waitable timer object... (I'm curious but "time constrained".)
Additionally, for those of you with existing benchmark VIs: I would be fairly interested in a benchmark benchmark comparing the two timer methods.
Thanks Jeff. I think there are some problems with your attachment:
Thank you Christian!
(That's what you get for not previewing the source distribution.)
Let's try this again. I have excluded the dependencies, but you should have vi.llb, lvanlys.dll, and kernel32.dll.
Sourced in LabVIEW 2011
I think something might be machine dependent. Running the snippet returns about 8.41748 seconds. I wonder if it needs a calibration routine. Linux does some kind of BogoMIPS calculation at boot.
@Steve Chandler wrote:
I think something might be machine dependent. Running the snippet returns about 8.41748 seconds. I wonder if it needs a calibration routine.
Nice call, there is something machine dependent in there! What is the value of Count Freq on your setup? Over here and at home my PT is running at 2922392 cnts/sec, and your snippet runs in 10.26 to 10.305 seconds. Changing the uSec to wait from 1 to 0.9 has no effect, since the change is inside the resolution of the timer (1/2922392 = 0.342 uSec), so theoretically a 1 uSec wait is three counts, or about 1.0266 uSec + call overhead.
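The arithmetic above is easy to check numerically. A small sketch, using the 2,922,392 cnts/sec figure from this post and rounding the count up to whole ticks as described:

```python
import math

freq = 2_922_392                         # counts per second (figure from the post)
resolution_us = 1e6 / freq               # one count, expressed in microseconds
counts_for_1us = math.ceil(1e-6 * freq)  # a 1 uSec wait, coerced up to whole counts
actual_wait_us = counts_for_1us * resolution_us

print(f"{resolution_us:.4f}")   # one count is ~0.342 uSec
print(counts_for_1us)           # 1 uSec needs 3 counts
print(f"{actual_wait_us:.4f}")  # so the shortest >=1 uSec wait is ~1.0266 uSec
```

This is why shrinking the request from 1 to 0.9 uSec changes nothing: both land inside the same three-count window, before call overhead is even considered.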
Well, that's reasonable then: 2 counts is about 7.685e-7 seconds, and you left debugging on and ran with normal priority. Definitely a resolution difference.
Which brings me back to a point I glossed over in the original post about sources of error: ACTUAL resolution depends on the system precision timer and the priority the OS assigns to the timer process.
The VIs shown use a round-toward-zero to calculate the number of counts to wait. This duplicates the operation of the Wait (ms) function, which "waits up to the value specified in the milliseconds to wait input" for non-RT targets. Steve just demonstrated exactly how this can affect you when deploying an application. TIME-CRITICAL loops need a deterministic OS.
Jitter: another feature the OS brings us is how often PT Lightning Wait.vi will return late. Even in a "quiet" system where the FP isn't being interacted with, some calls will return late, either because the OS is too busy to wait for you (the DLL call requires the UI thread) or because the OS is too busy to update the precision timer. Statistically, these late returns do not occur often, but they can have a considerable magnitude, and there is nothing we can do about it. Again, this non-determinism is also seen with the Wait (ms) function.
Yet, if you only need the average loop period to land near "uSec to Wait", there are sub-millisecond timer methods available.
Somewhere, several years ago, I found a reference to some VIs that make use of the x86 RDTSC instruction. One key element was knowing the CPU speed of the host processor. I wrote the following routine that first tries to get the value from the Windows registry, and if that fails, counts the number of "ticks" that occur in one LabVIEW second (using the LabVIEW msec timer). For more accurate results, one could run this, say, 5 times and average ... [The blocks labelled RDTSC were adapted from the RDTSC routines I found here ...]
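The fallback calibration described above (count fast ticks over a window measured on a coarser reference clock, then divide) can be sketched like this. Here `time.perf_counter_ns` stands in for RDTSC and `time.monotonic` for the reference second; on real hardware you would read the TSC at both ends of the window instead. Since `perf_counter_ns` ticks are nanoseconds, the estimate should land near 1e9 ticks/sec.

```python
import time

def estimate_tick_frequency(window_s: float = 0.2) -> float:
    """Estimate a fast counter's tick rate by counting how many ticks
    elapse over a window measured on a separate reference clock."""
    ref_start = time.monotonic()
    tick_start = time.perf_counter_ns()
    while time.monotonic() - ref_start < window_s:
        pass
    ref_end = time.monotonic()
    tick_end = time.perf_counter_ns()
    return (tick_end - tick_start) / (ref_end - ref_start)

# Averaging several runs, as the post suggests, smooths out jitter:
freq_estimate = sum(estimate_tick_frequency() for _ in range(3)) / 3
```

The same caveats from earlier in the thread apply: the OS can preempt the loop mid-window, so individual estimates jitter, which is exactly why averaging (or reading the nominal speed from the registry first) helps.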
I guess it depends on what you want to do, but there's a subtle difference between accuracy and precision. Even using the WinAPI you'll be millisecond-precise but not accurate. That is, you get units of milliseconds, but no guarantee that you're reading the timer at the right time.
Plus, if you're trying to time, say, a visual display to the millisecond, you're likely to be out of luck. PCs, Macs, and Linux boxes will all have similar problems, as they use the same hardware. Even RTOS systems won't help you all that much as soon as you interact with a standard TFT or keyboard.
For some background on the issues take a look at: http://www.blackboxtoolkit.com and read about our Black Box ToolKit which helps users achieve millisecond timing accuracy in experimental work.