LabVIEW

Understanding timestamps and sub-ms timing in Windows


@JÞB wrote:


You may want to look here

 

There are caveats 


Those API calls are pretty much what the High Precision Timer VIs are calling, and they do query and use the timer's tick interval to get a pretty accurate timer count.

 

But since the OP explains that his C program works exactly as he intends when he uses GetSystemTimePreciseAsFileTime(), I'm wondering what stops him from calling that exact function through a Call Library Node in LabVIEW too. The function is as simple as it can get, since it has only one parameter, passed by reference. Its FILETIME parameter is defined to consist of two 32-bit unsigned integers, low-order part first. That "incidentally" happens to map exactly to a single 64-bit unsigned integer on little-endian Intel x86/x64 silicon, so you can simply declare that parameter in the Call Library Node as Numeric, Unsigned 64-bit Integer, passed as Pointer to Value.
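
Just to illustrate, a minimal C sketch of what that amounts to (the pointer cast is simply what the "Unsigned 64-bit Integer, Pointer to Value" configuration does implicitly on little-endian hardware):

#include <windows.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* GetSystemTimePreciseAsFileTime() expects a FILETIME*, i.e. two 32-bit
       values (dwLowDateTime, dwHighDateTime). On little-endian x86/x64 the
       address of a 64-bit unsigned integer can stand in for it directly. */
    uint64_t ticks = 0;     /* 100 ns units since January 1, 1601 UTC */
    GetSystemTimePreciseAsFileTime((FILETIME *)&ticks);
    printf("ticks since 1601-01-01 UTC: %llu\n", (unsigned long long)ticks);
    return 0;
}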

 

If the resulting value is then divided by 10^7 to scale the 100 ns per tick to 1 s per tick, you get a floating point value that indicates the number of seconds since January 1, 1601 UTC. Subtract a meager 0x0153b281e0fb4000 from the original uInt64 value before scaling it and you get a seconds value relative to LabVIEW's epoch of January 1, 1904 UTC. If you'd rather have the Unix epoch of January 1, 1970 UTC, you can subtract another 2082844800 seconds from the LabVIEW epoch value.
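
In C, that arithmetic looks like this (same constants as above, just as a sketch):

#include <windows.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t ticks = 0;                       /* 100 ns since 1601-01-01 UTC */
    GetSystemTimePreciseAsFileTime((FILETIME *)&ticks);

    /* 0x0153B281E0FB4000 ticks = 9,561,628,800 s: offset from the Windows
       epoch (1601) to the LabVIEW epoch (1904). 2,082,844,800 s is the
       offset from the LabVIEW epoch to the Unix epoch (1970). */
    double since1601 = (double)ticks / 1.0e7;
    double since1904 = (double)(ticks - 0x0153B281E0FB4000ULL) / 1.0e7;
    double since1970 = since1904 - 2082844800.0;

    printf("seconds since 1601 (Windows): %.7f\n", since1601);
    printf("seconds since 1904 (LabVIEW): %.7f\n", since1904);
    printf("seconds since 1970 (Unix):    %.7f\n", since1970);
    return 0;
}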

 

But if the purpose is to actually compare very precise time differences between two arbitrary points, the High Precision Timer VIs are the way to go. The Windows API calls are optimized to go with whatever is the highest precision timing source, usually special CPU registers on modern CPUs, and to query them in a very performant way. The absolute time value (real-time) might have to go through extra layers and could potentially be blocked by the update routine that reads the actual (and rather slow) real-time clock in the chipset.
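
Assuming the usual QueryPerformanceCounter()/QueryPerformanceFrequency() pair (which is pretty much what those VIs call, as mentioned above), a relative measurement is a simple affair; a sketch in C:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   /* counts per second, fixed at boot */

    QueryPerformanceCounter(&start);
    Sleep(10);                          /* whatever is being timed */
    QueryPerformanceCounter(&stop);

    double elapsed = (double)(stop.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("elapsed: %.6f s\n", elapsed);
    return 0;
}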

 

In fact, GetSystemTimePreciseAsFileTime() is not simply the real-time clock but a combination of a clock value that is regularly compared to the hardware real-time clock, intermediate updates that combine this value with the high precision timer values, and an additional correction through successive adaptation to an external real-time source such as an internet time service. That is quite a bit of extra overhead, which makes this value not as accurate and precise as the high precision timer value itself for relative time measurements.

 

If the intent is to do relative timing measurements between arbitrary points, however, the High Precision Timer functions are the way to go. The precise file time value is a complex calculation that combines the value from the actual hardware real-time clock, intermediate updates generated from the high precision timer source, and successive adaptation to an external timing source such as an internet time server (if enabled). The first is very slow to read (by modern CPU standards), the second adds overhead to combine it consistently with the first, and the third produces a time value that can actually go backwards if the current value is considered too far out of sync with the external internet timing source. Even if it doesn't go backwards, it can slow down and speed up as the algorithm successively adapts its value to the internet timing source.
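
To make that tangible, here is a purely illustrative sketch that times the same interval with both sources; the two deltas will usually be close, but only the performance counter delta is guaranteed to be monotonic:

#include <windows.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, qpcStart, qpcStop;
    uint64_t ftStart, ftStop;

    QueryPerformanceFrequency(&freq);
    GetSystemTimePreciseAsFileTime((FILETIME *)&ftStart);
    QueryPerformanceCounter(&qpcStart);

    Sleep(100);                                   /* work being timed */

    QueryPerformanceCounter(&qpcStop);
    GetSystemTimePreciseAsFileTime((FILETIME *)&ftStop);

    /* The precise file time delta can deviate slightly, because that clock
       may be slewed towards an external time source while the measurement runs. */
    double qpcDelta = (double)(qpcStop.QuadPart - qpcStart.QuadPart) / (double)freq.QuadPart;
    double ftDelta  = (double)(ftStop - ftStart) / 1.0e7;
    printf("QPC delta:          %.7f s\n", qpcDelta);
    printf("precise time delta: %.7f s\n", ftDelta);
    return 0;
}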

Rolf Kalbermatter
My Blog