04-10-2018 09:59 AM
Hi everybody,
I'm working on a test bench with a Meinberg PTP270PEX PCI card.
This card can discipline the Windows system time through a Windows service that works transparently.
Since Windows 8, a new time-management API has been available that makes it possible to get timestamps with true µs resolution. So it's possible to get high-resolution time under Windows using the "GetSystemTimePreciseAsFileTime" function.
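For anyone curious, here is a rough sketch of what calling that API looks like from a script language (Python with ctypes, not LabVIEW). The function name and the FILETIME epoch offset are from the Windows documentation; the fallback to `time.time_ns()` on non-Windows systems is just my own addition so the snippet runs anywhere:

```python
import ctypes
import sys
import time

# Offset between the Windows FILETIME epoch (1601-01-01)
# and the Unix epoch (1970-01-01), in 100 ns units.
_FILETIME_TO_UNIX_100NS = 116444736000000000

def precise_time_ns():
    """Wall-clock time in nanoseconds since the Unix epoch.

    On Windows 8+ this uses GetSystemTimePreciseAsFileTime();
    elsewhere it falls back to time.time_ns() (my assumption,
    just to keep the sketch portable).
    """
    if sys.platform == "win32":
        # FILETIME is 8 bytes; a c_uint64 has the same layout.
        filetime = ctypes.c_uint64()
        ctypes.windll.kernel32.GetSystemTimePreciseAsFileTime(
            ctypes.byref(filetime))
        # FILETIME counts 100 ns intervals since 1601-01-01.
        return (filetime.value - _FILETIME_TO_UNIX_100NS) * 100
    return time.time_ns()

print(precise_time_ns())
```

Note the value has µs-level *resolution*; whether it has µs-level *accuracy* is a separate question, as discussed below.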
My question is: what exactly does the standard LabVIEW "Get Date/Time In Seconds" do under the hood?
A GetSystemTimePreciseAsFileTime call, or a standard GetSystemTime call?
I don't want to make a DLL call from LabVIEW because the calling overhead of the DLL would introduce a time offset in the timestamp.
Best regards,
04-10-2018 05:35 PM
Why would it be any faster if LabVIEW called GetSystemTimePreciseAsFileTime than if you did? Also, unless you are on a real-time OS, you aren't going to get sub-microsecond accuracy anyway.
04-10-2018 06:56 PM
According to MSDN, GetSystemTimePreciseAsFileTime() was introduced in Windows 8. Therefore, as long as LabVIEW supports Windows 7, it can't really use that function anyhow.
Aside from that, everything majoris says is true too. It doesn't matter whether you call that API or LabVIEW does; the time for the call is about the same. And Windows, as a non-RT system, CAN certainly take much more than a few microseconds to make this API call, no matter how you do it.
04-11-2018 11:32 AM
04-11-2018 11:33 AM
Because of the time overhead of a DLL call from a diagram.
04-11-2018 11:35 AM - edited 04-11-2018 11:37 AM
Disable debugging in the Call Library Node after you have made sure everything works perfectly, and the overhead compared to LabVIEW calling this function internally is totally negligible! The inaccuracy of Windows scheduling, in terms of getting your code executed in a particular thread and in between other tasks (processes) on your computer, is orders of magnitude bigger anyhow!
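You can see this scheduling inaccuracy for yourself with a quick experiment (a sketch in Python rather than LabVIEW; the function name and trial counts are mine): ask the OS for a short sleep and measure how late it actually wakes you up.

```python
import time

def sleep_overshoot_us(request_s=0.001, trials=50):
    """Worst-case oversleep (in µs) over several short sleeps:
    actual elapsed time minus the requested duration."""
    worst = 0.0
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(request_s)
        late_us = (time.perf_counter() - t0 - request_s) * 1e6
        worst = max(worst, late_us)
    return worst

# On a desktop OS the overshoot is typically tens to thousands
# of microseconds -- far more than one DLL call costs.
print(f"worst oversleep: {sleep_overshoot_us():.1f} us")
```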
04-11-2018 11:40 AM
I know... So why is there a microsecond timer in the palette?
04-11-2018 11:44 AM - edited 04-11-2018 11:45 AM
The manufacturer of the 1588 PCI card explained to me that what you say was true before Win8. Since then, major improvements have been made to time management in Windows. To be confirmed!
04-11-2018 12:33 PM
The timer function's precision isn't a lie, just the accuracy (i.e. the repeatability). And I think the same will be true for the precise time in the Windows API. My general reading seems to indicate that the PreciseFileTime function uses QueryPerformanceCounter (QPC) for its precision, and the QPC API has been around since XP. How QPC is actually calculated depends on the hardware. Very likely, calling PreciseFileTime will cost you a syscall, which will cost you a context switch, which is slow when we are talking about tens of nanoseconds. But maybe the driver for your PCI card is magic and prevents all that somehow (i.e. perhaps writing the time to a user-mode memory-mapped location).
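A crude way to feel that per-call cost is to time a tight loop of timestamp reads and average (a sketch; from Python the interpreter overhead dominates, so treat the number as an upper bound on what a native GetSystemTimePreciseAsFileTime call would cost):

```python
import time

def per_call_cost_ns(n=100_000):
    """Average cost of one wall-clock timestamp read, in ns,
    measured over n back-to-back calls. Includes interpreter
    overhead, so a native API call would be cheaper."""
    t0 = time.perf_counter_ns()
    for _ in range(n):
        time.time_ns()
    return (time.perf_counter_ns() - t0) / n

print(f"~{per_call_cost_ns():.0f} ns per timestamp read")
```

Even at face value, the per-call cost is well under the scheduling jitter discussed earlier in this thread.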
04-11-2018 12:57 PM
I agree... but reading this has brought me to see things in a different way:
https://www.greyware.com/software/domaintime/v5/overview/w32time.asp