
Understanding timestamps and sub-ms timing in Windows

Solved!

LabVIEW 2019

Windows 10


Context:

The end goal is to obtain nominal sub-ms timing for sending a byte on a serial line, in order to synchronise the clock of an external hardware item. The test system will need to measure time differences, with single-digit-ms limits, between events logged internally on the hardware and messages received on the PC over a different serial line and logged against Windows time. The hardware clock drifts over time, so this sync will be repeated regularly; a more accurate initial sync reduces how often re-syncing is needed and also improves the accuracy of the tests.


Issue at hand:

I am trying to find out whether the high-resolution timing functions can be used to create a sub-ms "timestamp". If we know the value of the high-resolution counter at the instant the millisecond value ticks over, we can combine that with a wait to execute some code (e.g. send a serial byte) at a time with sub-ms accuracy. The snippet below is an experiment to learn more about the LabVIEW time functions.
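
(Expressed outside LabVIEW, the idea is roughly the following C sketch, a minimal illustration only, using GetTickCount64 and QueryPerformanceCounter as stand-ins for the LabVIEW millisecond and high-resolution timing primitives; neither call appears in the snippet itself.)

/* Minimal sketch: capture the high-resolution counter at the instant the
 * millisecond clock ticks over. GetTickCount64/QueryPerformanceCounter are
 * stand-ins for the LabVIEW timing functions used in the snippet. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, edge;
    QueryPerformanceFrequency(&freq);

    ULONGLONG ms = GetTickCount64();
    while (GetTickCount64() == ms)   /* spin until the millisecond value changes */
        ;
    QueryPerformanceCounter(&edge);  /* high-resolution counter value at the ms edge */

    printf("ms edge at counter value %lld (counter frequency %.1f MHz)\n",
           edge.QuadPart, freq.QuadPart / 1e6);
    return 0;
}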

 

[Image: Snippet.png]

Running as an exe:

[Image: Ian_SH_1-1673951964899.png]

 

Mostly the detected changes line up with the millisecond value ticking over, but extra changes are detected as well. Why? 1000 changes should have taken 1000 ms, yet the loop finished in 770-850 ms over several runs (so it is not particularly consistent either).

 

The millisecond changes appear to round to the nearest 1/10 ms, which is probably good enough for my purpose, but I don't understand what else is going on. Perhaps it has to do with the timestamp being a DBL under the hood?

 

Is there a better way to do this? We have already achieved what we want in C, using GetSystemTimePreciseAsFileTime, which worked well enough, but I want to move all the serial comms to LabVIEW; otherwise we would have to keep switching control of the serial port back and forth. I have looked at things like QueryPerformanceCounter, but I believe that is what the LabVIEW functions are built on anyway.
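
(For reference, a minimal sketch of the C approach mentioned above, assuming only that GetSystemTimePreciseAsFileTime is available, i.e. Windows 8 or later:)

/* Sketch of the C approach referred to above: GetSystemTimePreciseAsFileTime
 * returns the system time in 100 ns units, already at sub-ms resolution. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER t;

    GetSystemTimePreciseAsFileTime(&ft);  /* 100 ns ticks since 1601-01-01 UTC */
    t.LowPart  = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    printf("precise system time: %llu x 100 ns since the FILETIME epoch\n",
           (unsigned long long)t.QuadPart);
    return 0;
}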

 

Please note that an RTOS is not an option, and the external hardware's "time" can only be set via serial. I am also well aware of the issues of running without an RTOS alongside other processes; I intend to use high-priority threads etc. to get the best we can on Windows. The fact that the C code was good enough means what we need is theoretically possible.
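
(As an illustration of the "best we can on Windows" measures, a minimal sketch using the usual Win32 calls: timeBeginPeriod, SetPriorityClass and SetThreadPriority. None of these appear in the snippet above; they are just the standard knobs on a non-RTOS system.)

/* Sketch of typical "best effort on Windows" measures: request 1 ms timer
 * granularity and raise the process/thread priority. Pair every
 * timeBeginPeriod with a timeEndPeriod. Link against winmm.lib. */
#include <windows.h>
#pragma comment(lib, "winmm.lib")

void enter_timing_critical(void)
{
    timeBeginPeriod(1);                                /* 1 ms multimedia timer period */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}

void leave_timing_critical(void)
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
    SetPriorityClass(GetCurrentProcess(), NORMAL_PRIORITY_CLASS);
    timeEndPeriod(1);                                  /* undo the granularity request */
}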

 

Thanks

 

Message 1 of 11

Hi Ian,

 

simplify your VI:

Or use the HighResolutionPollingWait…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 11

Hi Gerd,

 

Thanks for the reply.

 

Without the Get Date/Time in Seconds function in the VI, however, we have side-stepped the question of why the ms-resolution timestamp appears to change more than once per ms, which I am still interested in answering if anyone has insight. Also, we still can't read "Windows time" more accurately than 1 ms with this code, and I still need to set the time on the hardware to match it if possible.

 

I could, I suppose, just use the high-resolution timing functions for all Windows timing purposes. The counter seems to keep running for as long as the execution system remains open. That might mean I don't need the Windows time at all: I would just take a start time once and calculate offsets from it.
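
(A minimal C sketch of that "start time + offset" idea, assuming GetSystemTimePreciseAsFileTime for the one-off wall-clock anchor and QueryPerformanceCounter for the offsets; the LabVIEW version would use the equivalent built-in time functions.)

/* Sketch: take one wall-clock anchor, then derive later timestamps purely
 * from the high-resolution counter offset. Result is in 100 ns units
 * since the FILETIME epoch (1601-01-01 UTC). */
#include <windows.h>

static LARGE_INTEGER g_freq, g_qpc0;
static ULONGLONG     g_anchor;            /* wall clock at the anchor, 100 ns units */

void anchor_clock(void)
{
    FILETIME ft;
    ULARGE_INTEGER u;
    QueryPerformanceFrequency(&g_freq);
    GetSystemTimePreciseAsFileTime(&ft);  /* one-off wall-clock reading */
    QueryPerformanceCounter(&g_qpc0);     /* counter value at (nearly) the same instant */
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    g_anchor   = u.QuadPart;
}

ULONGLONG now_100ns(void)                 /* current time = anchor + counter offset */
{
    LARGE_INTEGER qpc;
    QueryPerformanceCounter(&qpc);
    ULONGLONG ticks = (ULONGLONG)(qpc.QuadPart - g_qpc0.QuadPart);
    ULONGLONG f     = (ULONGLONG)g_freq.QuadPart;
    return g_anchor + (ticks / f) * 10000000ULL        /* whole seconds     */
                    + (ticks % f) * 10000000ULL / f;   /* fractional 100 ns */
}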

 

Thanks

Message 3 of 11
Solution
Accepted by Ian_SH

Basically, your assumption that the timestamp has millisecond resolution is unfounded. A LabVIEW timestamp is in fact a 128-bit fixed-point value with a 64-bit integer part and a 64-bit fractional part, giving it a theoretical resolution of 1/2^64 seconds. In practice LabVIEW only uses the upper 32 bits of the fractional part, which is still a resolution of 1/2^32 seconds, roughly 1/4,000,000,000 s. The underlying OS functions are somewhat coarser, though: under Windows they offer "only" 100 ns resolution.
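
As a quick sanity check on those numbers:

% upper 32 fractional bits of a LabVIEW timestamp vs. the 100 ns Windows FILETIME tick
2^{-32}\ \mathrm{s} \approx 2.33 \times 10^{-10}\ \mathrm{s} \approx 0.23\ \mathrm{ns},
\qquad
\frac{100\ \mathrm{ns}}{2^{-32}\ \mathrm{s}} \approx 430\ \text{timestamp steps per 100 ns FILETIME tick.}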

 

So, all in all, your assumption that a LabVIEW timestamp has a resolution of 1 ms is simply wrong and not based on any real facts.

 

And I'm sure you don't expect the timestamp to actually increase in 100 ns steps. Calling an OS function to get the current system time takes considerably longer than that, and on top of it there is the calculation needed to translate the result into a LabVIEW timestamp, the scheduling of the diagram code, and the other calls to the high-resolution timer functions as well!

 

Rolf Kalbermatter
My Blog
Message 4 of 11

@Ian_SH wrote:

LabVIEW 2019

Windows 10


Context:

The end goal is to obtain nominal sub-ms timing for sending a byte on a serial line, in order to synchronise the clock of an external hardware item. The test system will need to measure time differences, with single-digit-ms limits, between events logged internally on the hardware and messages received on the PC over a different serial line and logged against Windows time. The hardware clock drifts over time, so this sync will be repeated regularly; a more accurate initial sync reduces how often re-syncing is needed and also improves the accuracy of the tests.

 


Why is the device clock drifting? I want you to go and look for any hardware reason that the clock is not keeping time correctly! Usually that means environmental temperature and cooling have been overlooked in the system.

 

SHOUTING! You can't fix the system engineer with software! Because your clocks are not working right (in sync), you must find out which one is wrong. Ben Franklin said, "A man with one clock always knows what time it is. A man with two is never sure."


"Should be" isn't "Is" -Jay
Message 5 of 11

A few things in no particular order:

 

1. While the timestamp as a datatype does indeed support the ability to express smaller than 1 msec *resolution*, the OP's original snippet demonstrates that those timestamps tend to get updated only about once per msec under Windows.  There are several orders of magnitude more loop iterations than timestamp changes.

 

2. My test runs (LV 2020 64-bit) showed a slightly different behavior than the OP's.  1000 changes consistently took almost exactly 1000 msec.  And the places where I noticed ~0.5 msec changes always followed after a change of ~1.5 msec.

 

3. High-Res relative seconds seems to change values at 100 nanosecond intervals.  I would often see 1000 changes occur in 0 msec according to Tick Count (meaning only that some unknown amount of time < 1 msec elapsed).

 

4. Much of this fuss about fractional msec timing may be interesting but is going to be rather a moot point when applied to syncing two systems via serial communication.  The chain of actions that needs to happen on both sides of the serial link after you initiate a call to VISA Write is *not* going to be controllable or repeatable to within a fraction of a msec.  The communication interface hardware, drivers, and target-side code will have more influence over the sync you can achieve than your time queries.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 6 of 11
Solution
Accepted by Ian_SH

@Kevin_Price wrote:

A few things in no particular order:

 

1. While the timestamp as a datatype does indeed support the ability to express smaller than 1 msec *resolution*, the OP's original snippet demonstrates that those timestamps tend to get updated only about once per msec under Windows.  There are several orders of magnitude more loop iterations than timestamp changes.


I would say that the highlighted part is not really in contradiction to what I said. Get Date/Time in Seconds simply calls the Windows API GetSystemTimeAsFileTime() and scales the returned value to a LabVIEW timestamp. That value has a resolution of 100 ns, but Windows used to update the underlying value only about every 10 ms.

 

Quote from MSDN:

"System time is typically updated approximately every ten milliseconds"

 

However, in recent Windows versions this has been improved to "typically" 1 ms.

 

Note the word "typically"! Microsoft does not guarantee anything; it just says that this is what typically happens.
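
(A minimal sketch that makes the difference visible: count how many distinct values each call actually returns over a fixed number of reads. GetSystemTimePreciseAsFileTime needs Windows 8 or later.)

/* Sketch: count how many distinct values GetSystemTimeAsFileTime (coarse)
 * and GetSystemTimePreciseAsFileTime (100 ns based) return over many reads. */
#include <windows.h>
#include <stdio.h>

static ULONGLONG to_u64(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    return u.QuadPart;
}

int main(void)
{
    enum { READS = 1000000 };
    FILETIME ft;
    ULONGLONG prev;
    int coarse = 0, precise = 0;

    GetSystemTimeAsFileTime(&ft);
    prev = to_u64(ft);
    for (int i = 0; i < READS; i++) {                 /* coarse system time */
        GetSystemTimeAsFileTime(&ft);
        if (to_u64(ft) != prev) { coarse++; prev = to_u64(ft); }
    }

    GetSystemTimePreciseAsFileTime(&ft);
    prev = to_u64(ft);
    for (int i = 0; i < READS; i++) {                 /* precise system time */
        GetSystemTimePreciseAsFileTime(&ft);
        if (to_u64(ft) != prev) { precise++; prev = to_u64(ft); }
    }

    printf("coarse time changed %d times, precise time changed %d times "
           "over %d reads each\n", coarse, precise, READS);
    return 0;
}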

 

Rolf Kalbermatter
My Blog
Message 7 of 11

Hi Rolf,

 

Thanks for your insights into the inner workings of the function; I think this gives me the information I needed to understand what is going on. I wonder whether I missed some documentation that states which Windows API call is used, or whether this is knowledge obtained elsewhere.

 

Thanks to all others for your useful contributions too.

 

Ian

Message 8 of 11

@Ian_SH wrote:

Hi Rolf,

 

Thanks for your insights into the inner workings of the function; I think this gives me the information I needed to understand what is going on. I wonder whether I missed some documentation that states which Windows API call is used, or whether this is knowledge obtained elsewhere.


You didn't miss anything here. This is NOT documented, for a few simple reasons:

 

- It is different for every platform LabVIEW runs on.

- The exact function that is called has changed several times over the last 30 years, as Windows has provided newer and better functions. The behaviour behind some of those functions has itself changed depending on the Windows version and on other factors, such as whether the multimedia timers are enabled for the process (which LabVIEW does enable, though even that can have different effects on different Windows versions). The hardware also has an influence, depending on what CPU is in the system.

Rolf Kalbermatter
My Blog
Message 9 of 11

@Ian_SH wrote:

[…]

Issue at hand:

I am trying to find out whether the high-resolution timing functions can be used to create a sub-ms "timestamp". If we know the value of the high-resolution counter at the instant the millisecond value ticks over, we can combine that with a wait to execute some code (e.g. send a serial byte) at a time with sub-ms accuracy.

[…]


You may want to look here

 

There are caveats.


"Should be" isn't "Is" -Jay
Message 10 of 11