LabVIEW


How does Tick Count (ms) Function work?

Solved!

I would like to know how the Tick Count (ms) function works in a PC and in a Real-Time environment.

 

On a Windows machine, is this an OS API call? Does Windows provide LabVIEW with the tick count value?

Can error be introduced on multi-core machines where the cores can run at different speeds (dynamic power saving)?

 

What about different versions of LabVIEW? Does the under-the-hood behaviour change with different flavours of LabVIEW? What about behaviour between Windows XP and Windows 7?

 

The reason I'm asking is that I came across something odd when using the Tick Count function.

I was doing some logging of streaming data and was using the Tick Count function to produce a reference stamp. (How I came across this is irrelevant.)

Using LabVIEW 2011 I stream data to DataSocket. 

Using LabVIEW 8.6.1 I read the data from DataSocket.

I was expecting the log files to show that the data packets, referenced against the tick count recorded in LabVIEW 8.6.1, would be a few milliseconds behind the log file I generated in LabVIEW 2011.

 

To my surprise, that was not the case.

I'm streaming a simple array of U32 via DataSocket, and on both the send and receive sides I write a CSV file of <data>,<TickCount>.
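For illustration only (the numbers below are made up), a send-side log line and the matching receive-side line might look like:

17,1048576
17,1048581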

 

On the LabVIEW 8.6.1 side, it appears that I am receiving the data anywhere from 8 ms before I even send the data from LabVIEW 2011 to about 10 ms after.

In my code I take the Tick Count sample after I receive the data from the DataSocket. 

 

 

So, obviously, I'm not receiving data before I send it.

 

LV 8.6.1, LV 2011, and DataSocket are all running on one computer.

 

So, my question is:

What's up with the Tick Count (ms) function?

I always thought that this came from some counter register in the CPU that starts counting on power-up.

 

If anybody has a theory I'm happy to hear it. 

 

 


Engineering - The art of applied creativity  ~Theo Sutton
Message 1 of 18
(17,792 Views)

You have either found a potential Nobel Prize (effect before cause)

 

OR

 

the scheduling of the "Tick Count (ms)" calls is varying slightly. Some sequence structures may help force the order in which the code "clumps" are processed. Unless we force the order with data dependency or sequence structures, the clumps are passed to the OS scheduler and processed in the order it selects.

 

But with no images of your code, all of the above is speculation (aside from the Nobel Prize thing).

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 2 of 18
(17,777 Views)

Yeah, the tick count can't be relied on for anything other than time-difference measurements (e.g. subtract two tick counts to get a difference in ms... as long as it doesn't wrap around the full tick count value (2^32 ms, IIRC)).
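(For reference, assuming it really is a 32-bit millisecond count, the wrap period works out to 2^32 ms ≈ 4,294,967 s ≈ 49.7 days, so the rollover only matters for comparisons spanning several weeks.)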

If you want to compare times, use the 'Get Date/Time In Seconds' VI.

In the help it states: "The base reference time (millisecond zero) is undefined. That is, you cannot convert millisecond timer value to a real-world time or date." The fact that it says it's 'undefined' is pretty vague... I would have expected it to be the same value in both applications, but I absolutely wouldn't rely on it.


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 3 of 18
(17,759 Views)

@Sam_Sharp wrote:
Yeah, the tick count can't be relied on for anything other than time-difference measurements (e.g. subtract two tick counts to get a difference in ms... as long as it doesn't wrap around the full tick count value (2^32 ms, IIRC)).

If you want to compare times, use the 'Get Date/Time In Seconds' VI.

In the help it states: "The base reference time (millisecond zero) is undefined. That is, you cannot convert millisecond timer value to a real-world time or date."

Subtracting two U32s still works because of the way unsigned math handles overflow.
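A minimal C sketch (not LabVIEW, and the tick values are hypothetical) of why the subtraction still gives the right answer across a rollover:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical tick counts straddling the 2^32 ms rollover. */
    uint32_t start   = 4294967000u;   /* just before the counter wraps */
    uint32_t stop    = 500u;          /* just after the counter wraps  */
    uint32_t elapsed = stop - start;  /* modulo-2^32 math gives 796 ms */
    printf("elapsed = %u ms\n", (unsigned)elapsed);
    return 0;
}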

 

The Time Stamp does vary across different platforms.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 4 of 18
(17,754 Views)

Just to clarify, I'm not looking to get a real-world time or date.

 

 

Currently I have much less faith in the ms Tick Count than I previously had.

I don't know how it works. 

 

In today's computers, at the hardware level, there are multiple sources for getting a time and the elapsed time from event A to event B.

 

You have 8254 counter/timers that now reside in the chipset, and CPU clock-cycle counters (RDTSC).

 

There is also the High Precision Event Timer (HPET), which Windows Vista and newer apparently utilize for their newer TimeGet() family of function calls. But I can't find any documentation about how the older Win32 API calls get their time data in the newer OSes.
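If you want to poke at the Win32 side yourself, here is a minimal C sketch using QueryPerformanceCounter. Which hardware backs it (TSC, HPET, ACPI PM timer) is chosen by Windows, and I'm not claiming this is what LabVIEW calls internally; it's just a way to see the OS-exposed high-resolution counter:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);   /* counts per second  */
    QueryPerformanceCounter(&t0);
    Sleep(10);                          /* something to time  */
    QueryPerformanceCounter(&t1);
    double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                       / (double)freq.QuadPart;
    printf("elapsed = %.3f ms\n", ms);
    return 0;
}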

 

On top of that, you have tons of other timing options dependent on the chip manufacturer and the series of processor.

 

I'm just curious how accurate the 1 ms Tick Count really is. With newer processors varying their clock speed for power optimization nearly every second, I'm really curious as to where LabVIEW is getting its tick count.

 

Regards,

 

 


Engineering - The art of applied creativity  ~Theo Sutton
Message 5 of 18
(17,715 Views)

Your clarification doesn't make all that much sense. You're not specifically seeking the real date/time. We get that. But you are seeking a reference that can be used across applications and match. To that end, understanding the exact implementation of the tick count is clearly a wasted endeavor. You already know it doesn't have the same relative zero across the applications. As a result, it's not going to work for your task regardless of what the answers to your questions are.

 

With that, you're left looking for another route. There are really two routes I can come up with quickly. The first is the date/time approach that's already been suggested to you. The second relies on using Windows API calls to get the values of the registers you desire from Windows. In doing so, you'll run into the problems you've already mentioned with variable timing. That makes this option less appealing.

 

Where does that leave you? Get Date/Time In Seconds. You're back at this point. The unfortunate part here is that the resolution comes down to seconds. But you're also running on a non-deterministic system where the function calls have enough jitter that I'm not sure you'll get valuable information by seeking greater resolution.
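For what it's worth, if you did go the Windows API route, a wall-clock stamp that both applications on the same PC could log against might look like the C sketch below. This is only an illustration of the idea; I'm not claiming it's what LabVIEW's Get Date/Time In Seconds does internally, and its update granularity is still limited by the system timer:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    ULARGE_INTEGER stamp;
    GetSystemTimeAsFileTime(&ft);       /* UTC, in 100 ns units since 1601 */
    stamp.LowPart  = ft.dwLowDateTime;
    stamp.HighPart = ft.dwHighDateTime;
    printf("wall-clock stamp: %llu (x100 ns)\n",
           (unsigned long long)stamp.QuadPart);
    return 0;
}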

Message 6 of 18
(17,706 Views)

Ditto what natasftw wrote.

 

There are limits to the tick-count when you start working down in the ms range and obviously below.

 

Windows was never intended to be RT and only had to make the user think they were moving a mouse in pseudo-real-time. That means that below about 30 ms, it is up to the OS to determine exactly what happens when.
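If you want to see that limit for yourself, a small C sketch that spins on the Win32 GetTickCount() and reports how big the jumps are will show the effective update interval of the millisecond tick. This assumes the Win32 call; whether LabVIEW's Tick Count (ms) uses the same source is an assumption I'm not making here:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD last = GetTickCount();
    int changes = 0;
    while (changes < 10) {              /* watch ten updates of the counter */
        DWORD now = GetTickCount();
        if (now != last) {
            printf("tick advanced by %lu ms\n", (unsigned long)(now - last));
            last = now;
            changes++;
        }
    }
    return 0;
}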

 

If you really want to investigate transport times (which is what I think I read in your original post), then hardware timing will be required.

 

I once had a project that was intended to evaluate various message transport schemes, comparing serial to Ethernet to SCRAMNet (see here), with transport latency being one of the measurements.

 

We used two PXI chassis running RT, each with its own counter/timer card driven by a branched clock signal from a high-speed master clock widget, with equal-length cables used to share the clock. Even using high-end clock rates, the transport was so fast that in software we could only measure average times, since multiple packets could move during a single clock cycle. But we were able to come up with repeatable numbers.

 

Spoiler
The transfers between chassis using SCRAMNet were so fast that we had to take into consideration the call overhead of the sub-VIs used to query the clock count. While doing that, we discovered that sub-VIs with "extra connections" on the icon connector took longer to call than sub-VIs with fewer. If memory serves me, I want to say it was on the order of 7 femtoseconds per connection on the icon connector!

 

 

So if you actually want to do measurements, you may want to consider hardware timing.

 

Re: details of the ms timer

 

If I wanted to dig into that, I would be looking at very old documentation from the DOS days and the early BIOS, since the ms timer goes back that far, and what you find on a modern OS is carried over from that time.

 

Edit:

A quick Google search turned up a first hit pointing at Microsoft; it can be found here.

 

Trying to help, but most likely missing the mark...

 

Ben 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 7 of 18
(17,690 Views)

Here is some discussion on timing and OS limitations, courtesy of Jeff Bohrer:

 

https://forums.ni.com/t5/forums/v3_1/forumtopicpage/board-id/170/thread-id/635808/page/1

 

-AK2DM

~~~~~~~~~~~~~~~~~~~~~~~~~~
"It’s the questions that drive us.”
~~~~~~~~~~~~~~~~~~~~~~~~~~
Message 8 of 18
(17,674 Views)
Solution
Accepted by topic author MrQuestion

@MrQuestion wrote:

I would like to know how the Tick Count (ms) function works in a PC and in a Real-Time environment.

 

On a Windows machine, is this an OS API call? Does Windows provide LabVIEW with the tick count value?

Can error be introduced on multi-core machines where the cores can run at different speeds (dynamic power saving)?

  

So, obviously, I'm not receiving data before I send it.

 

LV 8.6.1, LV 2011, and DataSocket are all running on one computer.

 

 

If anybody has a theory I'm happy to hear it.  


OK, you raise some great points worthy of exploration. And some misinformation seems to exist about timing and timers in general, so I'll try my best to add to this discussion.

 

First, let me address the potential for "time warps" in data transfer, or apparent negative-time inter-process communication (IPC). It cannot happen, didn't happen, and will never happen with currently available Si-based hardware that can be operated near room temperature.

Spoiler
For completeness, all bets are off for potential cause/effect synchronicity inversion at quantum size scales, at temperatures below a few millikelvin, or at temperatures above a few dozen kilokelvin. If your PC is operating in those extremes, well, that Nobel Prize is all yours!

So, let's first follow the admonishment of one of science's favorite Franciscan friars, William of Ockham. LabVIEW 2011 has a compiler with optimizations built into it, written in LabVIEW. LabVIEW 8.6.1 has an entirely different compiler. The simplest hypothesis suggests that you uncovered a key difference between the two. This is not incontrovertible proof, but it is reasonable enough in theory that I won't spend longer on the point.

 

So, what really does happen, and what does it, when "Get Tick Count" is called? The exact answer is "Who cares?" LabVIEW abstracts that so you don't need to know. Yet a more complex answer can be inferred. The msec timer exists in hardware, so some kernel capability must exist to update an address in kernel space at an interval determined by the kernel and at the priority assigned to that kernel capability. For most systems this process is the responsibility of a hypervisor loaded at boot time, as defined by the core kernel after the underlying hardware is identified either statically or dynamically. Yes, that means that not only does the silicon affect the process, but the kernel may also change how we access the hardware. YUCK! However, it is safe to assume that some unvirtualized address in system space is what needs to be accessed to get a return value for "Tick Count" in LabVIEW (or any other application). Since the data we want is in kernel space, only the kernel can access it; therefore, something must trigger a context switch (mode switch) from user space (where our app is running) to kernel space and transfer a copy of the data back to user space.

 

Or, to put it simply (all joking aside): the LabVIEW Run-Time Engine (application context) asks the OS to ask the kernel to request a value from the hypervisor (loaded at boot time, or dynamically in the case of multiple virtualized OSes) that is selected as a driver for the hardware that keeps track of time.

 

With embedded systems and RTOSes, that process is subject to greater optimization and is consequently both more hardware-dependent and less fault-tolerant. In non-deterministic systems it is less optimized, but it is more modular and offers greater system stability and security (fault tolerance).

 

Now that you have read that, try to stuff all the grey matter back between your ears and thank the kernel and OS developers for abstracting all of that and giving us some means of getting a U32 value related in any way, shape, or form to the approximate number of milliseconds since the last time the PC learned it had a timer. :D


"Should be" isn't "Is" -Jay
Message 9 of 18
(17,647 Views)

You left out the "PAL" (Processor Abstraction Layer), Jeff.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 10 of 18
(17,639 Views)