Timestamp Formatting Truncates Instead of Rounds


I wonder if this has anything to do with LabVIEW using banker's rounding?

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 11 of 17
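(A quick aside on the question above: banker's rounding, i.e. round-half-to-even, only differs from ordinary round-half-up at exact .5 ties, so on its own it wouldn't produce consistent truncation. A minimal Python sketch, since Python's built-in round() happens to use the same tie-breaking rule:)

```python
# Banker's rounding (round-half-to-even): exact .5 ties go to the
# nearest EVEN integer instead of always rounding up.
ties = [0.5, 1.5, 2.5, 3.5]
print([round(t) for t in ties])  # -> [0, 2, 2, 4]
```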

@TomOrr0W wrote:

The post from billko made me go back and look again at the code I ended up with to see if it would be a better fit.  While typing in timestamps, I noticed the below behavior.

 

It turns out the Timestamp doesn't actually Truncate. There are certain individual times where a timestamp just less than that time rounds up to it instead of down to the time 1 ms prior. See the attached VI.


You might be running into something I found a few months ago, when we were discussing other weirdness related to timestamps. See my post here: https://forums.ni.com/t5/LabVIEW/TimeStamp-to-Text-Insanity/m-p/3746249/highlight/true#M1054587

 

Basically I found that if you configure a timestamp indicator or control to show more than 19 digits after the decimal point, what it shows for the fractional part is basically garbage and not related to the actual value in the timestamp. I think it's a bug in LabVIEW, and the workaround is to never try to display more than 19 fractional digits.

 

Edit: Looking a little more closely at what's happening, the amount LabVIEW adds to the value in the timestamp before displaying it seems to depend on the number of digits displayed:

- 20 digits: adds approximately 1/8 (0.12499...) = 2^-3 seconds
- 21 digits: adds 1/64 (0.01562499...) = 2^-6
- 22 digits: adds 1/1024 = 2^-10
- 23 digits: adds 1/8192 = 2^-13
- 24 digits: 2^-16; 25 digits: 2^-20; 26 digits: 2^-23; and so on.

Except for the jump from 2^-6 to 2^-10, it seems to follow a pattern. However, if you go all the way up to displaying 39 digits, suddenly it adds almost 1/4 (0.24999...) = 2^-2 to the value, 40 digits adds 1/32 = 2^-5, 41 adds 1/512 = 2^-9, 42 adds 1/4096 = 2^-12, etc. The moral of the story is: don't try to display more than 19 digits of precision on a timestamp, or it'll have something extra added to it before it's displayed.
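The offsets above can be tabulated directly; a small Python sketch, where the digits-to-offset mapping is just the empirical observation from this post, not documented LabVIEW behavior:

```python
# Observed extra offset added before display, keyed by digits shown
# (empirical values from the post above, not documented behavior).
offsets = {20: 2**-3, 21: 2**-6, 22: 2**-10, 23: 2**-13,
           24: 2**-16, 25: 2**-20, 26: 2**-23,
           39: 2**-2, 40: 2**-5, 41: 2**-9, 42: 2**-12}
for digits, off in sorted(offsets.items()):
    print(f"{digits} digits -> +{off:.10f} s")  # e.g. 20 digits -> +0.1250000000 s
```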

Message 12 of 17

@TomOrr0W wrote:

The post from billko made me go back and look again at the code I ended up with to see if it would be a better fit.  While typing in timestamps, I noticed the below behavior.

 

It turns out the Timestamp doesn't actually Truncate. There are certain individual times where a timestamp just less than that time rounds up to it instead of down to the time 1 ms prior. See the attached VI.


The problem doesn't really relate to the Timestamp datatype. The datatype is simply far more granular than the theoretical maximum resolution of the SI second as it is currently defined.

1.087827757077667e-10 s is the period of one transition between the two hyperfine levels of the ground state of the cesium-133 atom, and you only get an integer number of transitions! Meanwhile, a U64 divides a second into about 2e19 parts. Now, if you have a bunch of cesium clocks, you can average them, weighting the individual clocks based on a whole bunch of variables (somewhere in France), and derive about 1e-12 seconds as the long-term accuracy of the SI second; but that takes a large group of dedicated physicists.
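To put numbers on that comparison, a quick Python sketch (assuming the full 2^64 granularity of a U64 fraction):

```python
# SI second: 9,192,631,770 periods of the Cs-133 hyperfine transition
cs_period = 1 / 9_192_631_770     # ~1.0878e-10 s per transition period
u64_resolution = 2 ** -64         # ~5.42e-20 s per count of a U64 fraction
print(f"Cs period:         {cs_period:.6e} s")
print(f"U64 resolution:    {u64_resolution:.6e} s")
print(f"Counts per period: {cs_period / u64_resolution:.3e}")  # ~2e9
```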

 

(Where is Block when you need a discourse on atomic fountains?- yes, that's what his avatar is- an atomic fountain)


"Should be" isn't "Is" -Jay
Message 13 of 17

@JÞB wrote: 

(Where is Block when you need a discourse on atomic fountains?- yes, that's what his avatar is- an atomic fountain)


Did you mean "Blokk"?  His avatar looks like a puppy, to me ...

 

BS

Message 14 of 17

@Bob_Schor wrote:

@JÞB wrote: 

(Where is Block when you need a discourse on atomic fountains?- yes, that's what his avatar is- an atomic fountain)


Did you mean "Blokk"?  His avatar looks like a puppy, to me ...

 

BS


IBD, I guess he changed it. :O


"Should be" isn't "Is" -Jay
Message 15 of 17

Hi arteitle,

 

Given the 32 bits your thread mentions LabVIEW actually using, you should only count on displaying 9-10 meaningful digits (nanoseconds to hundreds of picoseconds).
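As a sanity check on the 9-10 digit figure, a quick sketch assuming 32 usable fractional bits:

```python
import math

resolution = 2 ** -32           # smallest step with 32 fractional bits
digits = math.log10(2 ** 32)    # decimal digits those bits can carry
print(f"Resolution:        {resolution:.3e} s")  # ~2.33e-10 s, sub-nanosecond
print(f"Meaningful digits: {digits:.2f}")        # ~9.63, i.e. 9-10 digits
```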

 

This should never be a problem for me, as I don't display (or use string formatting codes for) anything beyond 3 digits in normal work. That large a number of digits would never be displayed to a user.

 

Today's post about it not always truncating is more a curiosity than anything.  Adding half a millisecond before displaying the time will work fine even if the result is a couple tenths of a millisecond above or below the boundary.
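The add-half-a-millisecond workaround can be sketched as follows; round_to_ms is a hypothetical helper for illustration, not the actual VI:

```python
import math

def round_to_ms(seconds):
    """Round to the nearest millisecond by adding half a millisecond
    and truncating (hypothetical helper sketching the workaround)."""
    return math.floor(seconds * 1000 + 0.5) / 1000

print(round_to_ms(1.4996))  # -> 1.5   (just under a boundary rounds up)
print(round_to_ms(1.4994))  # -> 1.499 (just under the midpoint rounds down)
```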

 

I wonder if the weirdness you mention is computer-specific.  I don't see any strangeness of that nature until I get to 78 digits displayed with LabVIEW 2015 on a Windows 10 VM.

Message 16 of 17

The fractional part is stored in 64 bits and, as the documentation says, that provides 19 digits of precision. You're right that the misbehavior I described is version-specific: I only see it on the 64-bit version of LabVIEW 2017, not on the 32-bit version.
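The 19-digit figure follows directly from the width of the fraction:

```python
import math

# A 64-bit fraction has 2^64 distinct values, which resolves
# log10(2^64) ~= 19.27 decimal digits of precision.
print(2 ** 64)              # -> 18446744073709551616
print(math.log10(2 ** 64))  # ~19.27, hence "19 digits of precision"
```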

Message 17 of 17