TimeStamp to Text = Insanity


If you change the format code to %<%.7X %x>T, giving you a much shorter string (shorter by 72 characters), you still get the original TimeStamp value, and the comparison shows "Equality".

 

The "79" in my post came as a compromise between the 250 that it will actually produce, and the desire to show the whole thing on that pic.  I'm aware that 79 useful real digits are not possible, coming out of a TimeStamp.

 

I started with 17, though.  As I said, to EXPORT / IMPORT a DBL, you need to use 17 digits to make sure you get back everything you started with (barring some weird NaNs that won't translate into text).

So, that was my starting point.  (A U64 will produce 20 digits, so perhaps I should have started there).
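To make the 17-digit rule concrete, here is a minimal C sketch (my own example, not LabVIEW code) that round-trips a DBL through text with %.17g, which matches the DBL_DECIMAL_DIG guarantee for IEEE 754 doubles:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* A DBL whose binary value has no short decimal form. */
    double original = 0.1 + 0.2;   /* 0.30000000000000004... */

    /* %.17g prints enough significant digits to round-trip any
       finite IEEE 754 double (DBL_DECIMAL_DIG == 17 in float.h). */
    char text[64];
    snprintf(text, sizeof text, "%.17g", original);

    double restored = strtod(text, NULL);

    printf("text      : %s\n", text);
    printf("round-trip: %s\n", restored == original ? "EQUAL" : "NOT EQUAL");
    return 0;
}

With only 16 digits the last line can read NOT EQUAL for some values; with 17 it never should.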

 

 

In ANY case, even with 19 identical digits in the text, it failed to produce the correct answer.

 

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 21 of 25

An unpublished fact about the timestamp is that LabVIEW really only uses the uppermost 32 bits of the fractional part. That already provides about 230 picoseconds of resolution (2^-32 s), and there is currently no computer that can measure timing to that precision.

 

Basically the official type looks like this:

 

typedef struct
{
    int64_t  seconds;     /* whole seconds relative to the LabVIEW epoch (1904-01-01, UTC) */
    uint64_t fractional;  /* fraction of a second, in units of 2^-64 s */
} Timestamp;

But only the highest 32 bits of the fractional part are really used (highest because it is a fraction rather than a normal number). How the other 32 bits get messed up like that I'm not sure; they seem to be leftovers from an intermediate calculation, or they may simply never be initialized properly.
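To put numbers on that, here's a small C sketch (the struct is from above; the masking is just my illustration of what "only the top 32 bits" means, not an NI implementation):

#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t  seconds;     /* whole seconds relative to the epoch       */
    uint64_t fractional;  /* fraction of a second, in units of 2^-64 s */
} Timestamp;

int main(void)
{
    /* The top 32 bits of the fraction give steps of 2^-32 s: */
    printf("resolution = %.3g s\n", 1.0 / 4294967296.0);  /* ~2.33e-10 s */

    /* Zeroing the unused low 32 bits (a hypothetical cleanup step): */
    Timestamp t = { 0, 0x8000000012345678ULL };  /* 0.5 s plus garbage */
    t.fractional &= 0xFFFFFFFF00000000ULL;
    printf("fraction   = 0x%016llX\n", (unsigned long long)t.fractional);
    return 0;
}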

 

And yes, LabVIEW for Windows uses little-endian format. Even though the flattened data may make it look like the upper 32 bits of the fractional part are garbage, you have to take into account that the Typecast function always produces big-endian output, including for the two 64-bit integers embedded in the timestamp.
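For illustration, this is what a big-endian flatten of the struct looks like in C (put_be64 is my helper name, not an NI API; the values are arbitrary):

#include <stdint.h>
#include <stdio.h>

/* Write a 64-bit value most-significant byte first, the byte order
   Typecast produces regardless of the host CPU's native order. */
static void put_be64(uint8_t *dst, uint64_t v)
{
    for (int i = 0; i < 8; i++)
        dst[i] = (uint8_t)(v >> (56 - 8 * i));
}

int main(void)
{
    int64_t  seconds    = 3000000000LL;          /* some date     */
    uint64_t fractional = 0x8000000000000000ULL; /* exactly 0.5 s */

    uint8_t flat[16];
    put_be64(flat,     (uint64_t)seconds);
    put_be64(flat + 8, fractional);

    for (int i = 0; i < 16; i++)
        printf("%02X ", flat[i]);   /* MSB first, unlike x86 memory */
    printf("\n");
    return 0;
}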

Rolf Kalbermatter
Message 22 of 25

But only the highest 32 bits in the fractional part are really used

That seems to be true.  It's also true that only the LOWER 32 bits in the I64 are currently used.

The range is from the year 1600 to the year 3000, and any time I enter is coerced to that range and occupies only 64 bits - the bottom 32 of the I64 and the top 32 of the U64.

 

FACT:  the TimeStamp will hold all 128 bits.  If you cast an {I64+U64} pair into a TimeStamp, and then back, every bit is preserved.  I just tested this.

 

FACT:  comparing a TimeStamp whose low-order 32 bits are 00000000 to one whose low-order 32 bits are 00000001 results in NOT EQUAL.  Sensible.

 

My guess:

--- They got bit by not having enough resolution in LabVIEW 1 for a TimeStamp (I think it was a double) and said "we don't want THAT to happen again".

--- They made the transition to I64+U64, leaving PLENTY of room for expansion in both directions (a larger year range and finer resolution).

--- Currently, they only use the top 32 bits of the fraction when using CURRENT DATE/TIME and such functions.

--- The TIMESTAMP-TO-DBL operation knows this rule and disregards the low bits.  Perhaps they define it as a structure like {I32-U64-I32} so that the important bits are in the middle.

--- The SCAN FROM STRING operation knows this and disregards all bits beyond the 19th or so.

--- The FORMAT INTO STRING does NOT know this, and generates digits using arbitrary data for as long as you care to use them, up to 255 or something close.

--- These arbitrary crap-digits won't likely agree with the real ones, and it shows up as unequal (see the sketch below).
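To see why digits past the 19th or 20th can never agree, here is a C sketch that prints the exact decimal expansion of a 64-bit binary fraction (it uses the GCC/Clang unsigned __int128 extension; the values are mine, for illustration):

#include <stdint.h>
#include <stdio.h>

/* Exact decimal digits of frac / 2^64, one at a time: multiply by 10
   and peel off the carry above bit 63. */
static void frac_digits(uint64_t frac, char *out, int ndigits)
{
    for (int i = 0; i < ndigits; i++) {
        unsigned __int128 p = (unsigned __int128)frac * 10u;
        out[i] = (char)('0' + (unsigned)(p >> 64));  /* carry digit */
        frac = (uint64_t)p;                          /* remainder   */
    }
    out[ndigits] = '\0';
}

int main(void)
{
    char a[32], b[32];
    frac_digits(0x8000000000000000ULL, a, 25);  /* exactly 0.5 */
    frac_digits(0x8000000000000001ULL, b, 25);  /* 0.5 + 2^-64 */
    printf("0.%s\n0.%s\n", a, b);
    return 0;
}

The two strings agree through the 19th digit and diverge at the 20th, because 2^-64 is about 5.4e-20. Any digit beyond that position is describing the garbage low bits, not real time.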

 

Steve Bird
Culverson Software

Message 23 of 25

I'd say it is really a more complicated situation once you start talking about timestamps, doubles, floating-point representation, whether 0.1 = 0.1 in binary, comparing doubles for equality, and so on.

 

We work in the decimal world: 0.12345.

Computers work in the binary world: 1, 2, 4, 8 ... and 1/2, 1/4, 1/8, 1/16.

Double-precision floating point uses a special assignment of bits to represent real numbers in a binary world: one bit for the sign, eleven bits for the exponent (stored with a bias, which takes care of the exponent's sign), and 52 bits for the mantissa.  You can google that to get the details:  https://en.wikipedia.org/wiki/IEEE_754  (The table and graph under Basic and Interchange formats are particularly interesting and help explain what I'm describing below.)
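Here's a quick C sketch of that binary64 layout (the value 0.1 is just an example):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    double d = 0.1;   /* not exactly representable in binary */

    /* View the raw 64 bits of an IEEE 754 binary64 value. */
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);

    uint64_t sign     = bits >> 63;                 /*  1 bit          */
    uint64_t exponent = (bits >> 52) & 0x7FF;       /* 11 bits, biased */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;  /* 52 bits         */

    printf("sign=%llu  exponent=%llu (unbiased %lld)  mantissa=0x%013llX\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (long long)((int64_t)exponent - 1023),
           (unsigned long long)mantissa);
    return 0;
}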

 

Timestamps are a special hybrid: 16 bytes / 128 bits, and like you said, a combination of an I64 and a U64.  Those two parts are both binary integers, but they come from splitting a time value into its integer and fractional parts at the point.  So the I64 is the number of whole seconds since the epoch (or before it, if negative), and the U64 is the fraction: a second is broken into 2^64 equal parts, and the U64 counts how many of those have elapsed since the previous whole second.
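A sketch of that split in C (from_seconds is my name, and this is not necessarily how NI implements it; compile with -lm for modf):

#include <math.h>
#include <stdint.h>
#include <stdio.h>

typedef struct
{
    int64_t  seconds;     /* whole seconds since the epoch            */
    uint64_t fractional;  /* count of 2^-64 s slices past that second */
} Timestamp;

static Timestamp from_seconds(double t)
{
    Timestamp ts;
    double whole;
    double frac = modf(t, &whole);               /* split at the point */
    if (frac < 0) { whole -= 1.0; frac += 1.0; } /* keep frac in [0,1) */
    ts.seconds = (int64_t)whole;
    /* Scale the fraction by 2^64; a real implementation would guard
       against frac rounding up to exactly 1.0 here. */
    ts.fractional = (uint64_t)(frac * 18446744073709551616.0);
    return ts;
}

int main(void)
{
    Timestamp ts = from_seconds(3.25);
    printf("seconds=%lld  fractional=0x%016llX\n",
           (long long)ts.seconds, (unsigned long long)ts.fractional);
    return 0;
}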

 

LabVIEW knows how to manipulate the native timestamps.  But convert one to a double and you've eliminated half the bits you have available (64 rather than 128).  You really do lose precision at the tiniest fractions of a second.  You are converting to a true binary floating-point number, and the bits available for the sign, mantissa, and exponent determine how much precision the new double has: a single entity that represents the number of seconds since the epoch, with the integer and fractional parts blended together.  It is basically a binary version of scientific notation, and the number of significant digits is limited.  The smallest increment you can represent is a much larger unit of time now than it was in 1904, and that unit will keep getting larger as you move into the future.  As the value of the exponent grows, the least significant bit becomes a larger unit of time.
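You can watch that happen with nextafter, which gives the gap to the next representable double (compile with -lm; the date arithmetic is approximate):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Approximate seconds since the 1904 epoch at a few dates. */
    double t1904 = 1.0;                        /* one second in */
    double t2024 = 120.0  * 365.25 * 86400.0;  /* ~3.79e9 s     */
    double t3000 = 1096.0 * 365.25 * 86400.0;  /* ~3.46e10 s    */

    /* Gap to the next representable double = the smallest time step
       a DBL timestamp can express at that date. */
    printf("step near 1904: %g s\n", nextafter(t1904, INFINITY) - t1904);
    printf("step near 2024: %g s\n", nextafter(t2024, INFINITY) - t2024);
    printf("step near 3000: %g s\n", nextafter(t3000, INFINITY) - t3000);
    return 0;
}

Near the epoch the step is around 2e-16 s; by 2024 it has grown to roughly half a microsecond; by the year 3000 it is several microseconds.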

 

For double to timestamp and back, the loss of precision doesn't seem to be a huge deal for real-world timestamps and the smallest fractions of a second we care about.  And Steve's timestamp-to-string-to-double-to-timestamp path works for him out to a fairly large number of digits in the string.  But something must not be quite right in the timestamp-to-string-to-timestamp path: the string-to-timestamp conversion is failing, forcing some bits to 1 rather than 0, which changes the string representation at a much smaller number of digits, so that path does not work for him.

 

 

Message 24 of 25

Tech Support finally got back with an acknowledgement and CAR #685375.

 

Steve Bird
Culverson Software

Message 25 of 25