
TimeStamp to Text = Insanity

Solved!

OK, Blokk, that works! Thanks.

 

That gets me where I want to go, though I still don't understand why the problem exists with TimeStamp import.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 11 of 25

If what I'm about to say is wrong, let me know. But I have a memory of reading a thread (I haven't had a chance to search for it yet) where there was a similar issue with some LabVIEW primitives using the EXT extended precision datatype.

 

I wonder if this is related. A timestamp is somewhat like its own special version of an extended-precision number. Perhaps the underlying code for some primitives doesn't handle all the bits that extended-precision values and timestamps have.

 

If you typecast the timestamp to a U8 array before and after the conversion and view it in binary or hexadecimal, I wonder if you can see oddities in the binary value, such as a bunch of bits having been coerced to zero?

 

Actually, you do.

 

[Snippet attached: Timestamp conversion]
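
For anyone trying this in a text language rather than on the diagram: NI documents the timestamp as a 128-bit fixed-point value, an I64 of whole seconds since 1904-01-01 00:00:00 UTC followed by a U64 fraction of a second. The sketch below is a rough Python stand-in for the typecast-to-U8-array trick, just to show the byte layout; the example values and helper names are mine, not taken from the snippet above.

```python
import struct

def pack_timestamp(seconds: int, fraction: int) -> bytes:
    """Model of the 128-bit timestamp: i64 whole seconds since the 1904 epoch,
    followed by a u64 fraction of a second (fraction / 2**64).  Big-endian here,
    which matches how LabVIEW flattens data, but treat this as a model only."""
    return struct.pack(">qQ", seconds, fraction)

# Arbitrary example: a ~2016-era seconds count, fraction = 0.75 s
ts_bytes = pack_timestamp(3_550_000_000, 3 << 62)
print(" ".join(f"{b:02X}" for b in ts_bytes))
# 00 00 00 00 D3 98 B3 80 C0 00 00 00 00 00 00 00
# bytes 0-7: whole seconds (i64); bytes 8-15: fraction (u64).
# Only the top fraction bits are used here, so the low fraction bytes are all zero.
```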

Message 12 of 25

Sure enough, it looks like it's manufacturing some bogus 0x00FF data to tack onto the end.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 13 of 25

What confuses me about Blokk's Solution (which I totally respect!) is the notion of "lost precision", that is, the ability to take a 128-bit TimeStamp, reduce it to a finite string (with 79 decimal places), convert this back to a 64-bit Dbl (which, as I recall, can express about 16-17 decimals, some of which have to be used for "seconds since Date Zero"), and have the comparisons yield Equality.

 

Ah!  Maybe they are "cheating", and are doing the comparison by "Convert TimeStamp to Dbl, compare Dbls" as opposed to "Compare two 128-bit quantities" ...

 

Note that the TimeStamp Help says "Use the To Double Precision Float function to convert the timestamp value to a lower precision, floating-point number".  Once precision is lost, it's hard to get back ...
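
For what it's worth, here is a back-of-the-envelope count of the bits involved (my own arithmetic, not anything from the Help): a DBL carries 53 significant bits, and the whole seconds since 1904 for a present-day timestamp already need about 32 of them, which leaves roughly 21 bits, about 6-7 decimal digits, for the fraction. So if the original timestamp never held more fractional information than that, a trip through DBL can still come back Equal.

```python
import math

# Rough present-day seconds count since the 1904 epoch (an assumption, ~2016)
SECONDS_SINCE_1904 = 3_550_000_000
DBL_MANTISSA_BITS = 53                 # IEEE 754 double precision

bits_for_whole_seconds = math.ceil(math.log2(SECONDS_SINCE_1904))      # ~32
bits_left_for_fraction = DBL_MANTISSA_BITS - bits_for_whole_seconds    # ~21

print(bits_for_whole_seconds, bits_left_for_fraction)
print(f"fraction resolution in a DBL ~ {2.0**-bits_left_for_fraction:.1e} s")              # ~4.8e-07 s
print(f"usable fractional decimal digits ~ {bits_left_for_fraction * math.log10(2):.1f}")  # ~6.3
```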

 

Bob "Skeptical" Schor 

Message 14 of 25

Maybe they are "cheating", and are doing the comparison by "Convert TimeStamp to Dbl, compare Dbls"

 

 

--- I doubt that, because then comparing a cluster containing a timestamp wouldn't work unless they unbundled the thing just to do that conversion.

 

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 15 of 25

P.S. -- if you really want to drive yourself nuts

 

Speaking just for myself, Bob, I don't really come here to find NEW ways to drive myself nuts...

 

:)

 

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 16 of 25

Try it this way:

 

Instead of writing a "human readable" timestamp, write the double to your spreadsheet; then in Excel you can simply:

  1. Highlight the column
  2. Right click
  3. Select Format Cells
  4. Select Time or date
  5. Select the time or date format you want.

Use this to convert the LabVIEW timestamp to an OLE (Excel) timestamp:

[Image: Timeconvert.png]
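
For anyone reading without the picture: the arithmetic is just a rescale from the LabVIEW epoch (seconds since 1904-01-01 00:00 UTC) to Excel's serial-date units (days since its 1900-era day zero). A rough text sketch of the same idea follows (Python; the 86400 s/day and 1462-day offset are the standard figures, but the time-zone handling here is only a placeholder, so check it against your own data):

```python
from datetime import datetime, timezone

SECONDS_PER_DAY = 86_400
# Days from Excel's day zero (1899-12-30, the OLE automation epoch) to the
# LabVIEW epoch (1904-01-01); both serial systems put that date at 1462.
LV_EPOCH_AS_EXCEL_SERIAL = 1462.0

def labview_seconds_to_excel_serial(lv_seconds: float, utc_offset_hours: float = 0.0) -> float:
    """Rescale a LabVIEW timestamp (as a DBL: seconds since 1904-01-01 00:00 UTC)
    into an Excel serial date.  Pass a UTC offset if the cell should show local
    wall-clock time; no attempt is made at DST handling."""
    local_seconds = lv_seconds + utc_offset_hours * 3600.0
    return local_seconds / SECONDS_PER_DAY + LV_EPOCH_AS_EXCEL_SERIAL

# Example: 2016-05-01 00:00 UTC expressed as LabVIEW seconds
lv = (datetime(2016, 5, 1, tzinfo=timezone.utc)
      - datetime(1904, 1, 1, tzinfo=timezone.utc)).total_seconds()
print(labview_seconds_to_excel_serial(lv))   # 42491.0 -> formats as 2016-05-01 in Excel
```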

 

========================
=== Engineer Ambiguously ===
========================
Message 17 of 25

@Bob_Schor wrote:

What confuses me about Blokk's Solution (which I totally respect!) is the notion of "lost precision", that is, the ability to take a 128-bit Time Stamp, reduce it to a finite string (with 79 decimal places), convert this back to a 64-bit Dbl (which, as I recall, can express about 16-17 decimals, some of which have to be used for "seconds since Date Zero"), and have the comparisons yield Equality.

 

Ah!  Maybe they are "cheating", and are doing the comparison by "Convert TimeStamp to Dbl, compare Dbls" as opposed to "Compare two 128-bit quantities" ...

 

Note that the Time Stamp Help says "Use the To Double Precision Float function to convert the timestamp value to a lower precision, floating-point number".  Once precision is lost, it's hard to get back ...

 

Bob "Skeptical" Schor 


But I don't think that is the case here, at least for the values of the time stamps Steve is using. If a conversion to double caused a loss of precision that affected his original time stamp, the conversion from double back to a timestamp shouldn't be able to add that precision back in. It must be that the Scan From String on the original timestamp is failing to restore it to the same precision that the scan-from-string-to-double path manages.

 

When I did my test to look at the raw bits, I expected to see some bits that exist in the original timestamp get coerced to zero in the conversion back to a timestamp (but not in the workaround Blokk showed). What surprised me was the number of 0 bits that were in there to begin with, and that they got coerced to 1's. Actually, the last 43 bits (5 bytes plus 3 bits) were coerced to 1's. That means only 85 bits of the timestamp were returned to their original values using the timestamp conversion, but all 128 (I'm not sure of the exact count) were when going through To Double. I added a binary representation for the double, and there is no real matching pattern. So really only about 64 bits of the timestamp are being used for the time Blokk provided. Going through a double will lose precision, probably in the lowest binary fractions, depending on how many whole seconds have passed since the epoch.
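
That 43 lines up with a simple mantissa budget: a DBL has 53 significant bits, the ~32 bits of whole seconds use most of them, and the remaining ~21 bits of fraction mean the low 64 - 21 = 43 bits of the U64 fraction field simply have nowhere to go. A small model of the round trip (Python; this uses the documented i64-seconds + u64-fraction layout and a naive to/from-double conversion, not whatever the LabVIEW primitives actually do internally):

```python
import math

seconds, fraction = 3_550_000_000, 0x123456789ABCDEF0   # arbitrary test values

d = seconds + fraction / 2.0**64               # a naive "To Double Precision Float"
rt_fraction = round((d - seconds) * 2.0**64)   # and a naive way back to a u64 fraction

print(hex(fraction), "->", hex(rt_fraction))
# 0x123456789abcdef0 -> 0x1234580000000000   (the low ~43 bits are rounded away)

# The DBL's step size at this magnitude, expressed in 2**-64 s fraction units:
lost_bits = round(math.log2(math.ulp(d) * 2.0**64))
print(f"fraction bits a DBL cannot carry at this date: {lost_bits}")   # 43
```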

 

I also added an array showing

 

(I'm uploading my snippet again; I realized I made a formatting error on the bit display when I set it to show 16 bits on 8-bit numbers.)

[Snippet attached: Timestamp conversion]

 

Message 18 of 25

RavensFan,

 

Thanks for providing that Snippet. I played a bit with it, and discovered that 72 of those post-decimal digits are superfluous. If you change the format code to %<%.7X %x>T, giving you a much shorter string (by 72 characters), you still get the original TimeStamp value that shows "Equality" in the comparison! There's something weird going on "under the covers" ...

 

Bob "Of the Essence" Schor

Message 19 of 25

I'm finding that if I configure a timestamp control or indicator to show more than 19 digits after the decimal point, I get garbage. Check this out:

[Image: timestamp weirdness.png]

 

I'm taking a cluster of an i64 and u64 and casting it into a timestamp. The example with 19 digits is correct, since my time zone is UTC-5. But if I show any more digits than that, I get garbage after the decimal point. With 20 digits the fractional part shows very close to 1/8, with 21 digits it's close to 1/64, and with 22 digits it's 1/1024. For a control with more than 19 digits displayed, if I type 00:00:01.0 for the time, it gets replaced with garbage after the decimal point. This seems like a bug with timestamp controls.

 

edit: replacing my snippet because I realized I mistakenly labeled the integers as 16-bit. They are in fact 64-bit.

 

edit 2: I see the LabVIEW help addresses this somewhat. Under "Displaying Higher Digits of Precision in a Time Stamp Control" it says, "Enter the digits of precision in the Digits field. Although you can enter any number in this field, LabVIEW can accurately store up to 19 digits of precision in the fractions of a second." It seems like we should be prevented from setting this higher than 19, then, if it's going to break the display.
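
For what it's worth, that 19 matches the width of the fraction field itself: the fractional seconds are a U64, so the finest resolution is 2^-64 of a second, which takes about 19-20 decimal digits to write out. Anything a control displays beyond that can't be coming from real stored data. A quick check of the arithmetic (Python, just the numbers, nothing LabVIEW-specific):

```python
import math

FRACTION_BITS = 64                       # u64 fractional-seconds field of the timestamp
resolution = 2.0**-FRACTION_BITS         # finest representable fraction of a second
digits = FRACTION_BITS * math.log10(2)   # decimal digits needed to express that step

print(f"resolution = {resolution:.3e} s")              # 5.421e-20 s
print(f"meaningful fractional digits = {digits:.1f}")  # 19.3 -- hence the 19 in the help
```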

Message 20 of 25