timestamp to bytes conversion only 6 bytes

I am trying to convert a double into 8 bytes.

The double is the number of seconds past midnight.

 

When I run the VI, the two least significant bytes of the 8-byte result are always 0.

Yet, when I hard code a double, all 8 bytes have values.

What am I doing wrong?

 

EDIT:  I am attaching a second VI where I am calling the .NET function I am trying to replicate.
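The VIs themselves can't be shown inline, but the .NET function being replicated is presumably something like `BitConverter.GetBytes(double)` — an assumption, since the post doesn't name it. A minimal Python sketch of the same double-to-8-bytes conversion (shown big-endian; `BitConverter` on x86 would return the reversed order):

```python
import struct

def double_to_bytes(seconds: float) -> bytes:
    """Return the 8-byte IEEE 754 representation of a double, big-endian."""
    return struct.pack('>d', seconds)

# Example: 12:34:56.789 past midnight
t = 12 * 3600 + 34 * 60 + 56.789
print(double_to_bytes(t).hex())
```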

Message 1 of 10

Your VI was a bit confusing.  Is this what you were after?

time.png

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 2 of 10

Oops, misread your post.  Still, you can probably use most of the code.

Bill
CLD
Message 3 of 10

Using Flatten To String still results in the two bytes being 0.

Not sure why.

Message 4 of 10

I've just used an online converter to convert the integer 86400 (the number of seconds in a day) to a double-precision real. The least significant two bytes ARE zero.

 

Rod.
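As a quick cross-check in Python (any language that exposes the raw IEEE 754 bits would do): 86400 needs only 17 significant bits, so the low bytes of its 52-bit mantissa are necessarily zero.

```python
import struct

# 86400 = 2^16 + 2^14 + 2^12 + 2^8 + 2^7 needs only 17 significant bits,
# so the trailing bytes of the 52-bit mantissa are all zero.
raw = struct.pack('>d', 86400.0)
print(raw.hex())  # 40f5180000000000
```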

Message 5 of 10

That's because the LabVIEW timestamp doesn't have unlimited resolution, and by converting to a Time & Date record you reduce it further, since the fractional seconds field doesn't go much below 0.001, or 1 ms. The seemingly much higher accuracy of the .NET function is also just noise. The computer isn't really going to give you the full 64-bit accuracy of a double in any case: anything beyond 0.1 ms at user level can be considered random noise under Windows, and you can't guarantee the timing of Windows execution even to 10 ms.

Rolf Kalbermatter
My Blog
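Rolf's point about limited resolution can be sketched in Python: when a time value is quantized to a power-of-two fraction of a second, as a binary fixed-point timestamp would be, the trailing mantissa bytes of the resulting double come out zero, while an arbitrary decimal fraction fills the whole mantissa. The 2^-16 step size here is an illustrative assumption, not LabVIEW's actual internal resolution.

```python
import struct

# Quantize to 1/65536 s steps (illustrative power-of-two granularity).
t = 34567 + 12345 / 65536          # exact multiple of 2^-16
print(struct.pack('>d', t).hex())  # last two bytes are 00 00

# An arbitrary decimal fraction fills the whole 52-bit mantissa instead:
print(struct.pack('>d', 0.1).hex())  # 3fb999999999999a
```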
Message 6 of 10

Looks like the timestamp isn't that precise.  You can verify that the byte array is correct with your existing code, though.

Change your 'timestamp' indicator to show about 13 or 14 digits of precision.

Copy that value to your 'x' control, and run the VI.

You will see that the resulting byte array will also (likely) have the zeroes at the end.

 

Looks like rolfk has a great explanation.
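The copy-the-digits test above works only if enough digits are copied. A double needs up to 17 significant decimal digits to round-trip exactly; 13 or 14 can silently lose the low bits, as this Python sketch shows:

```python
import struct

t = 0.1 + 0.2                # 0.30000000000000004 as a double
s14 = f'{t:.14g}'            # "0.3" -- 14 digits lose information here
print(float(s14) == t)       # False
s17 = f'{t:.17g}'            # 17 digits always round-trip a double
print(float(s17) == t)       # True
print(struct.pack('>d', float(s17)).hex())
```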

Message 7 of 10

@psuedonym wrote:

 

What am I doing wrong?


Dear Psue (you should probably learn the correct spelling for pseudonym, meaning "false name"),

You are "trying too hard".  In LabVIEW, a TimeStamp is an internal 16-byte value representing the number of ticks of a very fast clock, with Time 0 being near the turn of the previous century.  As you probably know, any representation of a fraction using a float is an approximation.  In your case, the "fractional seconds" appears to be represented in single precision (the last 4 bytes are 0), which probably explains what you are seeing.

 

Bob Schor
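For reference, the 16-byte layout Bob describes is commonly documented as a signed 64-bit count of whole seconds since 1904-01-01 UTC followed by an unsigned 64-bit binary fraction of a second. A Python sketch of packing a Unix time into that layout (the epoch offset is the standard 1904-to-1970 difference; this is an illustration, not NI's implementation):

```python
import struct

LV_EPOCH_OFFSET = 2_082_844_800  # seconds from 1904-01-01 to 1970-01-01 UTC

def unix_to_labview_timestamp(unix_seconds: float) -> bytes:
    """Pack a Unix time as a 16-byte LabVIEW-style timestamp:
    signed 64-bit whole seconds since 1904, then an unsigned
    64-bit binary fraction of a second (big-endian)."""
    whole = int(unix_seconds // 1)
    frac = unix_seconds - whole
    return struct.pack('>qQ', whole + LV_EPOCH_OFFSET, int(frac * 2**64))

print(unix_to_labview_timestamp(0.5).hex())
```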

Message 8 of 10

@Mancho00 wrote:

Looks like rolfk has a great explanation


Just say, "rolfk posted."   We know the rest...  ;-)

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 9 of 10

@Bob_Schor wrote:

@psuedonym wrote:

 

What am I doing wrong?


Dear Psue (you should probably learn the correct spelling for pseudonym, meaning "false name"),

.

 

Bob Schor


Ha. Ha. Good one.  :-D

Message 10 of 10