10-21-2019 02:41 PM - edited 10-21-2019 03:23 PM
I am trying to convert a double into 8 bytes.
The double is the number of seconds past midnight.
When I run the VI, the two least significant bytes of the 8 are always 0.
Yet, when I hard-code a double, all 8 bytes have values.
What am I doing wrong?
EDIT: I am attaching a second VI where I am calling the .NET function I am trying to replicate.
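For reference, here is roughly what I mean, sketched in Python (struct.pack stands in for LabVIEW's Flatten to String, which writes the IEEE 754 bytes big-endian by default; the sample values are made up for illustration):

```python
import struct

def flatten_double(x: float) -> bytes:
    """Pack a float64 into 8 big-endian bytes, matching LabVIEW's
    default byte order for Flatten to String."""
    return struct.pack('>d', x)

# A hand-typed value whose binary fraction never terminates:
# every one of the 8 bytes is nonzero.
print(flatten_double(12345.6789).hex(' '))  # 40 c8 1c d6 e6 31 f8 a1

# A seconds-past-midnight value like the ones my VI produces
# (12:34:56.5): the low bytes come out zero.
print(flatten_double(45296.5).hex(' '))     # 40 e6 1e 10 00 00 00 00
```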
10-21-2019 03:26 PM
Your VI was a bit confusing. Is this what you were after?
10-21-2019 03:29 PM
Oops, misread your post. Still, you can probably use most of the code.
10-21-2019 03:31 PM - edited 10-21-2019 03:34 PM
Using Flatten to String still results in the two least significant bytes being 0.
Not sure why.
10-21-2019 04:07 PM
I've just used an online converter to convert the integer 86400 (the number of seconds in a day) to a double-precision real. The least significant two bytes ARE zero.
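The same check in Python, if you want to see it locally (any IEEE 754 implementation gives the same bytes):

```python
import struct

# 86400 is a 17-bit integer, so a float64 spends only 17 mantissa
# bits on it; the rest of the mantissa, and hence the low bytes, is 0.
print(struct.pack('>d', 86400.0).hex(' '))  # 40 f5 18 00 00 00 00 00
```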
Rod.
10-21-2019 04:22 PM - edited 10-21-2019 04:24 PM
That's because the LabVIEW timestamp doesn't have unlimited resolution, and converting it to a Time & Date record reduces that further: the fractional seconds don't resolve much below 0.001 s, i.e. 1 ms. The seemingly much higher accuracy of the .NET function is also just noise. The computer isn't really going to give you the full 64-bit accuracy of a double in any case. Anything beyond 0.1 ms at the user level can be considered random noise under Windows, and you can't guarantee the timing accuracy of Windows execution even to 10 ms.
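To make that concrete, here is a small Python sketch; the 1/64 s granularity is an illustrative assumption (close to Windows' default 15.625 ms timer tick), not the exact figure for any particular clock:

```python
import struct

t = 51883.7236914                     # hypothetical seconds past midnight
print(struct.pack('>d', t).hex(' '))  # dense mantissa; low bytes generally nonzero

# Quantize to the clock's assumed granularity of 1/64 s. The result
# needs only ~21 significant bits, so the last 4 bytes become zero.
q = round(t * 64) / 64
print(struct.pack('>d', q).hex(' '))  # ... 00 00 00 00
```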
10-21-2019 04:28 PM
Looks like the timestamp isn't that precise. You can verify that the byte array is correct with your existing code, though.
Change the precision on your 'timestamp' indicator to about 13 or 14 digits.
Copy that value to your 'x' control, and run the VI.
You will see that the resulting byte array will also (most likely) have the zeroes at the end.
Looks like rolfk has a great explanation.
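In text form, the round trip looks roughly like this in Python (the quantized value is borrowed from the illustration above; the 14-digit display is the assumption being tested):

```python
import struct

t = round(51883.7236914 * 64) / 64  # a timestamp-like value of limited resolution

shown = f"{t:.14g}"                 # what a 14-digit indicator would display
parsed = float(shown)               # typing that text back into the 'x' control

# Because t carries so few significant bits, 14 decimal digits round-trip
# it exactly, and the flattened bytes, zero tail included, are identical.
print(shown, parsed == t)                  # 51883.71875 True
print(struct.pack('>d', parsed).hex(' '))  # ... 00 00 00 00
```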
10-22-2019 07:38 AM
@psuedonym wrote:
What am I doing wrong?
Dear Psue (you should probably learn the correct spelling for pseudonym, meaning "false name"),
You are "trying too hard". In LabVIEW, a TimeStamp is an internal 16-byte unsigned value representing the number of ticks of a very fast clock, with Time 0 being near the turn of the previous century. As you probably know, any representation of fractions using Floats are approximations. In your case, the "fractional seconds" appears to be represented in single precision (the last 4 bytes are 0), which probably explains what you are seeing.
Bob Schor
10-22-2019 08:43 AM
@Mancho00 wrote:
Looks like rolfk has a great explanation
Just say, "rolfk posted." We know the rest...
-Kevin P
10-24-2019 10:19 AM
@Bob_Schor wrote:
@psuedonym wrote:
What am I doing wrong?
Dear Psue (you should probably learn the correct spelling for pseudonym, meaning "false name"),
.
Bob Schor
Ha. Ha. Good one.