
timestamp to bytes conversion only 6 bytes

I am trying to convert a double into 8 bytes.

The double is the number of seconds past midnight.

 

When I run the VI, the two least significant bytes of the 8 bytes are always 0.

Yet, when I hard code a double, all 8 bytes have values.

What am I doing wrong?

 

EDIT:  I am attaching a second VI where I am calling the .NET function I am trying to replicate.
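For reference, the effect can be reproduced outside LabVIEW. A Python sketch (using `struct.pack` to mimic flattening a big-endian DBL; the seconds value is illustrative, not from the attached VI): a seconds-past-midnight value needs only a handful of mantissa bits, so its low bytes come out zero, while an "arbitrary" hard-coded double typically fills all eight bytes.

```python
import math
import struct

def flatten_double(x: float) -> bytes:
    """Big-endian IEEE-754 double, like flattening a DBL in LabVIEW."""
    return struct.pack('>d', x)

# A seconds-past-midnight value whose fraction is exactly representable
# needs few mantissa bits, so the low bytes of the double are zero.
t = 86399.5
print(flatten_double(t).hex())        # four trailing 00 bytes

# A "random" hard-coded double usually fills the whole mantissa.
print(flatten_double(math.pi).hex())  # no trailing zero bytes
```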

 

 

 

 

0 Points
Message 1 of 10
4,867 Views

Your VI was a bit confusing.  Is this what you were after?

time.png

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Points
Message 2 of 10
4,835 Views

Oops, misread your post.  Still, you can probably use most of the code.

Bill
0 Points
Message 3 of 10
4,832 Views

Using Flatten To String still results in the two least significant bytes being 0.

Not sure why.

0 Points
Message 4 of 10
4,828 Views

I've just used an online converter to convert the integer 86400 (seconds in a day) to a double-precision real. The least significant two bytes ARE zero.

 

Rod.

0 Points
Message 5 of 10
4,801 Views

That's because the LabVIEW timestamp doesn't have unlimited resolution, and converting to a Time & Date record reduces it further, since the fractional seconds don't go much below 0.001 s, or 1 ms. The seemingly much higher accuracy of the .NET function is also just noise. The computer isn't really going to give you the full 64-bit accuracy of a double in any case; anything beyond 0.1 ms at user level can be considered random noise under Windows, and you can't guarantee the accuracy of Windows execution even to 10 ms.
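This resolution argument can be quantified with a Python sketch (`math.ulp` requires Python 3.9+; the 1 ms figure is taken from the explanation above): near 86,400 s the spacing between adjacent doubles is about 1.5e-11 s, so a value only good to a millisecond leaves roughly 26 low-order bits of the double carrying no real timing information.

```python
import math

t = 86400.0            # seconds in a day
step = math.ulp(t)     # spacing between adjacent doubles near t
print(step)            # 2**-36, about 1.46e-11 s

# With the time record only good to ~1 ms, roughly
# log2(1e-3 / step) ~ 26 low-order bits of the double
# carry no real information.
print(math.log2(1e-3 / step))
```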

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 6 of 10
4,791 Views

Looks like the timestamp isn't that precise.  You can verify that the byte array is correct with your existing code, though.

Change the precision on your 'timestamp' indicator to about 13 or 14 digits of precision.

Copy that value to your 'x' control, and run the VI.

You will see that the resulting byte array will also (likely) have the zeroes at the end.
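For a value whose displayed digits capture all of its information, this copy-back check can be sketched in Python (the 14-digit formatting and the example value 86399.5 are assumptions for illustration, not taken from the VI):

```python
import struct

x = 86399.5              # example seconds-past-midnight value
shown = f"{x:.14g}"      # what a 14-digit indicator would display
y = float(shown)         # retype the displayed value into the 'x' control

# If the display captured every significant digit, the two flattened
# byte arrays match -- trailing zero bytes included.
print(struct.pack(">d", x).hex())
print(struct.pack(">d", y).hex())
print(struct.pack(">d", x) == struct.pack(">d", y))
```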

 

Looks like rolfk has a great explanation.

0 Points
Message 7 of 10
4,786 Views

@psuedonym wrote:

 

What am I doing wrong?


Dear Psue (you should probably learn the correct spelling of pseudonym, meaning "false name"),

You are "trying too hard".  In LabVIEW, a TimeStamp is an internal 16-byte value representing the number of ticks of a very fast clock, with time 0 near the turn of the previous century.  As you probably know, any representation of a fraction using floats is an approximation.  In your case, the "fractional seconds" appear to be represented in single precision (the last 4 bytes are 0), which probably explains what you are seeing.
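This single-precision hypothesis is easy to test outside LabVIEW. A Python sketch using `struct` to emulate an SGL-to-DBL conversion (the values 45000 and 0.123 are illustrative): a fraction that has passed through single precision keeps only 24 mantissa bits, so once it is widened to double and combined with the integer seconds, the flattened bytes end in zeros.

```python
import struct

def through_sgl(x: float) -> float:
    """Round-trip a value through IEEE-754 single precision (LabVIEW SGL)."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

frac = through_sgl(0.123)    # fractional seconds, only 24 mantissa bits
t = 45000 + frac             # example seconds-past-midnight value

flat = struct.pack('>d', t)  # 8 bytes, like Flatten To String
print(flat.hex())

# The single-precision fraction contributes no bits below ~2**-27,
# while a double near 45000 resolves down to 2**-37, so at least
# the last byte of the flattened double is zero.
print(flat[-1])
```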

 

Bob Schor

0 Points
Message 8 of 10
4,711 Views

@Mancho00 wrote:

Looks like rolfk has a great explanation


Just say, "rolfk posted."   We know the rest...  😉

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
0 Points
Message 9 of 10
4,699 Views

@Bob_Schor wrote:

@psuedonym wrote:

 

What am I doing wrong?


Dear Psue (you should probably learn the correct spelling of pseudonym, meaning "false name"),

.

 

Bob Schor


Ha. Ha. Good one.  😄

0 Points
Message 10 of 10
4,666 Views