
LabVIEW Idea Exchange

TomOrr0W

Rounding Version of %u Time Format Code

Status: Declined

Any idea that has received less than 2 kudos within 2 years after posting will be automatically declined.

Currently, LabVIEW timestamps truncate values rather than round them when using the %u format code. The numeric format codes all round values (0.499999 formatted with %.2g gives 0.50). I would like an alternative to the %u code that rounds the same way.
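LabVIEW is graphical, so the %u behavior can't be shown as text here, but a rough Python analogue of the difference (a hypothetical illustration only; the timestamp arithmetic is mimicked with plain floats) looks like this:

```python
import math

t = 46.87299999999999  # fractional seconds, just short of 46.873

# Numeric format codes round to the nearest displayed digit:
print(f"{t:.3f}")                   # 46.873

# The %u time format code instead truncates toward zero:
print(math.floor(t * 1000) / 1000)  # 46.872
```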

 

See https://forums.ni.com/t5/LabVIEW/Timestamp-Formatting-Truncates-Instead-of-Rounds/m-p/3782011 for additional discussion.

My use case here was generating event logs: I was logging the timestamp when parameters started or stopped failing. To save space, I stored the data in a form identical to the compressed digital waveform data type.

 

I had a couple of parameters that would alternate failing (one would fail, then the other would start failing as the first stopped). Because they had different T0 values, the floating-point math of applying an offset multiplied by a dT to each T0 gave values where one truncated to 46.872 and the other truncated to 46.873, leading to a strange ordering of events in the log file (both would appear to be failing/not failing for a brief time).

 

To make the timestamps match, I made the attached VI (saved in LabVIEW 2015) to do my own rounding (based on the feedback in the linked forum post).
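The attached VI can't be reproduced inline, but a minimal sketch of the same idea in Python (assuming, per the post, that the workaround converts to integer ticks of the displayed unit with rounding instead of truncation; `format_ms` is a hypothetical name) might look like:

```python
import math

def format_ms(seconds: float, truncate: bool) -> str:
    """Format seconds with millisecond precision, either truncating
    (like %u, as described above) or rounding (the workaround)."""
    ms = math.floor(seconds * 1000) if truncate else round(seconds * 1000)
    return f"{ms // 1000}.{ms % 1000:03d}"

t = 46.87299999999999                # just short of 46.873
print(format_ms(t, truncate=True))   # 46.872 -- the odd log ordering
print(format_ms(t, truncate=False))  # 46.873 -- matches the other parameter
```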

4 Comments
Henrik_Volkers
Trusted Enthusiast

I think that is (and should stay) the expected behavior.

 

Any time value (hour, second, ...) should switch only after it has elapsed, no matter at what scale, and should never point to the future due to rounding.

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


TomOrr0W
Member

I am asking for a new time format code (or other added syntax to the time format string) that would allow you to choose to round.  The default behavior would stay the same.

 

Please note that the time stamp that truncated to a value ending in 46.872 was only 2 femtoseconds short of 46.873. I didn't realize it at the time, but if you type the offset (43.824) into a double control, LabVIEW represents it as 43.823999999999998 rather than 43.8240000000000051 (so I didn't even need the multiplication of [43824 U64] * [0.001 DBL] to demonstrate the issue). Someone in the other thread mentioned that the maximum resolution of the SI second is around 109 picoseconds (one period of the 9,192,631,770 Hz cesium transition that defines the second), so a reasonable compromise would be to round all time stamps to the nearest 100 picoseconds before displaying/converting to a string unless the user specifies greater precision.
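(The representation claim is easy to verify; Python's decimal module prints the exact value of the IEEE-754 double nearest to 43.824, which is the same binary value a LabVIEW DBL stores:)

```python
from decimal import Decimal

# Exact decimal expansion of the double closest to 43.824; it begins
# 43.823999999999998..., i.e. just below the typed value, which is
# why truncation drops a digit:
print(Decimal(43.824))
```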

 

The idea that time stamps should never display a time in the (very near) future is new to me. Several people in the linked thread implied that it is some sort of standard for digital representations of time to change only when a full [smallest displayed unit] has elapsed. Perhaps if this idea can't or shouldn't be implemented, NI should add a note to the time format documentation mentioning that %u truncates timestamps, so that people don't assume they behave like other numeric types.

Henrik_Volkers
Trusted Enthusiast

This all leads to the classic problem of numerical representation and the errors introduced by changing representations. You don't (or shouldn't) compare analog (DBL) values for equality without looking at the numerical resolution and uncertainty. The same applies to the timestamp.

For analog measurements you coerce to the nearest value; for time you record the elapsed time (if you look at your digital watch with hours and minutes, you want it to switch to 12:00 at noon, not at 11:59:30). You deal with the same errors and problems, just shifted by half of the next resolution digit.

 

The timestamp is an I64 for the seconds since ... and a U64 for the fractions of a second. See it as a 64-bit counter running at 18.44... EHz (18.44...e18 Hz). Currently I'm not aware of a clock running at that speed 😉 (not sure about that, my colleagues work on atomic clocks, and they want to push them to e-18 uncertainty :D).

So usually a clock will add more than 1 bit at a time to that counter, and every conversion to another representation introduces errors.
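(The counter-rate figure is straightforward to check; a quick Python sanity check, assuming the U64 fraction ticks in units of 2**-64 s:)

```python
# A U64 fraction of a second ticks in units of 2**-64 s, so the
# equivalent counter frequency is 2**64 Hz:
print(f"{2**64:.6e} Hz")   # 1.844674e+19 Hz, i.e. about 18.45 EHz
print(f"{2**-64:.3e} s")   # 5.421e-20 s per tick
```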

If you calculate the difference between two timestamps in LabVIEW, the output is a DBL, truncated, so you lose resolution; however, that resolution is still finer than that of the clocks actually used. If you think that changing the treatment of the last digit solves your problem, you have overlooked the problem and still have it.
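(To put a number on the remaining resolution: a DBL carries a 53-bit significand, so adjacent representable values around a 100-second difference are only ~14 femtoseconds apart. A quick check in Python:)

```python
import math

# Spacing between adjacent doubles near a 100-second difference:
print(math.ulp(100.0))   # 1.4210854715202004e-14 s, about 14 fs
```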

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Darren
Proven Zealot
Status changed to: Declined

Any idea that has received less than 2 kudos within 2 years after posting will be automatically declined.