LabVIEW Idea Exchange


Add full support of 128-bit timestamps.

Status: New

Hi all,


the 128-bit fixed-point fractional format of the timestamp type lacks support in the arithmetic primitives.


Often the user has to fall back to the double representation of the timestamp, for example when measuring the difference between two timestamps. This can also result in a loss of accuracy if one is not aware of what is going on under the hood. See the discussion here and the example below.


In particular, the following should be improved:


1) There is an asymmetry between the "plus" operator and the "difference" operator. If we accept that the type of a time difference is DBL, then computing the difference of two timestamps as a double is just as correct as adding a double to a timestamp. Why do we get no warning when we sum "apples with oranges", yet get harassed when subtracting things of the same type? That signal was enough to make me work out for myself what was going on inside that block. I would say that only bad compilers warn the programmer about something that is perfectly legal.

2) However, I prefer a second option: the 128-bit fixed-point fractional format should be printable both as absolute and as relative time, as the DBL representation of time already is. In that case the output of the difference operator applied to two Timestamps is again a Timestamp, and the sum of two Timestamps should be allowed as well. As debated in the discussion above, users should be aware enough of the facts of Nature to understand that the relative-time representation is the more appropriate one when subtracting two numbers they interpret as absolute timestamps. In fact, I guess users were managing time differences pretty well when they had only the DBL representation of timestamps... Also, nothing in the language stops me from showing any DBL number as an absolute time. Why should the language prevent me from showing a signed 128-bit number as a relative time?


The example below shows that the difference operator applied to Timestamps performs the subtraction at 128 bits before the conversion to double; internally, therefore, the difference timestamp is already calculated. Note that the inconvenient way of taking the difference (converting to DBL before subtracting) is not accurate enough even at the microsecond level.
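To see why convert-then-subtract loses accuracy, here is a rough Python model of the same arithmetic. A LabVIEW timestamp is 128-bit fixed point with 64 fractional bits, modeled below as an exact integer count of 2^-64-second ticks; the helper names are illustrative, not LabVIEW APIs:

```python
from fractions import Fraction

# One second = 2**64 ticks (64 bits below the binary point).
TICKS_PER_SECOND = 1 << 64

def make_timestamp(seconds, frac_ticks=0):
    """Hypothetical helper: a timestamp as an exact integer tick count."""
    return seconds * TICKS_PER_SECOND + frac_ticks

t0 = make_timestamp(1_300_000_000)   # ~1.3e9 s after the epoch (circa 2011)
t1 = t0 + (1 << 40)                  # + 2**-24 s, about 60 ns later

# Good: subtract at full precision first, then convert to double.
diff_good = float(Fraction(t1 - t0, TICKS_PER_SECOND))

# Bad: convert each timestamp to DBL first, then subtract. At 1.3e9 s
# a float64 ulp is 2**-22 s (~2.4e-7 s), so a 60 ns offset is rounded away.
diff_bad = (float(Fraction(t1, TICKS_PER_SECOND))
            - float(Fraction(t0, TICKS_PER_SECOND)))

print(diff_good)  # 5.9604644775390625e-08 (exactly 2**-24)
print(diff_bad)   # 0.0 -- the difference vanished in the DBL rounding
```

With the roles reversed, this is the same effect as the "good enough" and "bad" fields in the screenshot: the sub-microsecond part survives only if the subtraction happens before the coercion to double.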







AristosQueue (NI)
NI Employee (retired)

I agree with some of your post, but to get there, I'm going to go through some bits that I disagree with.


> I would say that only bad compilers warn the programmer about something that is perfectly legal.


Well, that's certainly not true. The whole point of compiler warnings is to flag things that are probably mistakes even though they're legal. Scroll through the list of warnings for C, C++, LabVIEW, Java... every single warning is about something that is legal.


Now, having said that... I am not an expert in the timestamp data type nor why it was made the way it was, but looking at the environment, I believe the coercion dots you are seeing *are correct*. LabVIEW is (apparently) not doing subtraction of two timestamps. Instead, it is doing subtraction of two doubles. Notice the Context Help when you hover over the terminal:


That tells me that LV is coercing both of those terminals to be doubles first and then doing the subtraction. In other words, yes, this is inefficient programming if you have those wires forked to other operations that also coerce to doubles. It's also letting you know that the coercion to double is happening first, before the subtract, so you may not get the results you expect. So the coercion dots are an important part of signaling that LV has *no data type capable of directly representing the difference between two timestamps*. Perhaps we should have that -- I don't know why it was designed the way it was designed. But at least given the current design, those coercion dots seem fairly important.


The Add doesn't need coercion dots because adding a double to a timestamp is a well defined operation. Subtraction of two timestamps is not -- they have to be coerced to a type that is well-defined first, and that means giving up something.


So, the second half of your post, where you talk about adding a relative timestamp data type -- that would be a way to address all of this. And that's the part of your post that I think is really good. Your idea's title kind of misses the point. The title should be "add a new data type to represent relative timestamps" or something like that.
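For comparison, Python's standard library draws exactly this absolute/relative distinction with two types, `datetime` and `timedelta`; a LabVIEW relative-timestamp type could follow the same algebra:

```python
from datetime import datetime, timedelta

t0 = datetime(2012, 1, 1, 12, 0, 0)
t1 = datetime(2012, 1, 1, 12, 0, 1)

dt = t1 - t0                      # absolute - absolute -> relative
print(isinstance(dt, timedelta))  # True
print(t0 + dt == t1)              # True: absolute + relative -> absolute

try:
    t0 + t1                       # absolute + absolute has no meaning here
except TypeError:
    print("datetime + datetime is rejected")
```

Under these rules subtraction is well defined without any lossy coercion, while the "apples + apples" sum is a compile-time (or, in Python, run-time) type error.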







It clearly shows that the difference block IS NOT DOING THE SUBTRACTION OF TWO DOUBLES. Otherwise the fields "t1 - t0 (good enough) [s]" and "t1 - t0 (bad) [s]" would be equal. Also, the sum of two timestamps is not even allowed by LV (at least in LV 2012).


In my opinion, when LV raises the coercion warning, it is telling me that it considers the operation "suspect". With timestamps, then, LV errs either because the operation is not suspect at all, or because it does not provide the user with the right output type to connect without coercion. You say the coercion dots are needed because the coercion is indeed potentially lossy; I can agree with you, but then the 128-bit difference operator is missing. Hence my post: please allow people to take that difference without loss of information.


As you implicitly admit when you say that the difference between timestamps is not well defined in LV, the "new" datatype required to represent a time difference is a signed 128-bit number: the Timestamp itself. I am not sure whether it is more convenient, or more LV-ish, to introduce a "new" datatype with exactly the same representation as the Timestamp but a different default visualization and special rules for its algebra (like the algebra of pointers and size_t in C), or to simply extend the algebra of the Timestamp type by allowing addition and subtraction (apples + apples = apples, oranges - oranges = oranges). The second seems easier to implement, but maybe LV users would be happier with the first.
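The first option can be sketched in a few lines of Python (not LabVIEW code; all names are hypothetical): a RelativeTime type that shares the Timestamp representation, here an exact integer count of 2^-64-second ticks, but carries its own algebra, much like pointers and ptrdiff_t in C:

```python
class RelativeTime:
    """A time difference: same tick representation, different algebra."""
    def __init__(self, ticks):
        self.ticks = ticks
    def __add__(self, other):                  # relative + relative -> relative
        if isinstance(other, RelativeTime):
            return RelativeTime(self.ticks + other.ticks)
        return NotImplemented

class Timestamp:
    """An absolute time: integer count of 2**-64-second ticks."""
    def __init__(self, ticks):
        self.ticks = ticks
    def __sub__(self, other):
        if isinstance(other, Timestamp):       # absolute - absolute -> relative
            return RelativeTime(self.ticks - other.ticks)
        if isinstance(other, RelativeTime):    # absolute - relative -> absolute
            return Timestamp(self.ticks - other.ticks)
        return NotImplemented
    def __add__(self, other):
        if isinstance(other, RelativeTime):    # absolute + relative -> absolute
            return Timestamp(self.ticks + other.ticks)
        return NotImplemented                  # absolute + absolute: TypeError

t0 = Timestamp(1_300_000_000 << 64)
t1 = Timestamp((1_300_000_000 << 64) + (1 << 40))
dt = t1 - t0                        # exact full-width difference
print(isinstance(dt, RelativeTime)) # True
print(dt.ticks)                     # 1099511627776 -- no precision lost
```

The second option would simply merge the two classes into one, which is less code but loses the type-level distinction between absolute and relative time.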