LabVIEW


double to boolean conversion

I'll go a step further and propose that any floating-point comparison (to 0 or to another non-integer value) should use the precision's machine epsilon. This is a value specific to the numeric precision in use (e.g. double-precision). For more information, look it up on Wikipedia - it can get complicated, so I won't explain it here.

Although this value is different for each numeric precision (e.g. epsilon for double-precision is ~2.2e-16, while epsilon for single-precision is ~1.2e-7), LabVIEW only offers a constant for double-precision machine epsilon at this time. However, it can easily be computed for any supported precision. The primary reason this value is needed is that a fixed epsilon doesn't work for different data sets. While the currently posted examples do allow for different epsilon values, you would have to know to compute the value using the appropriate machine epsilon.
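For anyone who wants to compute it rather than hard-code it, here's a rough C sketch of the textbook approach (a block diagram can't be pasted as text, so C stands in; the same loop translates directly to a While Loop in G): halve a candidate until adding it to 1 no longer changes the result. The explicit casts are there because hardware that evaluates in extended precision would otherwise give the wrong answer.

```c
#include <stdio.h>

/* Machine epsilon: the smallest eps such that 1 + eps is representable
   as a value greater than 1 in the given precision. Halve a candidate
   until 1 + eps/2 rounds back to exactly 1. The casts force each
   intermediate sum to be rounded to the target precision. */
static float single_epsilon(void)
{
    float eps = 1.0f;
    while ((float)(1.0f + eps / 2.0f) > 1.0f)
        eps /= 2.0f;
    return eps;
}

static double double_epsilon(void)
{
    double eps = 1.0;
    while ((double)(1.0 + eps / 2.0) > 1.0)
        eps /= 2.0;
    return eps;
}

int main(void)
{
    printf("SGL epsilon: %g\n", single_epsilon());  /* ~1.19e-07 */
    printf("DBL epsilon: %g\n", double_epsilon());  /* ~2.22e-16 */
    return 0;
}
```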

An alternative is attached here. This VI handles double-precision values (DBL) and incorporates machine epsilon. The same approach is valuable for comparing two floating-point values, where strict equality is rarely useful.
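The attached VI isn't reproduced here, but the idea behind it is roughly the following (a C sketch under my own assumptions - the helper name and the multiplier of 4 are illustrative, not necessarily what the VI uses): scale machine epsilon by the magnitude of the operands instead of using a fixed tolerance.

```c
#include <float.h>   /* DBL_EPSILON */
#include <math.h>    /* fabs, fmax */
#include <stdio.h>

/* Treat two DBLs as equal when they differ by no more than a small
   multiple of machine epsilon, scaled by the larger input magnitude. */
static int dbl_almost_equal(double a, double b)
{
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= 4.0 * DBL_EPSILON * scale;
}

int main(void)
{
    double x = 0.1 + 0.2;  /* 0.30000000000000004... */
    printf("strict: %d\n", x == 0.3);                /* 0: equality fails */
    printf("scaled: %d\n", dbl_almost_equal(x, 0.3)); /* 1: tolerance absorbs it */
    return 0;
}
```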
Message 11 of 14

Epsilon, or a small multiple of it, is probably not suitable here unless we know that we are dealing with real DBLs.

We don't know where the number came from.

It could be the result of dividing an integer by a small integer, or a value converted from a fixed-point datatype, for example. 😉
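To make that concrete with a hypothetical illustration (my numbers, not from anyone's actual data): a value that passed through a 16-bit fractional fixed-point stage carries quantization error up to half an LSB, which dwarfs any machine-epsilon-scaled tolerance.

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Quantize to a hypothetical 16-bit fractional fixed-point grid
   (LSB = 2^-16), as if the value had passed through an FXP stage. */
static double to_fxp16(double x)
{
    return round(x * 65536.0) / 65536.0;
}

int main(void)
{
    double truth = 0.123456789;
    double fxp   = to_fxp16(truth);  /* error up to 2^-17 ~ 7.6e-6 */
    double eps_tol = 4.0 * DBL_EPSILON * fabs(truth);  /* ~1.1e-16 */

    printf("quantization error: %g\n", fabs(fxp - truth));  /* ~2.1e-6 */
    printf("epsilon tolerance:  %g\n", eps_tol);
    /* The quantization error is ~10 orders of magnitude larger than the
       epsilon-scaled tolerance, so an epsilon-based test calls these
       "different" even though they represent the same reading. */
    return 0;
}
```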

Message 12 of 14

Are we saying that it's not easy to convert a double to a boolean? 😉

Altenbach, this will make a great Labtoon....  LoL!

Message 13 of 14
I actually sit on the other side of the fence. Machine epsilon is always useful even if you don't know how your data is scaled. Why? Because it defines the resolution of the arithmetic for the given precision, so at least you have a starting point tied to the fundamental computations.

Realistically, you've always got a natural cap on your data, especially if you're doing fixed-point or integer (with scaling) operations. Even if it isn't obvious, typing in 1e-6 as an absolute tolerance is the same as scaling machine epsilon by a factor on the order of 1e+10. Although I might argue that the latter should raise a red flag - perhaps double-precision is overkill! Single-precision has a machine epsilon of ~1.2e-7 and is probably more appropriate if your scale factors are huge.
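A quick sanity check of that equivalence (my arithmetic, assuming a DBL machine epsilon of ~2.22e-16):

```c
#include <float.h>
#include <stdio.h>

int main(void)
{
    /* An "absolute" tolerance is machine epsilon times an implied scale:
       1e-6 / DBL_EPSILON ~ 4.5e9, a scale factor on the order of 1e+10. */
    double abs_tol = 1e-6;
    printf("implied scale: %g\n", abs_tol / DBL_EPSILON);  /* ~4.5e+09 */
    return 0;
}
```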

If you consider the two examples you mention, I think you'll find they are actually well suited to this scaling approach. A fixed-point value is bounded by definition, and an integer division is typically scaled by the numerator or denominator, depending on the usage (expansion or contraction). That doesn't mean counterexamples don't exist, but in my experience they won't be solved robustly with a fixed tolerance either.
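Here's a sketch of what I mean by scaling the tolerance to the operands (again C rather than G, and the multiplier of 16 is illustrative slack for accumulated round-off, not a derived bound):

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Compare a computed double against the ratio p/q, with the tolerance
   scaled by the magnitude of the operands rather than fixed. */
static int ratio_equal(double x, long p, long q)
{
    double expected = (double)p / (double)q;
    double scale = fmax(fabs(x), fabs(expected));
    return fabs(x - expected) <= 16.0 * DBL_EPSILON * scale;
}

int main(void)
{
    double x = 0.0;
    for (int i = 0; i < 10; i++)
        x += 1.0 / 3.0;                         /* accumulates round-off */

    printf("strict: %d\n", x == 10.0 / 3.0);    /* often 0 */
    printf("scaled: %d\n", ratio_equal(x, 10, 3));  /* 1 */
    return 0;
}
```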
Message 14 of 14