02-08-2010 09:56 AM
This is a well-known problem when working with floating point numbers. In fact, you should be very careful when doing an exact floating point compare or basing a decision function on one.
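A classic illustration of the pitfall (a Python sketch, not LabVIEW, but the same effect shows up in any IEEE 754 arithmetic):

```python
# Both 0.1 + 0.2 and 0.3 carry rounding error, and the errors differ,
# so an exact compare fails; compare within a tolerance instead.
import math

print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True (relative tolerance, 1e-09 default)
```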

02-08-2010 10:27 AM - last edited on 05-05-2025 10:50 AM by Content Cleaner
Gerd,
your assumption is quite attractive, but it is obviously not the way LabVIEW (or any other well-known programming language) handles floating point values.
This link gives some information on the handling of floating point values in programming languages (as in LabVIEW).
A direct typecast to a boolean array is not possible, since a boolean is a U8 with value = 0 -> FALSE, value != 0 -> TRUE.
Since the typecast therefore collects packets of 8 bits into one boolean, the "boolean representation" obtained by a simple typecast returns, for instance, the same pattern for 700 and 701 (see the sketch below)....
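A rough Python sketch of the idea (not actual LabVIEW code, just the same byte-to-boolean collapse):

```python
# Typecasting to a boolean array turns each 8-bit byte into ONE boolean
# (byte != 0 -> TRUE), so everything except zero/non-zero is lost.
import struct

def bytes_as_booleans(x):
    """Interpret the 8 bytes of a DBL as 8 booleans."""
    return [b != 0 for b in struct.pack('>d', x)]

print(bytes_as_booleans(700.0))  # [True, True, True, False, False, False, False, False]
print(bytes_as_booleans(701.0))  # identical pattern, although 700.0 != 701.0
```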
hope this helps,
Norbert
EDIT: Please note that even the number 700 already creates the "display" issue as well.
02-08-2010 12:31 PM
Thank you all for your replies.
My problem was actually cleared up by the first reply: the rounding error I encountered was due to how round-to-nearest works, not the precision of a double. Thank you for pointing that out to me, GerdW.
Thanks, Norbert B, for all the information on DBL and floating point numbers.
Terje
02-08-2010 02:11 PM
Gerd is correct: 700.5 can be perfectly represented in IEEE 754 single precision (as well as double, since single is a subset), and virtually all programming languages and CPUs use the IEEE 754 standard.
For a single it is
01000100001011110010000000000000
Value = (-1)^sign * (1+Mantissa/2^23) * 2^exponent
I'll do all the math using rational numbers, so it can be verified with a calculator (since a calculator would likely suffer the same accuracy problems with fractional numbers if there were any for this number)
sign: 0
exponent: 10001000 -> 136 - 127 (exponent offset) -> 9
Mantissa: 01011110010000000000000 -> 3088384
(-1)^0 * (1+ 3088384/2^23)*2^9
1*(1+ 3088384/2^23)*512
(1+ 3088384/8388608)*512
gcd(3088384, 8388608) = 8192
(1+ 377/ 1024) * 512
(1024/1024 + 377/1024)*512
(1401/1024)*512
1401/2 = 700.5 exactly
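For anyone who wants to double-check without a calculator, here is the same decomposition done with exact rationals in Python (a sketch; the bit extraction assumes IEEE 754 big-endian packing via the struct module):

```python
# Extract sign/exponent/mantissa of 700.5 as a single and rebuild the value
# with exact rational arithmetic -- no floating point rounding involved.
from fractions import Fraction
import struct

bits = struct.unpack('>I', struct.pack('>f', 700.5))[0]
sign     = bits >> 31                   # 0
exponent = ((bits >> 23) & 0xFF) - 127  # 136 - 127 = 9
mantissa = bits & 0x7FFFFF              # 3088384

value = (-1)**sign * (1 + Fraction(mantissa, 2**23)) * 2**exponent
print(value)                       # 1401/2
print(value == Fraction(1401, 2))  # True -> exactly 700.5
```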
Of course, there are slightly fewer than 2^64 numbers that a double can perfectly represent. As has been stated, the vast majority of benign-looking numbers with a fractional part (let alone odd ones), like 0.1, cannot be perfectly represented.
The problem is most likely that the algorithm for converting from floating point to decimal has rounding errors when asked for too many digits of accuracy. Since doubles only have a relative precision of about 16 digits, it's not surprising that there would be problems trying to squeeze more out of them, which Norbert seemed to be hinting at earlier.
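For comparison, a quick check in a language whose float-to-string conversion is known to be correctly rounded (Python here): an exactly representable value prints clean zeros well past 17 digits, while 0.1 exposes its stored error.

```python
# Correctly rounded conversion: exact values keep printing zeros, while
# inexact ones reveal their representation error past ~17 digits.
print(f"{700.5:.25f}")  # 700.5000000000000000000000000
print(f"{0.1:.25f}")    # 0.1000000000000000055511151
```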
On a side note, it's possible to do exact math with floating point numbers, but you have to be extremely careful about the numbers and operations used. The simplest example is integers with addition, subtraction and multiplication. This used to be helpful when you needed to do integer math on values larger than 32 bits and had doubles but lacked a 64-bit integer type, and it is still helpful where you have only doubles and no integers.
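A minimal sketch of that integer trick (Python; 2^53 is the point where the 53-bit double mantissa runs out):

```python
# Every integer up to 2**53 is exactly representable in a double, so
# +, - and * stay exact as long as all intermediate results stay below it.
big = float(2**53)           # 9007199254740992.0, still exact
print(big - 1 == 2**53 - 1)  # True: one below the limit is exact
print(big + 1 == big)        # True: 2**53 + 1 is not representable and
                             # rounds back down to 2**53
```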
02-08-2010 04:03 PM - edited 02-08-2010 04:09 PM
Hi Norbert,
your link describes exactly what I was talking about. SGL holds a mantissa of 24 bits (with hidden leading 1) and you only need 11 bits to represent 700.5 (700.5 = 1401/2, and 1401 = 10101111001 in binary)! So even a SGL is ok to exactly hold 700.5... (Did I ever mention to typecast numbers in this thread?)
As I can now check this with LabVIEW at hand, we can conclude the problem to be either in LabVIEW or in the underlying FPU commands. The conversion from (any) floating point format to string fails when you specify a precision higher than 17... This also applies to SGL (effectively holding 7 decimal digits) and EXT (holding about 20 decimal digits); both show the same problem when displaying more than 17 decimal digits.
So Norbert, now it's your turn as NI insider: Could you check where the source of this problem is located? In LabVIEW or in underlying code/hardware?
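As a reference point (a Python sketch, not LabVIEW): a correctly rounded conversion has a well-defined answer at any precision, because the stored value has a finite exact decimal expansion, which the decimal module can show.

```python
# The exact decimal expansion of what a double actually stores; a correct
# float-to-string routine can produce any number of digits from this.
from decimal import Decimal

print(Decimal(700.5))  # 700.5 -- exact, so every further digit is 0
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
```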
02-09-2010 11:10 AM
Lots of good information in this thread so I won't bother trying to do a massive summary. I've attached a VI that works with a value, 700.5, which is perfectly representable in any floating-point representation LabVIEW supports. The VI shows the number behaves consistently from different sources (bit pattern, constant & string) when stored in single precision.
The flip side is also checked out - the numeric display of the value. LabVIEW offers several ways to present numeric data. Since the single-precision floating-point representation can accurately hold a value with just over 7 decimal digits, any attempt to display more digits has limited value.
See the attached VI for the results.
NOTE 1: It is not wrong to expect the numeric display to handle more digits correctly if the remaining digits are all zero. However, you can debate whether it needs to do so if it impacts the speed at which the underlying algorithm produces the correct digits up to the required numerical precision.
NOTE 2: For this value, even displaying well beyond the 8 digit limit produces consistent results.
NOTE 3: The cross-over point when the displayed value changes matches the number of digits supported by the double precision floating-point representation. My guess is this is not a coincidence.
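For readers without LabVIEW at hand, here is a rough text-language analogue of that consistency check (a Python sketch, not the attached VI):

```python
# Build 700.5 as a single from three sources -- a bit pattern, a literal
# constant and a string -- and confirm all three agree bit for bit.
import struct

from_bits   = struct.unpack('>f', bytes.fromhex('442F2000'))[0]
from_const  = 700.5
from_string = float("700.5")

assert from_bits == from_const == from_string                       # same value
assert struct.pack('>f', from_const) == bytes.fromhex('442F2000')   # same bits
print(from_bits, from_const, from_string)                           # 700.5 700.5 700.5
```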
02-10-2010 02:08 AM
GerdW wrote: Hi Norbert,
[...]So even a SGL is ok to exactly hold 700.5... (Did I ever mention to typecast numbers in this thread?)
[...]
So Norbert, now it's your turn as NI insider: Could you check where the source of this problem is located? In LabVIEW or in underlying code/hardware?
Message Edited by GerdW on 02-08-2010 11:09 PM
Gerd and Matt,
you are indeed correct about the representation of 700.5. I should have checked that before posting, but as you all know, there are certain....limits in floating point numeric representation (and I was too sure that this was the reason....)
Well, nevertheless, I got some information about this specific question:
There has been a bug in LabVIEW for quite some time now (I don't know which version was the first one with it) which is reported in CAR #52392. This bug is only a display issue, which occurs if one wants to display more than 17 significant digits. The issue is a rounding error which does not occur with every number. Another number known to create that issue is, for instance, "80".
Despite this incorrect behavior of the numeric display, I am wondering why someone would need more than about 6 to 8 significant digits to be displayed......
hope this helps,
Norbert
PS: "Cause we can" is imho not a real reason 😉
02-10-2010 02:17 AM
Hi Norbert,
thanks for the clarification.
You asked for reasons:
Usually I agree with "8 digits is enough" - especially when dealing with measurement values.
But:
Once I made my own Mandelbrot/fractal generator, based on EXT numbers. I knew the limited precision of floating-point numbers, but still displayed coordinates with a lot of digits, as I wanted to zoom in as deep as possible. But as calculation accuracy limited the results, I used just 17 digits for display...
I think this is a very good and qualified example on using/displaying a lot of digits!
02-10-2010 04:26 AM
I used this site http://babbage.cs.qc.edu/IEEE-754/Decimal.html and typecast the number to U8 in LV. The number 700.5 gave 4085E40000000000 as DBL and 442F2000 as SGL in LV after typecasting. The web site gave the same result. Then I tested the number 700.51. LV returned 4085E4147AE147AE and 442F20A4 after typecasting. The web site returned 4085E4147AE147AE and 442F20A3. So for the SGL there was some difference: the number returned from LV reads back as 700.51001 and the one from the web site as 700.50995.
So at least the number is converted correctly. The "error" is in the function the display uses to convert from the stored data to a number we can read.
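For what it's worth, the same round trip can be reproduced in a text language; this Python sketch (using the struct module, not LabVIEW) matches the LV results and suggests the web site truncates instead of rounding to nearest:

```python
# 700.51 rounded to single precision gives the bit pattern 442F20A4
# (round-to-nearest, matching LV), and that stored value reads back
# as 700.510009765625, i.e. 700.51001 at 8 significant digits.
import struct

packed = struct.pack('>f', 700.51)
print(packed.hex().upper())            # 442F20A4
print(struct.unpack('>f', packed)[0])  # 700.510009765625

# The web site's pattern 442F20A3 decodes to 700.50994873046875 (700.50995
# at 8 digits), which is what simple truncation of the mantissa would give.
print(struct.unpack('>f', bytes.fromhex('442F20A3'))[0])
```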
