05-22-2018 09:09 PM - edited 05-22-2018 09:10 PM
Comparing the same values stored as two different data types (single and double precision), I get two different results from the same loop. In some cases the while loop even executes a different number of times.
The program and results are attached.
Does anyone know why the results differ?
05-22-2018 10:20 PM - edited 05-22-2018 10:21 PM
Many simple decimal fractional values (e.g. 0.1) do not have an exact representation in binary. Set the table to show e.g. 18 decimal digits and you'll see what's going on.
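To see this effect without the attached VI, here is a minimal Python sketch (not from the thread; it uses NumPy's `float32`/`float64` to stand in for LabVIEW's SGL and DBL types). Accumulating 0.1 until the total reaches 1.0, the rounding of 0.1 differs between the two precisions, so the loop runs a different number of times:

```python
import numpy as np

def count_steps(dtype):
    """Accumulate 0.1 until the total reaches 1.0, counting iterations."""
    x = dtype(0.0)
    step = dtype(0.1)  # 0.1 has no exact binary representation in either precision
    n = 0
    while x < dtype(1.0):
        x = dtype(x + step)
        n += 1
    return n

single = count_steps(np.float32)  # SGL: rounding pushes the sum past 1.0 early
double = count_steps(np.float64)  # DBL: ten additions land just below 1.0
print(single, double)  # prints "10 11"
```

In single precision the accumulated rounding error makes the tenth sum slightly greater than 1.0, while in double precision ten additions of 0.1 give roughly 0.9999999999999999, so the loop takes one more pass. Neither result is "wrong"; they are both the correct consequence of rounding 0.1 to each format's nearest binary value.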
05-22-2018 10:33 PM
To add to altenbach's reply, you should really read this: http://www.ni.com/white-paper/7612/en/. While it is an NI document, what it describes is true of every computing device in the world that uses IEEE 754 floating-point representation and is not specific to LabVIEW. It's one of the common issues software engineers of all walks of life inevitably hit.