03-21-2011 04:57 AM
Hi!
I have a question about LabVIEW data precision. Maybe it's a stupid question, but it's driving me crazy.
If I create a numeric constant and enter, for example, 1 µs (I mean 1E-6), it appears as 1E-6, but I've found that it's not really 1E-6: if I raise the display precision to 17 digits, the 1E-6 becomes 9.9999999999999996E-7.
Could anybody explain to me what's happening to the numeric data?
Thanks
03-21-2011 06:03 AM
This is one of the most frequent questions ever asked.
The trick is in how computers represent floating-point values. I remember many good answers already given in the forum, but I'm not able to find the most complete and clear one.
One of those answers is here, for example.
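In short: 1E-6 has no exact representation in the IEEE 754 double-precision (binary64) format that LabVIEW uses for DBL, so the constant stores the nearest representable value, and raising the display precision just reveals it. Python isn't LabVIEW, but it uses the same binary64 format, so a small sketch can reproduce the effect:

```python
from decimal import Decimal

# 1e-6 cannot be stored exactly in IEEE 754 binary64; the literal is
# rounded to the nearest representable double.
x = 1e-6

# Decimal(x) prints the exact value the double actually holds.
print(Decimal(x))

# Formatting with 17 significant digits shows the same rounded value
# that LabVIEW displays when you raise the precision to 17 digits.
print(f"{x:.16e}")

# The stored value is still perfectly consistent with itself, so
# comparing against the same literal works as expected.
print(x == 1e-6)
```

With the default 6 or so display digits the rounding is hidden, which is why the constant looks like an exact 1E-6 until the precision is increased.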
03-21-2011 06:26 AM
Thanks