03-20-2012 11:01 AM
I have a program that reads a floating-point number from a text file. The number will have one of the values 1.0, 1.1, 1.2, and so on up to 1.9. I need to split the number into its integer and decimal parts as two separate ints. I have code that works for most of these numbers, but 1.2 and 1.4 break it. I wrote a small standalone program that separates the numbers 1.0 through 1.9 and prints the results, and I see the same thing there: the typecast does not work for 1.2 and 1.4. My program is attached. When I run it I get the results below in the output file. For a double of 1.2, the scaled decimal part (tmp) prints as 2.000000, yet when I typecast it to an int I end up with 1, not 2. The same thing happens when I start with a double of 1.4.
a double of 1.000000 makes a int of 0 with tmp of 0.000000
a double of 1.100000 makes a int of 1 with tmp of 1.000000
a double of 1.200000 makes a int of 1 with tmp of 2.000000
a double of 1.300000 makes a int of 3 with tmp of 3.000000
a double of 1.400000 makes a int of 3 with tmp of 4.000000
a double of 1.500000 makes a int of 5 with tmp of 5.000000
a double of 1.600000 makes a int of 6 with tmp of 6.000000
a double of 1.700000 makes a int of 7 with tmp of 7.000000
a double of 1.800000 makes a int of 8 with tmp of 8.000000
a double of 1.900000 makes a int of 9 with tmp of 9.000000
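A minimal standalone sketch along these lines reproduces the numbers above (this is an approximation of what the attached test does, assuming the decimal digit is obtained by truncating to get the whole part, subtracting, and multiplying by 10 before the int cast; the attached file is the real test):

#include <stdio.h>

int main(void)
{
    /* values that would come from the text file, hard-coded for the standalone test */
    double values[] = { 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9 };
    int i;

    for (i = 0; i < 10; i++) {
        double x = values[i];
        int whole = (int)x;                /* integer part, always 1 here */
        double tmp = (x - whole) * 10.0;   /* decimal digit as a double */
        int dec = (int)tmp;                /* truncating cast: this is where 1.2 and 1.4 go wrong */

        printf("a double of %f makes a int of %d with tmp of %f\n", x, dec, tmp);
    }
    return 0;
}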
03-20-2012 11:34 AM
This sounds like a rounding issue: 1.99999999999999998 will be truncated to 1, even if it is displayed (with six digits of precision) as 2.0. Neither 1.2 nor 1.4 can be represented exactly in binary floating point, so after subtracting the whole part and multiplying by 10 the result can land just below 2.0 or 4.0, and the cast truncates it down. Did you check your floating-point numbers at higher precision? And why not use rounding?
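For example, printing tmp with more digits makes the problem visible, and rounding to the nearest integer instead of truncating fixes it. A quick sketch of that idea, using lround() from <math.h>:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1.2;
    double tmp = (x - (int)x) * 10.0;

    /* at higher precision the value is clearly just under 2 */
    printf("tmp at higher precision: %.17f\n", tmp);

    /* the truncating cast drops everything after the decimal point... */
    printf("truncated: %d\n", (int)tmp);      /* prints 1 */

    /* ...so round to the nearest integer instead */
    printf("rounded:   %ld\n", lround(tmp));  /* prints 2 */

    return 0;
}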
03-20-2012 11:40 AM