# LabWindows/CVI

## Accuracy problem with atof

Consider the following command sequence:

```c
double a = 0.0;
double b = 0.0;

a = atof("40.000000");
b = atof("39.998000");
```

The result is:
a = 40.000000000000000
b = 39.997999999999998

There is a difference between the original value for b and the value after the conversion.

Is there any way in CVI to get the original value back, without losing precision?

Message 1 of 13

Message 2 of 13

## Re: Accuracy problem with atof

Ok, I see the problem.

But is there no way in CVI to round the value to three decimal places?

Message 3 of 13

## Re: Accuracy problem with atof

Do you need to round for display or computational purposes?

For display, you can use format strings like "%.3f".

For computation, I do not think it is possible.

There is only one way the number 39.998 can be stored (represented) and you have seen it already.

The link provided explains the issue.

S. Eren BALCI
IMESTEK
Message 4 of 13

## Re: Accuracy problem with atof

We may have done a disservice here ...

If you take 39.998 and use the website given in the other thread, you can see that the 32-bit representation closest to this number is 39.997997.

Double representation can represent this number exactly.

atof() is supposed to convert to a double, and it does; it's a front end to strtod().

If you use CVI to view the resulting variable (b in this case), you will see that CVI reports this value as 39.997999999999998, not 39.998.

If you pursue it further, you can see that the memory location for the double contains 39.998 when you set the format to double.

It's the hex sequence `39 B4 C8 76 BE FF 43 40`

which, if you byte-swap it (Intel is little endian) and input it into the 64-bit hex-to-decimal converter at http://babbage.cs.qc.edu/IEEE-754/64bit.html,

results in 39.997997 in 32-bit and 39.998 in 64-bit representation.

SO ...

1. atof works as expected: it converts 39.998 properly into a double that can, indeed, represent this number exactly.

2. CVI's "view variable" facility is apparently rendering this double value as a float. The format choices offer only "Floating Point", and this isn't a double.

3. If you print this number using sprintf, as a double and with enough precision, you will see the true value.

Most of the time you might not see the difference or care, but this corner case shows what I pointed out as an equal danger when considering this topic: the precision of the software used to display a floating-point/double value can fool you by showing less precision than is really there.

This is at its worst when you show a "Pass" or "Fail" result based on the true number while at the same time printing a numeric result that apparently contradicts the pass/fail decision.

So, I suppose you could ask NI why the CVI "view variable" facility can't display a double as a double, when the memory viewer can display it properly.

Menchar

Message 5 of 13
Solution
Accepted by topic author Zaruma

## Re: Accuracy problem with atof

Thanks for your advice. I have solved the problem now: I multiplied the value by 1000, did the calculation, and then converted the value back.

Message Edited by Zaruma on 03-17-2010 07:05 AM
Message 6 of 13

## Re: Accuracy problem with atof

If you use `double d = atof("39.998");`

it really does return an exact representation of 39.998 into the double d, unless you've managed to link in a non-ANSI version of atof somehow.

It's only CVI's view variable that's maybe making you think it's not an exact representation of 39.998. Or the value is getting assigned to a float somewhere along the line and then back to a double (see below).

I'll agree it's confusing - if you hover the cursor over d in the CVI source window when you're debugging, it reports the value incorrectly.  And if you look at it in the view variable window it is reported incorrectly there as well unless you switch to viewing memory directly as a double.

And if you do this:

```c
double d = atof("39.998");
float f;
f = d;
d = f;
```

it gets even weirder: the precision keeps eroding. d winds up truly being 39.99800109863281.

If you do this: `double d = 39.998;` the CVI debugger shows you the same wrong display of the number.

But if all you're really doing is `double d = atof("39.998");` then d really is exactly 39.998. If it looks like anything else, you're being lied to (and indeed the CVI debugger lies to you about this except when you look at the memory directly).

Now, if you're doing `double d = atof(szSomeString);` and szSomeString is not really "39.998", then that would be the problem.

I don't know why NI couldn't fix this; it can't be that big of a deal, and it is confusing. The view variable window tells you it's a double, then lies to you about the value by displaying it apparently with float precision. Maybe there's a reason for this, but I can't see it offhand.

Menchar

Message Edited by menchar on 03-20-2010 08:15 PM
Message 7 of 13

## Re: Accuracy problem with atof

Hi menchar,

I haven't been able to reproduce the behavior you're seeing. When I try it, either in version 9.0 or in 2009, the CVI debugger shows me the correct value. Is what you are doing any different from what I have in this screenshot?

Luis

Message Edited by LuisG on 03-22-2010 10:44 AM
Message 8 of 13

## Re: Accuracy problem with atof

Luis -

Why do you think that 39.997999999 ... is the correct value?  It isn't.  39.998 is the correct value.

39.998 can be represented exactly as a double.

atof ("39.998") produces a double that perfectly represents exactly that value, 39.998.

Why isn't the display   39.998    when you hover the cursor over the variable d, which is a double?

Menchar

Message 9 of 13

## Re: Accuracy problem with atof

To be completely honest, I didn't know for sure that it was the correct value. I assumed that it was, because the debugger showed a different value for the double variable than it did for the float variable, and the value it showed for the double was much closer to 39.998 than the float value was. This suggested to me that the debugger was displaying it, as expected, as a double-precision 64-bit value.

But I have now double-checked using this online double-precision calculator. If you enter 39.998 as the decimal value, it does return the same value that the CVI debugger shows.

Luis

Message 10 of 13