Hello - I have dug around a bit on this site concerning loss of precision in floating-point numbers, and I have seen articles blaming it on the architecture, and one claiming the problem was fixed in CVI 4.0. I am using LabWindows version 7.0, and I still see this problem. I have a double-precision variable initialized to the value 16.62, but when I start debugging it shows up with the value 16.6200008. The calculation requires high precision, so this error throws off all the results that follow. I have tried the same code in two other environments, MS Visual C++ and Paradigm C++, and I do not get the same result there, so I can't see this as an architectural issue. Any other suggestions? A minimal sketch of the kind of code I'm talking about is below.
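
For reference, here is a small, self-contained repro sketch (variable names are just placeholders, not my actual code) that prints the stored value with extra digits, so the same source can be compared across CVI, MS Visual C++, and Paradigm C++ without relying on the debugger's display. Note that 16.62 has no exact binary floating-point representation, so some rounding is expected in any environment; the question is why CVI shows this much error for a double.

/* repro.c - hypothetical minimal example, not my production code */
#include <stdio.h>

int main(void)
{
    double d = 16.62;   /* double-precision initialization, as in my program */
    float  f = 16.62f;  /* single-precision value of the same literal, for comparison */

    printf("double: %.17g\n", d);  /* enough digits to show the nearest double to 16.62 */
    printf("float : %.9g\n",  f);  /* enough digits to show the nearest float to 16.62  */

    return 0;
}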