LabWindows/CVI

The representation of negative zero

I need some help tracking down the source of a change in the output of a running program.  The issue is the representation of -0.0.
 
Running this program:
#include <ansi_c.h>
 
static float fval;
static unsigned int *iPtr;
 
int main(void)
{
    fval = (float)atof("-0.0\r\n");
    iPtr = (unsigned int *)&fval;        /* reinterpret the float's bits */
    printf("value is: %08X\n", iPtr[0]);
    return 0;
}
 
This prints 80000000 on newer hardware with the CVI 8.1 runtime installed, and 00000000 on older hardware with the CVI 7.1 runtime installed.
 
I am trying to determine whether this is a difference in the hardware or in the CVI runtime library. The new systems are Intel Core 2 Duo machines; the older systems use single-core CPUs from around three years ago.
 
If anyone knows the root cause, or can shed some light on the results from other systems or configurations, that would be useful.
Thanks!
Message 1 of 9
You're peeking under the type system to look at the floating-point representation, which, strictly speaking, is a no-no.

From the C language viewpoint, it's a design mistake if the representation matters. That may be why the compiler writers felt free to change how expressions are evaluated, allowing them to produce negative zero where they didn't before.

In IEEE 754, negative zero exists: it is represented by an all-zero exponent and mantissa with the sign bit set (which is what you saw with CVI 8.1). Both of the CPUs you used are IEEE 754 machines. There should be no difference between using negative zero and a "positive" zero in any expression.
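
For what it's worth, here is a minimal sketch of a well-defined way to look at the bits (assuming a 32-bit float and unsigned int, which holds on both machines discussed here): copy the bytes out with memcpy instead of punning through a pointer.

#include <stdio.h>
#include <string.h>

int main(void)
{
    float pos = 0.0f, neg = -0.0f;
    unsigned int pbits, nbits;

    memcpy(&pbits, &pos, sizeof pbits);   /* well-defined, unlike the cast */
    memcpy(&nbits, &neg, sizeof nbits);
    printf("+0.0 -> %08X\n", pbits);      /* prints 00000000 */
    printf("-0.0 -> %08X\n", nbits);      /* prints 80000000: sign bit only */
    return 0;
}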

There is no mathematical concept of positive or negative zero, though some disciplines use it to mean something in a particular domain.

http://en.wikipedia.org/wiki/%E2%88%920_%28number%29

Menchar
Message 2 of 9
I agree that there is very little mathematical difference between 0 and -0, but in this case the values were converted from an ASCII data stream and placed into a table in NV memory. The specific issue is that the two machines compute different checksums because the same input data stream produces different binary representations. The value -0.0 is created when a very small negative value is truncated in the ASCII data stream prior to conversion by atof().
While fixing the software to eliminate the value -0.0 in memory is trivial, my issue is with fielded systems.
If the issue only exists in a later version of the CVI runtime, then an update of the CVI runtime library by another NI application or hardware driver could introduce the -0.0 representation into a system that is otherwise working. It is not a trivial thing to upgrade all the existing systems to the latest runtime. If this is related to CVI runtime behavior, I need to nail down which versions could cause the behavior change.
Did you execute the code posted?  It will run in the interactive execution window.  What did the results look like?


Message 3 of 9
My CVI FDS 8.5 IDE shows the negative zero (80000000).  But I rebuilt it from source.
 
Maybe it's a compiler issue, not an RTE issue.
 
You might try running a 7.1-built executable against the 8.1 or later RTE.
 
Not sure I understand your deployment scenario completely, but can you manipulate the source string to prevent the "-0"  in the first place?  Or use sscanf instead of atof and try to control things with the format specifier?
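 
For instance, a minimal sketch of that idea (parse_field is a made-up name, assuming the fields parse as plain floats): parse with sscanf, then force any zero result to positive zero. The comparison catches both zeros, since -0.0f == 0.0f is true.

#include <stdio.h>

static float parse_field(const char *s)
{
    float v = 0.0f;

    if (sscanf(s, "%f", &v) == 1 && v == 0.0f)
        v = 0.0f;    /* rewrites -0.0 as +0.0; other values untouched */
    return v;
}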
 
I do agree that trying to manage the CVI RTE version on a set of deployed systems is a nightmare. We've been going through this due to an issue with the serial library rewrite in CVI 8.1 (which is an RTE issue). There are a jillion ways a newer RTE version can get onto a target system, and then things break; in our case it traces back to a flaw in some hardware we're using. While NI says the changes they put into the 8.1 RTE should all be backwards compatible, the fact of the matter is that the changes expose subtle implementation flaws or details that didn't matter before. NI is "right", but your deployed application is still broken.
 
I think C99 adds some facilities for formatting a floating-point value as a hex string.
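 
For example, a C99-conforming printf exposes the sign of zero directly with the %a conversion (a quick sketch; older CVI runtimes may not support %a):

#include <stdio.h>

int main(void)
{
    printf("%a\n", 0.0);     /* prints  0x0p+0 */
    printf("%a\n", -0.0);    /* prints -0x0p+0: the sign is visible */
    return 0;
}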
 
We used to do this (and then calculate a checksum on the resulting ASCII string of hex digits) to move IEEE 754 double floating-point values between different systems in messages. The receiving side would verify the checksum on the message's ASCII string, convert it as hex integers, and finally cast the value back to floating point. We still needed compatible underlying representations for it to work, but the checksum mechanism didn't care about the representation. Since you're computing the checksum on the binary representation itself, you have the problem.
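 
Something like this sketch, roughly (the additive checksum and the names are illustrative, not our original code, and both ends must agree on byte order):

#include <stdio.h>
#include <string.h>

static unsigned int ascii_checksum(const char *s)
{
    unsigned int sum = 0;

    while (*s)
        sum += (unsigned char)*s++;   /* simple additive checksum */
    return sum;
}

int main(void)
{
    double d = -0.0;
    unsigned char bytes[sizeof d];
    char hex[2 * sizeof d + 1];
    size_t i;

    memcpy(bytes, &d, sizeof d);                  /* grab the raw bytes */
    for (i = 0; i < sizeof d; i++)
        sprintf(hex + 2 * i, "%02X", bytes[i]);   /* render as ASCII hex */

    printf("hex: %s  checksum: %u\n", hex, ascii_checksum(hex));
    return 0;
}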
 
Menchar
 
 
 


Message 4 of 9

For what it's worth, a 7.1-compiled source run on the new system with the 8.1 runtime returned 80000000h. So far it looks like there could have been a "fix" to the CVI runtime. I remember reading your thread on the serial issue; tough situation. I am probably in a better position than you were in. It could be argued that in this case the runtime is backward compatible, that the representation of -0.0 without the MSB set was in fact a bug, and that it has been fixed. But whatever has changed has exposed an issue in our system. I can accept that, as perfect compatibility is too much to ask for.

I do not control the input data stream. Technically -0.0 is probably invalid, but it is out of my control. Gotta love a legacy system. It is easy to deal with from a code standpoint, as I can trap for the value 80000000h and convert it to 0; no issue there. But my problem is similar to what you experienced, in that a change to NI hardware or software on fielded systems can cause a runtime upgrade, which will then expose this issue in our CVI-based application. The validation process for this software on a hardware system is not trivial. If this is in fact runtime-library related, I do have a paperwork-based path to force an application software update if the runtime gets updated on the system. But to do this I need to state specifically which runtime versions will trigger the change. I am going to have to prove that I understand the nature of the issue, when it will occur, and how it will be corrected.
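
In case it helps anyone else, a minimal sketch of that trap (assuming a 32-bit unsigned int and IEEE 754 single precision; scrub_negative_zero is a made-up name):

#include <string.h>

static void scrub_negative_zero(float *pf)
{
    unsigned int bits;

    memcpy(&bits, pf, sizeof bits);   /* copy out the stored bit pattern */
    if (bits == 0x80000000u)          /* the -0.0 pattern, 80000000h */
        *pf = 0.0f;                   /* rewrite as +0.0 before checksumming */
}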

Some of the usage of this system is very deeply buried in bureaucracy; I am sure you know how that goes.

Message 5 of 9
Nowadays we tend to put a CVI runtime in each application's directory. OK, this means there may be unnecessary duplication of files on a system with many CVI programs, but hard drive space is cheap, and it provides some protection against the installation of a newer runtime engine breaking a perfectly satisfactory older application. I've seen no end of problems caused by supposed improvements and bug fixes in newer runtimes.
 
JR
Message 6 of 9
And we've considered the inverse of that: not deploying a runtime with the applications, and doing a single, carefully controlled install of the required RTE version. But this is still vulnerable to someone innocently installing almost any application or driver from NI, or any CVI-generated distribution, and unknowingly updating the RTE.
 
It's further complicated by the fact that NI won't even tell you exactly what the file manifest for the RTE is! That is, exactly which DLLs are considered part of the RTE and how to recognize them in a system folder.
 
And you can't uninstall a CVI RTE using Add/Remove Programs in the Windows Control Panel!
 
I have to think a more sophisticated, professionally capable tool would provide full control over the RTE particulars. You can only deploy the (single) RTE version that matches the IDE's version. Even if you install two different CVI versions on your development system, only the latest version of the RTE will be used, without warning, even if you build an application using the older CVI version!
 
It all kind of works so long as your apps are always all OK with the latest RTE version, whatever NI decides that may be, but heaven help you if they aren't; you're in for a lot of hassle.
 
Menchar
 
 
Message 7 of 9

Hello mvr,

I can confirm that the implementation of atof changed between 8.0.1 and 8.1. The change was made to fix a bug in which the conversion was incorrectly affected by the number of digits of precision in the input string (for example, "8.0" might have converted differently than "8.0000").
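
If it's useful, here is a quick (hypothetical) probe for that precision dependence: convert both spellings and compare the stored bit patterns; on an affected runtime they could differ.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    double a = atof("8.0");
    double b = atof("8.0000");

    printf("bit patterns %s\n",
           memcmp(&a, &b, sizeof a) == 0 ? "match" : "differ");
    return 0;
}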

To fix this bug, we replaced our own implementation of strtod with a third-party implementation (for specifics on the implementation we're now using, look in the CVI online help, in the Important Information>>Copyright topic). That change must be what introduced the sign-bit difference you ran into. We didn't list this in the release documentation for 8.1 because our testing hadn't uncovered any behavior change as a result. Obviously, we missed this one. I'm really sorry for the trouble this has caused.

Luis

Message 8 of 9
Thanks for the reply, Luis; this is going to help me out considerably.
We are having to explain ourselves a bit, since the addition of a piece of hardware to our test set, unrelated to my application, caused a change in our software's behavior.
We knew the runtime on the system had been updated, but did not see anything in the published bug fixes and updates that would have impacted our application enough to trigger a re-verification. What helps me out is that, first, your information nails down the specific runtime versions that will trigger this event on our system, so we can now explicitly document when the issue will occur. Second, since the issue was unknown even to you, despite the extensive testing CVI undergoes, it falls into a class of issues that amounts to a one-off event. That is much less painful for us to deal with than if we ourselves had overlooked a known issue.
 
We are going to re-evaluate the use of a local copy of the runtime library in the application directory for these kinds of projects, where almost any change to the system can generate a considerable bureaucratic response. Having the option to run CVI with either a global or a local runtime is a significant advantage here. It is just a fact of life that runtime changes will lead to the inevitable unexpected change in system behaviour. I really appreciate everyone's help in nailing this one down.
 
Thanks!
Message 9 of 9