I am attempting to use a LabVIEW application to read an array of binary single-precision floating-point numbers transferred over a TCP/IP connection from a Windows C++ program. The byte-order (endian) swap is done before the values are sent to the LabVIEW application. When I read the values in LabVIEW, some are interpreted correctly and some are not. For instance, the C++ program sends a 6 as one element and a 7 as another, and LabVIEW interprets both as 6. The difference between 6 and 7 in single-precision format is one bit (bit #21, counting from 0, in LabVIEW's bit ordering). Two other values show the same error: 459.67 is sent but 395.67 is read (one-bit difference, exact same bit), and 7.5 is sent but 6.5 is read (one-bit difference, exact same bit). LabVIEW reads that bit as 0 when it should be 1.
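For reference, the send path is roughly equivalent to the sketch below (simplified for illustration; the function and variable names here are made up, not our actual code). Each float is reinterpreted as 32 raw bits, byte-swapped to big-endian, packed into a buffer, and the whole buffer is written to the socket with a single send():

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Swap a 32-bit value between little-endian (x86) and big-endian byte order.
static std::uint32_t swapBytes(std::uint32_t v)
{
    return ((v & 0x000000FFu) << 24) |
           ((v & 0x0000FF00u) <<  8) |
           ((v & 0x00FF0000u) >>  8) |
           ((v & 0xFF000000u) >> 24);
}

// Pack an array of floats into a byte-swapped buffer ready for send().
static std::vector<char> packFloats(const float* data, std::size_t count)
{
    std::vector<char> buffer(count * sizeof(float));
    for (std::size_t i = 0; i < count; ++i)
    {
        std::uint32_t bits = 0;
        std::memcpy(&bits, &data[i], sizeof bits);  // float -> raw bits
        bits = swapBytes(bits);                     // to big-endian
        std::memcpy(&buffer[i * sizeof(float)], &bits, sizeof bits);
    }
    return buffer;
}

int main()
{
    const float values[] = { 6.0f, 7.0f, 7.5f, 459.67f };
    std::vector<char> buffer = packFloats(values, sizeof values / sizeof values[0]);
    (void)buffer;
    // The buffer is then written to the Winsock socket in one call, e.g.:
    //   send(sock, buffer.data(), static_cast<int>(buffer.size()), 0);
    return 0;
}
```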
This seems very odd to me because most values are read correctly, including other values that have that same bit set. Values both before and after the incorrect ones in the array are read correctly, so it does not appear to be a simple offset in the bit stream. Additionally, when I send an array of values with all bits set, I read back a strange repeating pattern: 111110011111100111111001111110.
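To be concrete about the single-bit difference described above, here is a minimal C++ check that prints the IEEE 754 bit patterns of the affected value pairs (a diagnostic sketch only, not part of our transmit code):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Print the IEEE 754 single-precision bit pattern of a float, MSB first.
static void printBits(float value)
{
    std::uint32_t bits = 0;
    std::memcpy(&bits, &value, sizeof bits);  // reinterpret the float's bytes
    for (int i = 31; i >= 0; --i)
        std::putchar(((bits >> i) & 1u) ? '1' : '0');
    std::printf("  (%g)\n", value);
}

int main()
{
    printBits(7.0f);  // 01000000111000000000000000000000
    printBits(6.0f);  // 01000000110000000000000000000000  <- only bit 21 differs
    printBits(7.5f);  // 01000000111100000000000000000000
    printBits(6.5f);  // 01000000110100000000000000000000  <- same bit differs
}
```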
We have also verified that the binary representation of the values is the same on both machines once the byte swapping is accounted for. What am I missing here? How in the world can some values come across correctly and others incorrectly? Any help would be greatly appreciated! Our fallback is to transfer everything in ASCII, which would greatly increase packet sizes.
Thanks,
Jason