How to read binary floating point values from TCP/IP

I am attempting to use a LabVIEW application to read an array of binary single-precision floating-point numbers transferred over a TCP/IP connection from a Windows C++ program. The byte-order (endian) conversion is done before the values are sent to the LabVIEW application. When I read the values in LabVIEW, some are interpreted correctly and some are not. For instance, the C++ program sends a 6 as one element and a 7 as another; LabVIEW interprets both as 6. The difference between 6 and 7 in binary format is one bit (bit #21, counting from 0 in LabVIEW's representation). Two other values show the same error: 459.67 is sent but 395.67 is read by LabVIEW (a one-bit difference, in exactly the same bit), and 7.5 is sent but 6.5 is read by LabVIEW (again a one-bit difference, in the same bit). LabVIEW reads that bit as 0 when it should be 1.
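For context, here is a minimal sketch of how the sending side might pack the floats. It assumes Winsock on the Windows machine and a swap to big-endian byte order (the order LabVIEW's Type Cast expects for flattened data); the send_floats() name and the sock handle are illustrative, not from the actual application.

```cpp
// Hypothetical sketch of the sending side: pack single-precision floats
// in network (big-endian) byte order before writing them to the socket.
// send_floats() and 'sock' are illustrative names only.
#include <cstdint>
#include <cstring>
#include <vector>
#include <winsock2.h>   // SOCKET, send(), htonl()

bool send_floats(SOCKET sock, const std::vector<float>& values)
{
    std::vector<char> buffer(values.size() * sizeof(float));
    for (size_t i = 0; i < values.size(); ++i)
    {
        uint32_t bits;
        std::memcpy(&bits, &values[i], sizeof(bits));  // reinterpret IEEE-754 bits
        bits = htonl(bits);                            // host order -> big-endian
        std::memcpy(&buffer[i * sizeof(float)], &bits, sizeof(bits));
    }
    // A single send() is assumed here; a robust version would loop on
    // partial sends.
    return send(sock, buffer.data(), static_cast<int>(buffer.size()), 0)
           == static_cast<int>(buffer.size());
}
```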

This seems very odd to me because most values are read correctly, including other values with that same bit set. Values both before and after the incorrect ones in the array are read correctly, so it's not simply an offset in the bit stream. Additionally, when I attempt to read an array of values with all bits set, I get a strange repeating pattern: 111110011111100111111001111110.

We have also verified that the binary representation of the values is the same on both machines once you account for the byte swapping. What am I missing here? How in the world can some values come across correctly and others incorrectly? Any help would be greatly appreciated! Our fallback is to transfer everything as ASCII, which will greatly increase packet sizes.

Thanks,
Jason
Message 1 of 2
(3,369 Views)
Update:

Problem fixed. I say "fixed" and not "solved" because I don't know why it's fixed. I had been storing the TCP/IP Read output in a string and passing it through a shift register after each read, then concatenating it with the TCP/IP Read result in the next loop iteration. After each read, the code would search the concatenated string for ASCII flags; when it found them, it would strip off the flags and type cast the rest as single-precision floating-point values.
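To make that logic concrete, here is a rough C++ model of the flag-delimited parsing described above. It is a sketch under my own assumptions, not the actual code: the single-byte flag, the parse_frame() name, and the big-endian payload layout are all illustrative. One general hazard with this kind of scheme is that the flag's byte value can also occur inside the 4-byte IEEE-754 representation of a float, so a value-based search can match in the wrong place.

```cpp
// Rough model of flag-delimited parsing of a binary float stream.
// The flag value and parse_frame() are illustrative, not the poster's
// actual code. Note: a search by byte value can also hit a byte that
// belongs to a float's IEEE-754 representation.
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
#include <winsock2.h>   // ntohl()

std::vector<float> parse_frame(const std::string& accumulated, char flag)
{
    std::vector<float> out;
    // Find the flag, then treat everything after it as packed big-endian floats.
    size_t pos = accumulated.find(flag);
    if (pos == std::string::npos)
        return out;                         // flag not received yet
    std::string payload = accumulated.substr(pos + 1);
    for (size_t i = 0; i + sizeof(float) <= payload.size(); i += sizeof(float))
    {
        uint32_t bits;
        std::memcpy(&bits, payload.data() + i, sizeof(bits));
        bits = ntohl(bits);                 // big-endian -> host order
        float value;
        std::memcpy(&value, &bits, sizeof(value));
        out.push_back(value);
    }
    return out;
}
```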

I knew I would be getting the same number of bytes each time, so I ditched the ASCII flags and had just the binary values sent. This way I expect to get all of the values in one TCP/IP read. No values are passed through a shift register to the next loop iteration, and there is no concatenation of the string outputs from TCP/IP Read.
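Here is a small sketch of what that fixed-size, flag-free exchange looks like from a receiver's point of view, written in C++ only for illustration (the real receiver is LabVIEW's TCP Read, which, given a byte count in standard mode, similarly waits for the requested number of bytes or a timeout). The recv_floats() name, the sock handle, and the assumption that the element count is known up front are mine.

```cpp
// Sketch of reading a fixed-size block of big-endian floats, assuming the
// receiver knows the element count in advance. recv_floats() and 'sock'
// are illustrative names.
#include <cstdint>
#include <cstring>
#include <vector>
#include <winsock2.h>   // SOCKET, recv(), ntohl()

bool recv_floats(SOCKET sock, size_t count, std::vector<float>& out)
{
    std::vector<char> buffer(count * sizeof(float));
    size_t received = 0;
    while (received < buffer.size())        // TCP may deliver the block in pieces
    {
        int n = recv(sock, buffer.data() + received,
                     static_cast<int>(buffer.size() - received), 0);
        if (n <= 0)
            return false;                   // error or connection closed
        received += static_cast<size_t>(n);
    }
    out.resize(count);
    for (size_t i = 0; i < count; ++i)
    {
        uint32_t bits;
        std::memcpy(&bits, &buffer[i * sizeof(float)], sizeof(bits));
        bits = ntohl(bits);                 // big-endian -> host order
        std::memcpy(&out[i], &bits, sizeof(float));
    }
    return true;
}
```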

I'm not sure whether it was the inclusion of the ASCII flags or the way I was manipulating the string that caused the binary values to be interpreted incorrectly. Hope this helps someone else.

Jason
Message 2 of 2
(3,361 Views)