03-08-2010 02:00 PM
LabVIEW experts,
I have a math question about QNAN and typecasting. I have written code that takes a hex string and converts it to a floating-point number. For example:
hex: 0000C0C3
I swap the bytes to get C3C00000, integer value 3284140032, and then typecast it using 0.00 as the type. For hex strings that evaluate to negative numbers, such as 0000C0C3, after I swap the bytes and typecast I get -384. Using this same procedure in C++, I get 1.#QNAN00. How does the LabVIEW Type Cast function actually work?
03-08-2010 02:18 PM
Reading my question over again, I don't think I explained it too well. If I take 0000C0C3 as a string and pass it to the "Hexadecimal String to Number" function, I get an integer value of 49347, which I typecast to a double for a final value of 6.91499E-41.
If I swap the bytes of 0000C0C3 to C3C00000 and repeat the process, the "Hexadecimal String to Number" function returns an integer value of 3284140032, which I typecast to a double for a final value of -384.
The Type Cast help file says it returns the value as *(type*)&x. When I use this same procedure in C++, it works for all positive numbers, but I get 1.#QNAN00 on negative numbers. I'm just trying to understand how LabVIEW typecasts the negative numbers.
03-08-2010 02:34 PM
Hi shivels,
what happens when you typecast to a SGL (as you have a 4-byte number) in both LV and C?
Typecasting means reinterpreting a byte stream. 0xC3C00000 represents a U32 = 3284140032, an I32 = -1010827264, or a SGL = -384. All use the same binary representation; only the interpretation differs...
03-08-2010 04:17 PM
Thanks guys, that's the info I was looking for. I kind of thought LabVIEW had to be doing something funny, but wasn't sure. Now I'm off to try and figure out how to prevent the 1.#QNAN00 in my C++ code.
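One plausible way to avoid the NaN, assuming the goal is a double equal to -384: reinterpret the 4 bytes as a single-precision float first, then widen to double by value conversion rather than by reinterpreting bits. The helper name here is hypothetical:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper: reinterpret 4 bytes as an IEEE 754 single,
// then widen to double. The widening is a value conversion, so the
// double bit pattern is built correctly instead of being reinterpreted.
double hexBitsToDouble(uint32_t bits) {
    float f;
    std::memcpy(&f, &bits, sizeof f);  // bit-for-bit reinterpretation
    return static_cast<double>(f);     // value-preserving conversion
}
```

For example, hexBitsToDouble(0xC3C00000u) yields -384.0.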
Cheers,
Shivels