10-05-2011 01:48 AM
Hello,
I have what I think is a really basic problem: converting a hex number back to a numeric value so I can do some calculations with it.
The situation: I am reading a distance from an ultrasonic sensor over serial via VISA Read. It's being read into an ordinary string indicator set to hex display. With the sensor about 50 cm from a wall, the display shows "0032", which is great, since hex 32 is 50 decimal.
However, my problem is: how do I now convert this hex 0032 into a usable decimal/numeric value?
Thanks heaps for your help.
Nick.
Solved! Go to Solution.
10-05-2011 01:54 AM - edited 10-05-2011 02:35 AM
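(The marked solution is LabVIEW block-diagram code and can't be reproduced as text here; per the follow-up posts it was a typecast. A rough text-language sketch of the same idea, assuming the VISA read delivers the two raw bytes 0x00 0x32 that the indicator shows as "0032":)

```python
import struct

# Assumption: the string indicator is in hex *display* mode, so the
# string actually holds the two raw bytes 0x00 0x32, not the four
# ASCII characters "0032".
raw = b"\x00\x32"

# Equivalent of LabVIEW's Type Cast to a U16: interpret the bytes as
# one big-endian unsigned 16-bit integer.
(distance,) = struct.unpack(">H", raw)
print(distance)  # 50
```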
10-05-2011 02:22 AM
Wow, thank you! That was so much easier than I imagined; my attempt was huge and got nowhere! 😉
Thanks again!
10-05-2011 03:28 AM
You have to be careful when typecasting from a hex string into a numeric unless the string and the numeric contain the same number of bytes.
For example, if you have 3 bytes (112233 hex) that represent one number (1122867 decimal), you would have to store it in a 32-bit number. If you typecast it into a U32, the first byte of your hex string becomes the most significant byte.
Your U32 then contains a value of 11223300 hex, which is 287453952 decimal, quite different from the intended 1122867.
To avoid this, I generally convert the hex string to a byte array and merge the bytes myself with the Join Numbers function.
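(The pitfall Steve describes, and his byte-by-byte workaround, can be sketched in a text language like this; LabVIEW's Join Numbers is modeled here as a shift-and-OR loop:)

```python
import struct

# A 3-byte reading 0x11 0x22 0x33, intended value 0x112233 = 1122867.
raw = b"\x11\x22\x33"

# The pitfall: typecasting a 3-byte string into a 4-byte U32 puts the
# first byte in the most significant position, as if the string were
# padded with zeros on the *right*.
(wrong,) = struct.unpack(">I", raw + b"\x00")
print(hex(wrong), wrong)  # 0x11223300 287453952

# Steve's workaround: walk the byte array and join the bytes manually.
value = 0
for b in raw:
    value = (value << 8) | b
print(hex(value), value)  # 0x112233 1122867
```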
Steve
10-05-2011 09:54 AM - edited 10-05-2011 09:57 AM
@Stephen C wrote:
You have to be careful when typecasting from a hex string into a numeric unless the string and the numeric contain the same number of bytes.
For example, if you have 3 bytes (112233 hex) that represent one number (1122867 decimal), you would have to store it in a 32-bit number. If you typecast it into a U32, the first byte of your hex string becomes the most significant byte.
Your U32 then contains a value of 11223300 hex, which is 287453952 decimal, quite different from the intended 1122867.
To avoid this, I generally convert the hex string to a byte array and merge the bytes myself with the Join Numbers function.
Steve
Or you can simply force the string to the correct size by prepending the appropriate number of \0 characters, then do your typecast as usual. This approach is more general: you can change the size of your number without rewriting any code, whereas your solution has to be coded for each specific number size. The other alternative is to do the typecast first and then shift the result the appropriate number of bits to the right. Again, this is a general solution that works for any size of number.
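(Both of Mark's general-purpose variants, sketched in a text language under the same 3-byte example as above:)

```python
import struct

raw = b"\x11\x22\x33"  # 3-byte big-endian reading, intended value 0x112233

# Variant 1: left-pad with \0 bytes until the string matches the
# numeric type's size, then typecast normally.
padded = raw.rjust(4, b"\x00")           # b"\x00\x11\x22\x33"
(value,) = struct.unpack(">I", padded)
print(value)  # 1122867

# Variant 2: typecast first (the bytes land in the high end), then
# shift right by the number of missing bits.
(high,) = struct.unpack(">I", raw.ljust(4, b"\x00"))
print(high >> (8 * (4 - len(raw))))  # 1122867
```

Either variant keeps working if the instrument later sends 2 or 4 bytes, since only `len(raw)` changes.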
10-05-2011 10:22 AM
I agree, Mark.
The typecast is the best/easiest method; my post was just to point out the pitfall for anyone who searches for or otherwise finds this thread in the future.
In all the applications I have written, the data field sizes were fixed, which is why I opted for joining the numbers manually.
Regards,
Steve
10-05-2011 11:08 AM
In all cases I've seen, binary strings have a fixed length based on the intended datatype, and picking the correct datatype is all that's needed. Potentially more serious is an incorrect byte order (little endian vs. big endian); in that case an Unflatten From String might be easier. Sure, there can be special cases such as 24-bit integers.
Still, practical use of these functions requires familiarity with the underlying binary representation. If that familiarity is lacking, the programmer is encouraged to run a comprehensive set of artificial data through the algorithm to verify correct operation in all situations. For example, we are not sure whether I16 would be correct in the situation above; however, since we are measuring distances, I assume U16 is more likely the correct target.
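(Both points, byte order and signedness, are easy to demonstrate with artificial data; the two bytes below are a made-up test value, not a real sensor reading:)

```python
import struct

raw = b"\xff\x38"  # two artificial raw bytes for testing

# Byte order matters: the same two bytes give very different values.
(big,) = struct.unpack(">H", raw)     # big-endian U16
(little,) = struct.unpack("<H", raw)  # little-endian U16
print(big, little)  # 65336 14591

# Signedness matters too: interpreted as an I16 the same bytes read
# as a negative number, which could never be a valid distance, so a
# U16 target is the more plausible choice here.
(signed,) = struct.unpack(">h", raw)
print(signed)  # -200
```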