LabVIEW


Hex String to Numeric Value

Solved!

Hello,

 

I have a really basic problem (or so I think): converting a hex number back into a numeric value so I can calculate some things with it.

 

The situation is: I am reading a distance from an ultrasonic sensor through a serial VISA Read. It's being read into an ordinary string indicator set to hex display. With the sensor about 50 cm from a wall, the indicator shows "0032", which is great, since 32 in hex is 50 decimal.

 

However, my problem is: how do I now go about converting this hex 0032 into a usable decimal/numeric number?

 

Thanks heaps for your help.

 

Nick.

Message 1 of 7
Solution
Accepted by topic author nicklijic

Typecast the 2-byte string to a U16, e.g. as follows:
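(The original attachment was a LabVIEW snippet; as a rough text equivalent, here is a Python sketch of the same idea, assuming the two raw bytes from the VISA Read and the big-endian interpretation that Type Cast uses.)

    import struct

    raw = b"\x00\x32"                    # the two raw bytes from VISA Read (shown as "0032" in hex display)
    value = struct.unpack(">H", raw)[0]  # ">H" = big-endian unsigned 16-bit, like Type Cast to U16
    print(value)                         # 50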

 

 

Message 2 of 7

Wow, thank you! That was so much easier than I imagined; my version was huge... and got nowhere! 😉

 

Thanks again!

Message 3 of 7

You have to be careful when typecasting from a hex string into a numeric unless the string and the numeric contain the same number of bytes.

 

For example, if you have 3 bytes (112233 hex) that represent one number (1122867 decimal), you would have to represent it as a 32-bit number. If you typecast it into a U32, the first byte of your hex string becomes the most significant byte.

 

Your U32 then contains a value of 11223300 hex, which is 287453952 decimal, slightly different from the intended 1122867.

 

To avoid this, I generally convert the hex string to a byte array and then merge the bytes myself using the Join Numbers function.
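For anyone reading along, here is a rough Python sketch of the pitfall and of the manual merge described above (the loop stands in for the string-to-byte-array plus Join Numbers steps):

    raw = b"\x11\x22\x33"                         # 3-byte reading: 112233 hex = 1122867 decimal

    # Pitfall: the missing byte ends up on the right, so the data lands in the top bytes
    wrong = int.from_bytes(raw + b"\x00", "big")  # 11223300 hex = 287453952 decimal

    # Manual merge, byte by byte (what Join Numbers does for you)
    value = 0
    for b in raw:
        value = (value << 8) | b                  # 112233 hex = 1122867 decimal
    print(hex(wrong), value)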

 

Steve

 

 

Stephen C
Applications Engineer
Message 4 of 7

@Stephen C wrote:

You have to be careful when typecasting from a hex string into a numeric unless the string and the numeric contain the same number of bytes.

 

For example, if you have 3 bytes (112233 hex) that represent one number (1122867 decimal), you would have to represent it as a 32-bit number. If you typecast it into a U32, the first byte of your hex string becomes the most significant byte.

 

Your U32 then contains a value of 11223300 hex, which is 287453952 decimal, slightly different from the intended 1122867.

 

To avoid this, I generally convert the hex string to a byte array and then merge the bytes myself using the Join Numbers function.

 

Steve

 

 




Or you can simply force the string to be the correct size by adding the appropriate number of \0s to the left side of the string, then do your typecast. This approach is a bit more general: you can change the size of your number without needing to rewrite the code, whereas your solution has to be coded for each specific number size. The other alternative would be to do the typecast and then shift the resulting number the appropriate number of bits to the right. Again, this is a general solution that will work for any size of number.
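In sketch form (Python again, same assumptions as above; because the interpretation is big-endian, left-padding with \0 bytes does not change the value):

    raw = b"\x11\x22\x33"                                        # 3 data bytes

    # Option 1: pad with \0s on the LEFT up to the target size, then typecast
    value1 = int.from_bytes(raw.rjust(4, b"\x00"), "big")        # 1122867

    # Option 2: typecast as-is (data lands in the high bytes), then shift right
    cast = int.from_bytes(raw.ljust(4, b"\x00"), "big")          # 11223300 hex
    value2 = cast >> (8 * (4 - len(raw)))                        # shift out the pad byte -> 1122867

    assert value1 == value2 == 0x112233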



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 5 of 7

I agree, Mark,

 

The typecast is the best/easiest method. My post was just to point out the pitfall to those who might search for or otherwise find this post in the future.

 

For the applications I have written, the data field sizes were always fixed; that's why I opted for joining the numbers manually.

 

Regards,

 

Steve

 

Stephen C
Applications Engineer
Message 6 of 7

In all cases I've seen, binary strings have a fixed length based on the intended datatype, and picking the correct datatype is all that's needed. Potentially more serious is an incorrect byte order (little-endian vs. big-endian); in that case an Unflatten From String might be easier. Sure, there could be special cases such as 24-bit integers.

 

Still, the practical use of these functions requires familiarity with the underlying binary representation. If that is lacking, the programmer is encouraged to run a comprehensive set of artificial data through the algorithm to verify correct operation in all situations. For example, we are not sure whether I16 would be correct in the situation above; however, since we are measuring distances, I assume U16 is the more likely correct target.
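To illustrate both points, a small Python sketch (the byte values here are made up for illustration):

    import struct

    raw = b"\x00\x32"
    big    = struct.unpack(">H", raw)[0]   # 50     (big-endian, what Type Cast assumes)
    little = struct.unpack("<H", raw)[0]   # 12800  (little-endian: a very different value)

    # Signedness matters too, once the top bit is set
    neg = b"\xFF\x9C"
    as_u16 = struct.unpack(">H", neg)[0]   # 65436
    as_i16 = struct.unpack(">h", neg)[0]   # -100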

Message 7 of 7