
Easier Way to convert Hex to Integer

Solved!

This problem is mainly to do with a hex string to integer conversion.  The device, connected via a serial connection, outputs hex in the format 0A00 when viewed in hex.  The first two digits are the data; the last two belong to separate data points.  What I'm trying to do is strip the last 2 digits off so that 2A00 becomes 2A, and then convert the 2A into integer format (42).  I have come up with a method to do this, but it is very messy and occasionally results in data loss.  I have attached my vi (8.2).  The large integer being subtracted is a constant that keeps coming up for reasons I am not aware of.
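In a text language, the "strip the last two digits and convert" step would just be a substring plus a base-16 parse. A minimal Python sketch (assuming the reading really arrives as a 4-character ASCII hex string such as "2A00"):

# Take the first two hex digits and parse them as a base-16 integer.
def first_byte_value(reading):
    return int(reading[:2], 16)    # "2A00" -> "2A" -> 42

print(first_byte_value("2A00"))    # 42
print(first_byte_value("0A00"))    # 10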

 

Another issue I am coding around is the fact that sometimes a byte slips by, making 2A00 become 00BB or something of that format.  Is there a way to scan the string, determine whether the first two digits or the last two digits are the valid ones, and then separate out the valid string?  In this code, it would be necessary to allow the value 0000 to read 0.

 

If there is confusion, here are some examples of incoming data and the desired output:

0100 - 1

0A00 - 10

0005 - 5

000B - 11

etc.

Message 1 of 5
Solution
Accepted by topic author dark7flame

Since you are reading 2 bytes, it looks like you should be dealing with a 16-bit number. Typecast to U16 rather than I32 like you are doing now. And all of that string manipulation makes it look like you are getting a string of ASCII-formatted characters like "0" "A" "1" "B" rather than a 2-byte string of the ASCII characters 0A and 1B. Then you can split the U16 number and take the higher-order byte.
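The same idea as a Python sketch (assuming the serial read returns the two raw bytes, high byte first, e.g. b'\x0a\x00'):

import struct

# Interpret the 2 raw bytes as one unsigned 16-bit number (the U16 typecast),
# then keep only the high-order byte.
def high_byte(raw):
    value = struct.unpack('>H', raw)[0]
    return value >> 8

print(high_byte(b'\x0a\x00'))    # 10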

 

I can't explain why you would be having a byte slip by, unless it has something to do with all that string manipulation you are trying to do.  But if it still happens, and you know that one byte or the other is always zero, just take the high-order byte and add it to the low-order byte.  Since one is zero, it will have no effect on the other byte.
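As a sketch (same assumption as above, two raw bytes with the high byte first): split the 16-bit value into its high and low bytes and add them; whichever half is unused is zero, so the sum is always the data value.

import struct

def data_value(raw):
    value = struct.unpack('>H', raw)[0]
    return (value >> 8) + (value & 0xFF)   # high byte + low byte

print(data_value(b'\x0a\x00'))    # 10 (data in the high byte)
print(data_value(b'\x00\x0b'))    # 11 (data slipped into the low byte)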

 

 

Message Edited by Ravens Fan on 07-07-2009 01:54 PM
Message 2 of 5
There's something about this that doesn't make sense. You said that you're reading 2 bytes, and the first byte is the data, while the second byte is "attached to separate data points", whatever that means. Yet, in the list of examples that you provided you sometimes have the data being the first byte, and sometimes it's the second byte. This is inconsistent. Are the examples you provided what you get because of this "byte slippage"?
Message 3 of 5
Yes smercurio, that is the byte slippage I'm referring to.  I think it has something to do with hardware issues, not software.  Good idea, Ravens Fan: I implemented your method and it works just fine.  Adding the two "sections" ends up giving the correct output 100% of the time, even with the data slipping.  That gets rid of the hardware problem by compensating for it in the code.  Thank you all for your help.
Message 4 of 5

It may not be hardware. It's possible you're simply not keeping up with the transmission. You should consider buffering your serial read, and have your code operate on the buffer.
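A rough sketch of that buffering idea in Python (the chunks would come from whatever your serial read returns; the 2-byte frame size comes from the format described above):

buffer = bytearray()

def process_chunk(chunk, handle_frame):
    # Append whatever the port handed us, then pull complete 2-byte
    # readings off the front so a split read never drops a byte.
    buffer.extend(chunk)
    while len(buffer) >= 2:
        frame = bytes(buffer[:2])
        del buffer[:2]
        handle_frame(frame)

# Even if the two bytes of 0x0A00 arrive in separate reads:
process_chunk(b'\x0a', print)
process_chunk(b'\x00', print)    # prints b'\n\x00' once the frame is complete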

 


Adding the two "sections" ends up giving the correct output 100% of the time, even with the data slipping. 

I assume by this you are referring to simply adding the high and low byte outputs of the Split Number function. In that case, yes, you'd get the number.

 

Message 5 of 5