11-02-2021 01:32 AM
I'm currently working on a temperature measurement program, and the instrument sends the temperature in hexadecimal. The protocol says that negative values are encoded as two's complement, so I would like to convert the hexadecimal data to binary and then, if the most significant bit is 0, convert it to decimal directly, or, if it is 1, undo the two's complement.
How can I check whether the most significant bit is 0 or 1?
If anyone knows, please let me know.
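In text form, the procedure described above would look like this (a minimal Python sketch for illustration only; it assumes 16-bit values, and the sample readings are made up):

def decode_temperature(hex_str: str) -> int:
    raw = int(hex_str, 16)   # hex text -> unsigned integer (0..65535)
    if raw & 0x8000:         # most significant bit set -> negative reading
        raw -= 0x10000       # undo two's complement: subtract 2**16
    return raw

print(decode_temperature("00C8"))   # 200  (MSB = 0, value taken as-is)
print(decode_temperature("FF38"))   # -200 (MSB = 1, two's complement)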
Translated with www.DeepL.com/Translator (free version)
11-02-2021 01:58 AM - edited 11-02-2021 02:05 AM
(The standard method for checking any particular bit is masking: AND your value with a constant that has only that bit set, then check the result for != 0.) Most likely you don't need any of this, though, if you just scan or cast your binary data to a signed integer.
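As a textual illustration of both approaches (a minimal Python sketch; the value 0xFF38 is a made-up example, and struct stands in for a LabVIEW cast):

import struct

value = 0xFF38                    # hypothetical 16-bit reading

# Bit test by masking: AND with a constant that has only the MSB set,
# then compare the result against zero.
msb_set = (value & 0x8000) != 0   # True -> the value is negative

# Usually simpler: reinterpret the same 16 bits as a signed integer.
signed = struct.unpack(">h", value.to_bytes(2, "big"))[0]   # -200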
Can you attach a simple VI containing a typical received string as a diagram constant? How many bytes per value? Is the data (1) a formatted string using the characters 0..F in normal display, or (2) a binary string that "looks right" when the string is set to hex display?
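To make the distinction concrete, the two cases would be parsed differently; a minimal Python sketch, using "FF38" as a made-up 2-byte reading:

import struct

# Case (1): formatted text, e.g. the four characters "FF38"
formatted = "FF38"
value1 = struct.unpack(">h", bytes.fromhex(formatted))[0]   # -200

# Case (2): raw binary, e.g. two bytes that show as FF38 in hex display
raw = b"\xFF\x38"
value2 = struct.unpack(">h", raw)[0]                        # -200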
(Please don't include spam for translation software at the end of your post. It is irrelevant to the problem!)
11-02-2021 02:15 AM
The upper part answers your question (assuming the value has 16 bits).
But just let LabVIEW do this conversion for you with one simple function.
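In textual form, that one-function conversion amounts to a single signed reinterpretation of the received bytes. A minimal Python analogue (the post doesn't name the LabVIEW function; a Type Cast to I16 would be one candidate, and the byte values here are made up):

value = int.from_bytes(b"\xFF\x38", byteorder="big", signed=True)
print(value)   # -200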