01-19-2015 04:22 PM - edited 01-19-2015 04:23 PM
Hi all,
I am having problems using hex values in a Formula Node. It appears that a hex value is interpreted as a positive decimal number and then assigned to the variable. For example, if I initialize an int8 in a Formula Node to 0xFE (which should be -2), I get 127 (the highest positive value an int8 can hold). In order to get -2, I must initialize the variable to -0x02. However, if I initialize the variable to -2 decimal and view it in an indicator in hex format, it displays the correct 0xFE. Is this typical behaviour? I didn't think negative hex values were used, and I can't find this documented anywhere. Here is a VI that shows the "problem".
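In text form, the Formula Node is essentially these three assignments (expected vs. observed values in the comments):

int8 test = 0xFE;    // expected 0xFE (-2), but the indicator shows 0x7F (127)
int16 test2 = test;  // I expected 0xFE here as well, but it also shows 0x7F
int8 test3 = -0x02;  // shows 0xFE (-2) as expected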
Thanks!
01-19-2015 05:10 PM
That looks like an unfortunate artifact of C behavior, where there are no negative numeric literals. The value "-2" is actually interpreted as the unary negation operator applied to the positive value 2. Similarly, all hex literals are positive, but the negation operator still works on them.
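To see the same literal rules in plain C (a minimal sketch, using int8_t as a stand-in for the Formula Node's int8; note that C typically wraps an out-of-range conversion where LabVIEW apparently coerces to 127, so the last line behaves differently in the two environments):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* There is no negative literal: -2 parses as unary minus applied to 2, */
    /* and 0xFE parses as the positive int value 254.                       */
    int8_t a = -2;      /* stored as 0xFE in two's complement */
    int8_t b = -0x02;   /* same thing: positive 0x02, then negated */
    int8_t c = 0xFE;    /* implementation-defined in C; typically wraps to -2 */
    printf("a = %d (0x%02X)\n", a, (uint8_t)a);  /* a = -2 (0xFE) */
    printf("b = %d (0x%02X)\n", b, (uint8_t)b);  /* b = -2 (0xFE) */
    printf("c = %d (0x%02X)\n", c, (uint8_t)c);  /* typically c = -2 (0xFE) */
    return 0;
}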
I don't see a convenient solution to directly interpret the hex bits as a 2's complement value, or even to reinterpret the value via a cast. I'm not a Formula Node expert, though, so maybe others know of a way.
01-20-2015 01:22 AM
Hi brett,
int8 test = 0xFE;
You defined a variable to be of type I8, but then you assign a value that is out of range for that datatype: 0xFE is read as the positive value 254, which exceeds the int8 maximum of 127. LabVIEW coerces the value to fit the datatype, so 0x7F is stored in test.
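If it helps, here is a rough C model of that coercion (saturate_to_i8 is a hypothetical helper for illustration, not anything LabVIEW exposes):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical model of LabVIEW's out-of-range coercion: clamp to the target range. */
static int8_t saturate_to_i8(long value)
{
    if (value > INT8_MAX) return INT8_MAX;  /* 0xFE = 254 -> 127 (0x7F) */
    if (value < INT8_MIN) return INT8_MIN;
    return (int8_t)value;
}

int main(void)
{
    printf("%d\n", saturate_to_i8(0xFE));   /* prints 127 */
    printf("%d\n", saturate_to_i8(-0x02));  /* prints -2  */
    return 0;
}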
int16 test2 = test;
Now you copy the value stored in test over to test2. Why do you even expect a value of 0xFE here? In the line before, you stored 0x7F in test, so test2 will also hold the value 0x7F!
int8 test3 = -0x02;
This was explained above: 0x02 is a positive literal that is in range for an int8, and the unary minus negates it, so test3 holds -2, which a hex-formatted indicator displays as 0xFE.
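For completeness, the underlying two's-complement arithmetic: an 8-bit pattern p with the high bit set represents the value p - 0x100, so 0xFE represents 254 - 256 = -2. If you ever need to turn a raw hex pattern into its signed value inside the Formula Node, something like this should work (a sketch; untested in the Formula Node itself, but it is plain integer arithmetic):

int32 p = 0xFE;                          // the raw pattern, held in a wider type
int8 v = (p >= 0x80) ? p - 0x100 : p;    // gives -2, i.e. the bit pattern 0xFE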