I'm working on an FPGA project. I have a fixed-point number with a type (+/-, 17, 4):
If I take the absolute value of this number, I get a (+, 17, 4) type as output. Naively, that makes sense:
On second thought, though, we don't need the bit that tells us the sign, so I'm actually expecting a (+, 16, 3) number. If I create a typical (+, 17, 4) fixed-point number, I see this range:
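To make the ranges concrete, here's a small sketch in Python (since LabVIEW diagrams don't paste into text; `fxp_range` is just an illustrative helper I made up, not a LabVIEW API):

```python
# Sketch of LabVIEW-style FXP ranges: (signed?, word_length, integer_length).
def fxp_range(signed, word_length, int_length):
    """Return (min, max, step) for an FXP type with the given lengths."""
    step = 2.0 ** (int_length - word_length)   # weight of the LSB
    if signed:
        lo = -(2.0 ** (int_length - 1))        # e.g. -8 for (+/-,17,4)
        hi = 2.0 ** (int_length - 1) - step    # e.g. 8 - 2**-13
    else:
        lo = 0.0
        hi = 2.0 ** int_length - step
    return lo, hi, step

print(fxp_range(True, 17, 4))    # (-8.0, 7.9998779296875, 0.0001220703125)
print(fxp_range(False, 17, 4))   # (0.0, 15.9998779296875, 0.0001220703125)
print(fxp_range(False, 16, 3))   # (0.0, 7.9998779296875, 0.0001220703125)
```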
Is this a bug in the IDE?
I see the same in a LV2019 VI within default "My computer" environment (which is Windows).
The result of the Abs() function is an indicator with FXP type <unsigned,17,4>. This indicator cannot represent values larger than 8 (as shown in your 2nd image). No coercion dot in the block diagram.
Once I edit the FXP type of this indicator (by switching to signed and back to unsigned), it behaves as expected for <unsigned,17,4>: it can now also represent values from 8 to 16. And one more note: still no coercion dot in the block diagram…
I guess some subtle bug in the IDE.
I don't think it actually is a bug; it's a numeric necessity.
What's the lowest negative number your initial FXP can hold? -8 right?
What's the highest positive number your new FXP needs to be able to hold? 8 right?
What's the highest number either of your two proposed FXP numbers can hold?
+17,4 can represent 8
+16,3 can represent at most 7.9998779… (that is, 8 - 1/8192)
You potentially lose information if you don't widen the result. This is because two's-complement binary numbers are not symmetric around zero: there is always one negative number without a positive counterpart. When taking absolute values, this would otherwise lead to errors.
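The asymmetry can be sketched with raw integers in Python (my illustration of two's-complement FXP encoding, not LabVIEW's internals):

```python
# Why abs() must widen: with 17 bits signed and 4 integer bits, the most
# negative value is -8, but its absolute value (+8) does not fit in an
# unsigned type with only 3 integer bits.
WORD, INT = 17, 4                # word length 17, integer word length 4
FRAC = WORD - INT                # 13 fractional bits, LSB weight 2**-13
min_signed = -(1 << (WORD - 1))  # raw integer encoding of -8.0
abs_raw = abs(min_signed)        # raw integer encoding of +8.0
# Largest raw value an unsigned (+,16,3) type can hold (also 13 frac bits):
max_u16_3_raw = (1 << 16) - 1    # encodes 8 - 2**-13
print(abs_raw > max_u16_3_raw)   # True: +8 overflows a (+,16,3) type
print(abs_raw / 2**FRAC)         # 8.0
```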
but the information shown in the properties dialog is wrong for this <unsigned,17,4> FXP data type...
No they're not. They are probably not what you're expecting, but they're operating as planned.
It's a little-known feature of FXP handling in LabVIEW that, where the possible overflow of a given numeric operation is known, the resulting value range is stored as metadata. This actually prevents code from expanding the width of FXP values at each and every stage.
If you look at this example, numerically, the range of possible values of incrementing an FXP +4,4 three times is actually minimum 3 and maximum 18. The values are correct. I agree it's confusing because it doesn't correspond to the full range of the final control type (FXP +5,5). But without this meta-information, we would otherwise have to expand the width of the datatype on EACH +1, resulting in a +7,7 datatype, massively over-inflated.
LV tracks this meta information wherever it can to allow widening the FXP datatypes only when actually logically necessary. I agree it's really confusing and have in the past asked for a separate display to be used for this (Datatype limits and Logic limits for example).
So the fact that the dialog above shows a range up to and including "8" for a +17,4 datatype is precisely why it needs to be a +17,4 datatype and NOT a +16,3 datatype, because then the maximum would be one bit lower than "8". Again, I agree it's confusing, but it's actually a really cool under-the-hood implementation which helps save a lot of resources on FPGA targets. I just wish it would be displayed in a more deliberate manner so that it can be better understood.
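The bookkeeping can be sketched like this (my interpretation of the idea, not LabVIEW's actual implementation; `bits_for` is a hypothetical helper):

```python
# Track the logical [min, max] interval through operations to decide when
# the datatype really needs to widen, instead of widening on every +1.
import math

def bits_for(max_val):
    """Integer bits needed to represent max_val (unsigned, integer-valued)."""
    return max(1, math.ceil(math.log2(max_val + 1)))

# Start: unsigned FXP +4,4 -- full type range 0..15, logical range the same.
lo, hi = 0, 15
for _ in range(3):          # three +1 increments
    lo, hi = lo + 1, hi + 1
print((lo, hi))             # (3, 18): the logical range after +1 +1 +1
print(bits_for(hi))         # 5: a +5,5 type suffices
print(bits_for(15) + 3)     # 7: naive widening by one bit per add -> +7,7
```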
then I would call it a bug of the properties dialog that it shows values for encoding and range of an FXP which don't fit together.
An FXP of <unsigned,17,4> does NOT support a range of 0…8 with steps of 1/8192 (as shown in image 2 of the 1st message), but it can hold values from 0…16-1/8192, with steps of 1/8192 (as shown in image 3 of the 1st message).
I agree it's really confusing and have in the past asked for a separate display to be used for this (Datatype limits and Logic limits for example).
Is there a LabVIEW Idea entry I can support?
If you accept it as being the properties of the precise item you right-clicked on, then it's correct under the assumption that "range" means logical range, for which there is no other precedent in LabVIEW.
Again, confusing, but not wrong.
Bug? Yeah, but which part?
I don't think there's an idea for this yet. I recall discussing it at some stage years ago. It's been sitting in my brain with a big red flag attached ever since I, too, thought LV had gone completely insane in the past.
Thanks, Intaris. That was very helpful. I also noticed that the output of the "negate" operator added another bit to my FXP number. This makes sense based on your explanation that the FXP isn't always symmetric about 0. Cool.
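The negate case can be sketched the same way (Python stand-in; the (+/-,17,4) range is taken from the numbers discussed above):

```python
# Negate needs an extra bit for the same asymmetry reason as abs():
# -(-8) = +8, which exceeds the positive maximum of (+/-,17,4).
WORD, INT = 17, 4
min_val = -(2 ** (INT - 1))                   # -8, in range for (+/-,17,4)
max_val = 2 ** (INT - 1) - 2 ** (INT - WORD)  # 8 - 2**-13, the positive max
print(-min_val > max_val)                     # True: +8 doesn't fit, so
                                              # negate must widen the type
```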