06-10-2016 12:57 AM
06-10-2016 02:20 AM
Hello Madottati,
Try to change the constant type to the same representation as the Enum.
If it is U16, set it to U16.
In the Typecast help you can read
"Effects of Mismatching the Sizes of X and Type
This function can generate unexpected data if x and type are not the same size. If x requires more bits of storage than type, this function uses the upper bytes of x and discards the remaining lower bytes. If x is of a smaller data type than type, this function moves the data in x to the upper bytes of type and fills the remaining bytes with zeros. For example, an 8-bit unsigned integer with value 1 type cast to a 16-bit unsigned integer results in a value of 256"
That's what happens in your case.
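The 1 → 256 example from the help can be reproduced in any language that lets you reinterpret bytes. Here is a rough Python sketch (my own analogy, not LabVIEW code), using `struct` with big-endian byte order to match how LabVIEW flattens data:

```python
import struct

# LabVIEW's Type Cast reinterprets the flattened (big-endian) bytes of x
# as the target type. When x (a U8) is SMALLER than the target (U16),
# its byte is moved to the UPPER byte and the lower byte is zero-filled.
x = struct.pack('>B', 1)                 # U8 value 1 -> b'\x01'
padded = x + b'\x00' * (2 - len(x))      # fill remaining lower bytes with zeros
(result,) = struct.unpack('>H', padded)  # read the 2 bytes back as a U16
print(result)  # 256, i.e. 0x0100
```

This mirrors the sentence in the help: the small value lands in the upper bytes, so a U8 `1` reads back as `256`.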
Regards.
06-10-2016 02:26 AM
@Madottati wrote: Hello all,
can anybody tell me why the result is 0?
Please check the representation of your constant input (I32) against the reference input typedef (U16).
Try making both U16 and you will get exactly your input number as the output.
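This is why the I32 constant type casts to 0 while a U16 constant survives. A hedged Python analogy (the helper `type_cast_to_u16` is my own sketch of the rule quoted from the Typecast help, assuming LabVIEW's big-endian flattening):

```python
import struct

def type_cast_to_u16(data: bytes) -> int:
    """Mimic LabVIEW Type Cast to U16: keep the UPPER bytes when x is
    larger than the target, zero-fill the lower bytes when it is smaller."""
    return struct.unpack('>H', data[:2].ljust(2, b'\x00'))[0]

# Wiring an I32 constant (value 3) straight into Type Cast-to-U16:
i32_bytes = struct.pack('>i', 3)     # b'\x00\x00\x00\x03'
print(type_cast_to_u16(i32_bytes))   # 0 -- only the upper two (zero) bytes survive

# Coercing the constant to U16 first, then type casting:
u16_bytes = struct.pack('>H', 3)     # b'\x00\x03'
print(type_cast_to_u16(u16_bytes))   # 3 -- sizes match, value preserved
```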
06-10-2016 02:27 AM
08-31-2018 02:56 PM
Yah, I think Sabri has a good explanation here. The size of the underlying data type is important. True, you can simply wire virtually any numeric (dbl, uint) to the U16 enum and LV will make it work. But the result is probably 'brittle' across variations in LV (imho). Also, LV whines a bit by putting a red dot on the enum input indicating a type mismatch. The best approach is to match the size of the enum (U16) first (as Sabri says), then do the type cast using a constant created from the target enum. This works for me.
08-31-2018 03:45 PM
If you have LabVIEW 2018 (not sure if it's in 2017), try Coerce to Type.
mcduff
08-31-2018 04:15 PM
I can confirm the technique works in LV 6.1, 7.1, 8.5.1, 2016, and 2017. (I've been coding for a while ... 🙂 ) I'm not sure about 2018. I think the issue is the behavior of the "Type Cast" operator. If you don't mind a red dot here and there, just use the "To U16" operator (for an enum). But if you're using the "Type Cast" operator, you should match/coerce the length before you type cast. If you do that, no red dots and no errors.
08-31-2018 04:23 PM - edited 08-31-2018 04:28 PM
I mentioned a "new" function in 2018; I believe it has been around longer, just not exposed. The technique you showed works; I am just illustrating another one that does not have the pitfalls of Type Cast and does not require you to worry about matching types.
mcduff
edit: tried back-saving a 2017 version, check if it works
08-31-2018 04:38 PM
Yah, roger that. NP.
Cheers!
08-31-2018 05:59 PM - edited 08-31-2018 06:16 PM
@pbisson wrote:
Also, LV whines a bit by putting a red dot on the enum input indicating a type mismatch.
That's not whining; it is simply a conversion. A conversion is not the same as a typecast, and the typecast, while doing what you want IF and only IF both data types have the same size, is in fact not the right function for this. The conversion is the right one; there just wasn't an official conversion node in earlier LabVIEW versions that could explicitly convert to an enum.
The red dot can indicate potential trouble if it happens, for instance, on large arrays. But if you place a conversion node to replace a red dot, you win nothing, because the red dot simply is an implicit conversion. And Type Cast is in many ways a pretty costly function, unlike its C counterpart, which is often just a compile-time hint to the compiler. The LabVIEW version does all kinds of memory-size checks, padding, and truncating of the operands to avoid crashes and inconsistent results, so it costs more CPU to perform than a simple conversion.
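To make the conversion-vs-typecast distinction concrete, here is a rough Python analogy (my own sketch, assuming LabVIEW's big-endian flattening): a conversion preserves the *value*, while Type Cast reinterprets the *bytes*.

```python
import struct

x = 1.0

# Conversion (what the red-dot coercion or "To U16" does): value preserved.
converted = int(x)                                  # 1

# Type Cast (byte reinterpretation): the IEEE-754 bytes of 1.0 are
# b'\x3f\xf0\x00\x00\x00\x00\x00\x00'; a typecast to U16 reads the
# upper two of those bytes as an integer.
dbl_bytes = struct.pack('>d', x)
(type_cast,) = struct.unpack('>H', dbl_bytes[:2])   # 0x3FF0 = 16368
print(converted, type_cast)
```

Same input, wildly different results, which is why the conversion node (or the red-dot coercion) is the right tool when you want the value `1` to stay `1`.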
This picture gives you an example where the red dot would indicate a problem. The upper two functions are really equivalent, as the red dot simply indicates an implicit conversion function. As you can see, the intermediate double array that is created, while only temporarily needed, will increase the memory use of the algorithm considerably, unlike in the lowest example, where that array never needs to be created because the conversion is done explicitly inside the loop on the scalar.
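The picture itself is not reproduced here, but the memory argument can be sketched in text (a loose Python analogy, not LabVIEW semantics): coercing a whole I32 array at a DBL input materialises a temporary DBL array twice the size, while converting each scalar inside the loop never does.

```python
import array

i32_data = array.array('i', range(100_000))  # ~400 KB of I32 samples

# Red dot on an ARRAY input = implicit conversion of the whole array:
# a temporary DBL copy (~800 KB, twice the size) exists all at once.
as_dbl = [float(v) for v in i32_data]
total_a = sum(as_dbl) * 0.5

# Explicit conversion on the SCALAR inside the loop: only one DBL value
# is live at a time, so the big intermediate array is never created.
total_b = 0.0
for v in i32_data:
    total_b += float(v) * 0.5
print(total_a == total_b)  # same result, very different peak memory
```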