
LabVIEW


Unexpected behavior with Coerce To Type bullet (converting integer to enum)

Solved!

I've used the "Coerce To Type" bullet in production code for a few years, believing it to be generally vetted for certain uses, even if it isn't exposed on any palettes.  (If you're unfamiliar with it, stop now and read a seven-year-old thread on the IE describing it here.)  My use for it has really been limited to conversion of an integer into an enum type.  The typical need involves receiving remote data serially and wanting to convert a recovered value back to the originator's significance, like a message ID byte, error index, etc.  My evolved practice is to always define a "sentinel" value as the last enum entry, after the last "legitimate" value, such that an out-of-range value is pinned to that entry (e.g., "invalid (>=0x1A)" would fall after an enum where the last expected value would be 0x19).
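LabVIEW diagrams can't be quoted inline, but the sentinel pattern described above can be sketched in Python. The enum, its values, and the function name here are hypothetical stand-ins, not anything from the attached VI:

```python
from enum import IntEnum

# Hypothetical message-ID enum mirroring the "sentinel as last entry" pattern:
# legitimate values first, then a final entry that absorbs anything out of range.
class MsgID(IntEnum):
    HELLO = 0
    DATA = 1
    STATUS = 2
    INVALID = 3  # sentinel: "invalid (>= 3)"

def coerce_to_msgid(raw: int) -> MsgID:
    """Pin an out-of-range (non-negative) integer to the sentinel entry."""
    return MsgID(min(raw, int(MsgID.INVALID)))
```

This is the behavior Coerce To Type was believed to have: any received value beyond the last legitimate entry lands on the sentinel instead of leaking through.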

 

The attached VI (I used LV2015, but it looks like the behavior is present at least in 2014 and 2016) demonstrates some behavior of LV enumerations in general, as well as of case structures driven by enums, that I hadn't noticed until now, and it has implications for the use of Coerce To Type.

 

The tl;dr here is that the described behavior of CTT - it coerces a numeric into the range of the enum, and "pins" to the last enum entry if the numeric is beyond the enum's range - is NOT what happens.  The enum wire retains the underlying integer value outside its defined range.  Enum indicators on such a wire APPEAR to be pinned to the last entry (even their digital displays do too), but that's misleading.  And a case structure driven by an out-of-range enum (which doesn't need a default case if all enum values have cases) will select the "default" case IF one is specified, but if there is no default, will select whichever case happens to be first in the case order.  (I never knew that rearranging case display order on the block diagram had any code generation effects - EVER.)
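A minimal Python model of the observed behavior may make the distinction concrete (this is a sketch of the symptoms, not LabVIEW code): the wire keeps the raw integer, the indicator merely *displays* it pinned, and the case structure falls back to Default if one exists, else to the first case:

```python
LAST_ENTRY = 3  # index of the final enum entry

def indicator_display(wire_value: int) -> int:
    # The indicator APPEARS pinned to the last entry...
    return min(wire_value, LAST_ENTRY)

def case_structure(wire_value: int, cases: dict, default=None):
    # ...but the wire still carries the raw value, so dispatch misses.
    if wire_value in cases:
        return cases[wire_value]
    if default is not None:
        return default          # Default case wins, if one is specified
    return next(iter(cases.values()))  # else: whichever case is first in order
```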

 

Would love to hear feedback on this.

 

Dave

David Boyd
Sr. Test Engineer
Abbott Labs
(lapsed) Certified LabVIEW Developer
Message 1 of 15

The bug is there in 2017 also, but if you have 2017, use the Number to Enum VIM; it appears to give the correct result.

Snip1.png


mcduff

 

Message 2 of 15

And LV 2018 should have the bug fixed.

Message 3 of 15

Thanks, Stephen, for confirming this is behavior warranting a fix.  Frankly, I'm surprised to find no prior reporting on this.  I only noticed it because this was the first time I did more than just display the cast output - I actually branched code based on it.

 

Now comes the inevitable follow-up question about where the fix lies within LabVIEW - care to lift the curtain on this? Is it the existence of "enums with out-of-range values", the compiler's behavior with respect to the case selector, or the CTT node itself?

 

I understand if this is not something you want to get into.  But of course I'm curious.

 

Dave

Message 4 of 15

And in the meantime, would Variant to Data still be the workaround, or perhaps even the preferred method?

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 5 of 15

Kevin,

 

I can confirm that Variant to Data, as a substitute, works properly.  In hindsight, I probably should have used it for this purpose.

 

(The bullet cast just feels better.  Why am I bringing variant data into the mix?) Smiley Frustrated

Message 6 of 15

To avoid Variants, maybe use a "To unsigned word"  node followed by a "type cast" instead?  That's what I use and it seems to meet the required behavior.

Message 7 of 15

Kyle,

 

Using the Type Cast (aka square-peg-in-a-round-hole) node absolutely requires that the enum's data size (which can be set to 8, 16, or 32 bits) matches the numeric's size.  In addition to being the classic solution, it is also a classic stumbling point for unwary/new LabVIEW programmers - size mismatches result in lost values.  Freshly created enums default to 16-bit representation, so the "To U16" bullet is a good first fix, though not guaranteed.
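The size-mismatch hazard can be modeled in Python with the struct module (a sketch of flat-data reinterpretation under the assumption of big-endian flattening, as LabVIEW uses; not actual LabVIEW code):

```python
import struct

# Type Cast reinterprets flat data. Cast a U32 onto a 16-bit enum and only
# the first (high) two bytes survive -- the value is lost.
flat_u32 = struct.pack(">I", 0x0019)        # b'\x00\x00\x00\x19'
bad = struct.unpack(">H", flat_u32[:2])[0]  # 0x0000 -- value gone

# Converting to U16 first (the "To U16" step) makes the sizes match:
flat_u16 = struct.pack(">H", 0x0019)
good = struct.unpack(">H", flat_u16)[0]     # 0x0019
```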

 

Thus the Coerce To Type bullet was born, intended to preserve the value during retyping under all circumstances.

Dave

Message 8 of 15
Solution
Accepted by DavidBoyd

The bug is in the Coerce To Type primitive, specifically in the way it handles enums. I fixed it back in August... my fix should be in LV 2017 SP1. The compiler should never allow out-of-range enums, ever, but this node was doing that.

 

If you want the gory details... a coercion dot on a numeric-to-enum terminal should always pin the value. The vast majority of the time, the wire/terminal generates the pinning code. But there are some primitives that say, "I know how to handle out of range values myself... dear Wire, please don't pin it for me. Just pass me the value and I'll check it, and that way the value doesn't get stomped on, which may mean that we save on a data copy, which might save a whole array of data copies in some cases." Every primitive declares itself to have one behavior or the other -- the Coerce To Type declared (wrongly) that the value didn't need to be pinned by the wire... obviously, Coerce To Type has to actually modify the value in order to pass it downstream. Nodes that don't need the pinning keep their behavior internal. Nodes that do need the pinning are ones that pass the value downstream.
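A toy model of the declaration AQ describes (class and function names are invented for illustration; this is not LabVIEW internals):

```python
LAST_ENTRY = 3  # index of the final enum entry

class Primitive:
    # Normal case: the wire/terminal generates the pinning code.
    wire_pins_value = True

class CoerceToType(Primitive):
    # The bug: it declared "I'll handle out-of-range myself" but never
    # actually pinned, so raw values leaked downstream.
    wire_pins_value = False

def deliver(raw: int, prim: Primitive) -> int:
    """What reaches the node: pinned by the wire, or passed through raw."""
    return min(raw, LAST_ENTRY) if prim.wire_pins_value else raw
```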

 

But the real mystery... why was AQ so interested in this unreleased and undocumented primitive that he fixed this bug AND the crashing bug caused when you use this primitive with LV classes AND put together an actual test suite for the primitive? That is a mystery for the ages. Not to change the subject or anything, but sometime in the next two weeks, http://ni.com/beta should open up to sign up for LV 2018 beta testing. If you're interested in that sort of thing.

Message 9 of 15

@AristosQueue (NI) wrote:

But the real mystery... why was AQ so interested in this unreleased and undocumented primitive that he fixed this bug AND the crashing bug caused when you use this primitive with LV classes AND put together an actual test suite for the primitive? That is a mystery for the ages. Not to change the subject or anything, but sometime in the next two weeks, http://ni.com/beta should open up to sign up for LV 2018 beta testing. If you're interested in that sort of thing.


Oh you big tease! Smiley Very Happy Heart

Message 10 of 15