Upper Limit In Range and Coerce

I think you misread my post, Darin K. I was saying that checking an array index was a better example to justify the default settings than the one RavensFan posted, but I suppose it doesn't matter all these years later... 🙂

0 Kudos
Message 11 of 14
(1,720 Views)

Yes, I know this post is quite old.  The function itself is very old.  I decided to add a note since this page is the first hit when searching "labview in range and coerce bug".  The inclusive lower limit and non-inclusive upper limit defaults make me think "Great!  Defaults are already set up for coercing to a valid array index."  Unfortunately, I've fallen for this more than once (shame on me, as the saying goes).  Darin K's code snippet (from an earlier post) illustrates the "obvious" reason for the asymmetry in the limit defaults:

 

CoerceExample.png

 

Darin's code is correct: the In Range? output works correctly in this case, and is fine if you want to error out or do some other special handling.  But if your purpose is to force x to a valid index, assuming that the array-size use case above is the reason for the limit-selector asymmetry leads to the following flawed code:

inRangeAndCoerceArrayIncorrectUsage.png

 

The correct code takes a bit more work to write, because the upper limit must be wired as array size - 1:

inRangeAndCoerceArrayUsage.png

(Aside: changing the upper limit diamond to be inclusive has no effect on the coerced output...it just helps readability.)
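In text form, the correct pattern boils down to clamping against size - 1 rather than size.  A quick Python sketch (my own, purely illustrative; the function name is made up):

```python
def coerce_to_valid_index(x, size):
    """Clamp x into [0, size - 1], mimicking the coerced(x) output.

    The upper limit must be size - 1, because coerced(x) always
    clamps into [lower, upper] no matter how the diamonds are set.
    """
    return min(max(x, 0), size - 1)

data = [10, 20, 30, 40]
print(coerce_to_valid_index(9, len(data)))   # 3 -- the last valid index
print(coerce_to_valid_index(-5, len(data)))  # 0
```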

 

The context help states that the coerced value will "fall within the range":

inRangeAndCoerceContextHelp.png

 

Perhaps Clinton's lawyers could help us realize that to "fall within the range" does not necessarily mean In Range?  But to my thinking, the context help is only correct when both limits are included.  Seems like this should be the default.

 

As pointed out (kudos!), the detailed help gets the details correct:

"If x is not in range and the function is in Compare Elements mode, the function converts the value to the same value as upper limit or lower limit."

Another poster pointed out that coercing into an exclusive range for floats by nudging with machine epsilon would likely be a surprise (perhaps pleasant, perhaps unpleasant, but hard to observe either way).  I agree with that.  It would be buggy and arbitrary for coerced(x) to be in range for integers, but not for floats.

 

For the benefit of future googlers who may fall into this same snare, I'll share the rule that (hopefully!) will now stick in my head (and that I wish were in the context help):

The limits selector (filled in/hollow diamond) has no effect on the coerced(x) output.  The limits configuration only affects the In Range? output.
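To make the rule concrete, here is a rough Python model of what I believe the function does in Compare Elements mode (my own sketch, not NI's implementation; include_lower/include_upper stand in for the filled/hollow limit diamonds):

```python
def in_range_and_coerce(x, lower, upper, include_lower=True, include_upper=False):
    """Approximate model of In Range and Coerce (Compare Elements mode)."""
    # In Range? honors the filled/hollow diamond settings...
    above = (x >= lower) if include_lower else (x > lower)
    below = (x <= upper) if include_upper else (x < upper)
    in_range = above and below
    # ...but coerced(x) always clamps into [lower, upper], ignoring them.
    coerced = min(max(x, lower), upper)
    return coerced, in_range

# Default diamonds, with the upper limit wired to an array size of 5:
print(in_range_and_coerce(7, 0, 5))  # (5, False) -- coerced(x) is an invalid index!
```

Flipping include_upper changes only the boolean output, never the coerced value, which is exactly the snare described above.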

Message 12 of 14
(1,695 Views)

Of course the problem with all this is that getting the last valid element if the index is too high might lead to unexpected results. Why would the last element be so special?

 

If the control for desired index is non-negative, we could also use min&max (simpler, because there are fewer inputs and outputs ;))
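In text form, that Min & Max pattern for a non-negative index control amounts to a single min against size - 1 (a sketch; the function name is mine):

```python
def pan_index(desired, size):
    # Assumes desired >= 0, so only the upper bound needs clamping.
    return min(desired, size - 1)

print(pan_index(99, 10))  # 9
```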

 

0 Kudos
Message 13 of 14
(1,685 Views)

Thanks.  But as you mentioned, that pattern doesn't coerce negative values to zero, which I also want.  I'm providing a control that gives users a no-look way to pan around their data.  They can pan using arrow keys (on an index control, for example) rather than mouse drag or click in juuust the right spot.  Having the UI stop at the first or last frequency (or first or last plot) in their data makes perfect sense.

 

Usability is why I'm doing this kind of thing.  There are likely many other uses.  I'd reach for this pattern anytime I want to do something "reasonable" instead of erroring out.

 

Anyway, I probably got a bit carried away on this post.  This isn't a huge thing.  I just noticed others made the same mental leap I did, thinking that the excluded upper limit default was for array size purposes.  If it is, it doesn't work for coerced(x).

0 Kudos
Message 14 of 14
(1,673 Views)