LabVIEW Idea Exchange

moderator1983

Random number function with option of including/excluding the endpoints

Status: Declined

National Instruments will not be implementing this idea. See the idea discussion thread for more information.

I have seen the request for a Random Number function improvement; further to that, my wish is to have an option to include or exclude the endpoints (just like the In Range and Coerce function).
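As a text-language sketch of the idea (the names and flags here are hypothetical, chosen to mirror the include/exclude inputs on In Range and Coerce, not any actual LabVIEW API):

```python
import random

def random_in_range(lo=0.0, hi=1.0, include_lo=True, include_hi=False):
    """Return a pseudo-random DBL in the given range, with each endpoint
    included or excluded according to the flags. Exclusion is done by
    re-rolling, which is vanishingly rare for floating-point ranges."""
    while True:
        # random.random() returns a value in [0, 1)
        x = lo + (hi - lo) * random.random()
        if x == lo and not include_lo:
            continue  # re-roll to exclude the lower endpoint
        if x == hi and not include_hi:
            continue  # re-roll to exclude the upper endpoint
        return x
```

The default flags match the current function's [0, 1) behavior, so the option would be backward compatible.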

 

Random Number with option


I am not allergic to Kudos, in fact I love Kudos.

 Make your LabVIEW experience more CONVENIENT.


14 Comments
GregSands
Active Participant

I don't think this is meaningful at all for real (DBL) random numbers. (The current Random function can return 0 - test it and get back to me when it actually does).

 

But if you were talking about a small integer range, then it might be useful.

AristosQueue
Proven Zealot

It is meaningful for real numbers, and is occasionally useful. It is more useful for integers, but saying "I want floating point numbers right up to 0 but not including 0" often avoids annoying asymptotes.

Darin.K
Trusted Enthusiast

> It is meaningful for real numbers, and is occasionally useful. It is more useful for integers, but saying "I want floating point numbers right up to 0 but not including 0" often avoids annoying asymptotes

 

I am curious which statistical measure used for pseudo-random number shows any "meaningful" difference between [0,1) and (0,1).

 

Assuming there is such a measure, which application sensitive to it would be relying on the dice function?  For all we know it is a simple wrapper around the junky rand() function.  (At least the Uniform White Noise VI has some documentation).

 

I see some utility for integers, but not much; just wire the limit you really want.

 

AristosQueue
Proven Zealot

It isn't a statistical measure that shows any advantage. It's the fact that I don't have to check the random number for equality to zero and then re-roll it if it is being passed downstream to some math function that is doing divide or otherwise limits if given zero. I've had cause to use that a couple times. Not very often, but occasionally.
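The check-and-re-roll workaround described here can be sketched in a few lines (a hypothetical illustration, not LabVIEW code):

```python
import random

def nonzero_random():
    """Roll until the result lands in (0, 1) rather than [0, 1),
    so downstream math (e.g. 1/x or log(x)) never sees zero."""
    x = random.random()
    while x == 0.0:
        x = random.random()  # re-roll the astronomically rare zero
    return x
```

A built-in endpoint option would simply fold this loop into the primitive so callers do not have to remember it.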

GregSands
Active Participant

Surely it makes more sense to test for it explicitly on those rare occasions, rather than slow down the default.  But it's hard to know what the penalty would be without, as Darin says, documentation on how the random number is generated.

Darin.K
Trusted Enthusiast

You do understand that the chances of that function returning 0 are much lower than me winning the Lotto.  Twice.  This includes the fact that I do not even play the Lotto.

AristosQueue
Proven Zealot

> You do understand that the chances of that function

> returning 0 are much lower than me winning the Lotto. 

> Twice.  This includes the fact that I do not even play the Lotto.

 

Those chances rise significantly as soon as the code is deployed on a production machine and approach near certainty as the value of the process under control increases. And when it does crash, it will be closed as "Not Reproducible" because no one is looking for the random function to return zero.

AristosQueue
Proven Zealot

To be clear -- I don't have any use cases for floating-point that aren't checking for zero, and I've only had reason to do that a couple times in my whole career.

Darin.K
Trusted Enthusiast

> Those chances rise significantly as soon as the code is deployed on a production machine and approach near certainty as the value of the process under control increases. And when it does crash, it will be closed as "Not Reproducible" because no one is looking for the random function to return zero.

 

I would probably suspect a cosmic ray hit my CPU causing bit errors before I would suspect the random number generator returned zero.

 

Let's break out the envelope here for some quick math.

There are 2^64 possible DBLs, and roughly half are between -1 and 1, so 1/4 are between 0 and 1.  That means 2^62 possible values which I will peg at roughly 10^19.

If your production machine is rolling the dice 1 billion times per second, then you expect to hit zero once in 10^10 seconds; one year is pi x 10^7 seconds, so that is once every 300 years or so.
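The envelope arithmetic above, using the same rounded figures (10^19 rather than the exact 2^62), works out as follows:

```python
# Back-of-the-envelope check of the odds quoted above, using the
# rounded figures from the comment, not exact counts of DBLs.
values_between_0_and_1 = 1e19   # ~2^62 possible values, pegged at 10^19
rolls_per_second = 1e9          # one billion dice rolls per second
seconds_per_year = 3.14e7       # roughly pi x 10^7 seconds

seconds_to_expect_zero = values_between_0_and_1 / rolls_per_second  # 1e10 s
years_to_expect_zero = seconds_to_expect_zero / seconds_per_year    # ~318 years
```

With the exact 2^62 the figure drops to roughly 150 years, which does not change the conclusion.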

 

Near certainty?  Not unless your production machine is a supercluster running a version of LV I am unaware of.

 

And as GregS mentions, if one bug per few hundred years is too much risk for you to take, the proper course is to validate the input of the affected function.  Output validation?  What good is that, you have no control over where that output goes.  Input validation?  Yes, you do have control over what comes into your function.

AristosQueue
Proven Zealot

I tend to treat all errors that I know could happen as something worth at least considering a fix for, and if the fix is trivial enough, then do it up front (i.e. test for zero and re-roll if needed). Cosmic rays could flip a bit -- if it was possible to add a Boolean check and rule that out for my whole program, I'd suggest we should all add that Boolean check. That unfortunately requires significant hardware, so I don't worry about it.

 

It's not that it is too much risk to take; it's that the risk was so easy to eliminate.