LabVIEW


Creating random array without for loop

Solved!
Go to solution

 Hi,

In all the examples that I found, a for loop is used to create an array or vector. Is there a more efficient way to do it than adding the elements one at a time with a for loop?

I can't believe there is no way to do rand(10000,10000) without making a long slow for loop.

Thanks for the help.

RMT

Message 1 of 8
Solution
Accepted by topic author RaphaelMT

For loops aren't really all that slow, but yes, there are functions to be found in the function palette under Signal Processing-->Signal Generation.  In addition to the Uniform Distribution that I suspect you'd get from "rand(1000,1000)", there are other kinds of noise distributions available.  Feed them a # of samples and other appropriate parameters and get back an array filled with that particular kind of random distribution.

 

Note: if rand(1000,1000) produces a 1000x1000 two-dimensional array, the efficient LabVIEW approach would be to generate a 1x1000000 distribution with one of the above function calls, then call "Reshape Array" to make it 1000x1000.  LabVIEW won't actually copy the data around; it will just know to treat that big data space of a million values as though it has a 1000x1000 layout.
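The same flat-then-reshape idea can be sketched in NumPy (a text-language analogy of my own, not LabVIEW's actual implementation; LabVIEW itself is graphical):

```python
import numpy as np

# Generate 1,000,000 uniformly distributed samples as a flat 1-D array,
# analogous to feeding "# of samples" = 1e6 to a Signal Generation VI.
flat = np.random.rand(1_000_000)

# Reinterpret the same block of memory as 1000x1000 -- like LabVIEW's
# "Reshape Array", NumPy's reshape returns a view, not a copy.
grid = flat.reshape(1000, 1000)

# Evidence that no data was copied: the view shares the flat array's buffer.
assert grid.base is flat
assert grid.shape == (1000, 1000)
```

The key point in both worlds is the same: one big allocation plus a cheap reinterpretation of its layout, not a million-element copy.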

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 8

@RaphaelMT wrote:

 Is there a more efficient way to do it than adding the element one at the time with a for loop? 


In addition to what has been said: no CPU can create such an array fully in parallel, so no matter how it is done, loops are in there somewhere, even if you don't see them! A plain stack of FOR loops is competitive. LabVIEW knows exactly how much memory to allocate, does so once at the start, then fills the allocation with data. If you run it again with the same size, the allocation already exists and does not need to be repeated.

 

FOR loops are one of the most efficient structures. If you don't want to see them, wrap them into an inlined subVI and it will be reduced to a single function. 🙂

 

The above assumes that you are doing it right, using autoindexing. (Of course, if you build an ill-conceived Rube Goldberg construct with shift registers initialized with an empty array and elaborate use of "Insert Into Array", or push the array in and out via value properties or local variables, all bets are off. So think first.)
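The difference between autoindexing and the Rube Goldberg pattern can be sketched in Python (a hypothetical stand-in for the graphical code, not LabVIEW itself):

```python
# "Autoindexing" analogue: the size is known up front, so the whole
# result is allocated once and filled in place.
def build_preallocated(n):
    out = [0.0] * n            # single allocation, like a FOR loop output tunnel
    for i in range(n):
        out[i] = i * 0.5
    return out

# "Rube Goldberg" analogue: start empty and grow by concatenation, like a
# shift register plus "Insert Into Array" -- each step copies everything
# built so far, giving O(n^2) total work instead of O(n).
def build_by_growing(n):
    out = []
    for i in range(n):
        out = out + [i * 0.5]  # copies the whole list every iteration
    return out

# Both produce the same result; only the allocation behavior differs.
assert build_preallocated(1000) == build_by_growing(1000)
```

The outputs are identical, which is exactly why the anti-pattern is easy to miss: it works, it is just quadratically slower as the array grows.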

 

Message 3 of 8

Thanks for the clarification. I'm more used to MATLAB, where avoiding for loops usually gives far better computing time.

Message 4 of 8

Thanks.

Message 5 of 8

There is a small problem with generating random numbers in a parallel loop: the dice need a seed, and that seed changes automatically on each call.  The array will still contain the same pseudo-random values, but they may be in a randomized order depending on the OS scheduler.
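The seeding point can be sketched with Python's `random` module (my own analogy, not LabVIEW's actual PRNG): a fixed seed pins down the sequence of values, but when parallel consumers pull from one stream, which consumer gets which value is up to the scheduler.

```python
import random

# Same seed -> same pseudo-random sequence, no matter when it runs.
rng_a = random.Random(42)
rng_b = random.Random(42)
assert [rng_a.random() for _ in range(5)] == [rng_b.random() for _ in range(5)]

# With parallel consumers of one shared stream, the *set* of values is
# still fixed by the seed, but the order they land in the output array
# depends on scheduling -- same contents, possibly shuffled positions.
```

This is why the caveat rarely matters: the statistics of the array are unchanged, only the placement of individual values can differ between runs.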

 

Normally this caveat is not a problem in practice :D

 

And yes, there is no hardware on your CPU that generates multiple pseudo-random values at the same time. You need special equipment for that.


"Should be" isn't "Is" -Jay
Message 6 of 8

So if I want to create an array in general, while using the for loop, LabVIEW knows how big my error will be before starting to fill it?

Message 7 of 8

@RaphaelMT wrote:

So if I want to create an array in general, while using the for loop, LabVIEW knows how big my error will be before starting to fill it?


It only knows how big the array will be. (The error can still be any size 🐵 😄)

 

Unlike a while loop, where the final number of iterations is unknown, a FOR loop always knows the number of iterations before it starts, and thus the resulting array size. (With conditional indexing tunnels, the output array can end up smaller, but LabVIEW still knows an absolute upper size bound to allocate, even if some of it goes unused at the end. That is still much more efficient than incremental allocation.)
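The conditional-tunnel allocation strategy can be sketched in NumPy terms (my own illustration of the idea, not LabVIEW internals): allocate the worst case up front, fill what passes the condition, trim once at the end.

```python
import numpy as np

def filtered_squares(n):
    # A FOR loop with n iterations can never output more than n elements,
    # so allocate the upper bound once...
    out = np.empty(n)
    count = 0
    for i in range(n):
        if i % 2 == 0:          # "conditional tunnel": keep only some iterations
            out[count] = i * i
            count += 1
    # ...then trim to the actual size in a single final step.
    return out[:count]

assert list(filtered_squares(6)) == [0.0, 4.0, 16.0]
```

One allocation plus one trim beats reallocating on every kept element, which is exactly the trade-off described above.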

 

The LabVIEW compiler is an absolutely amazing thing, almost magical! Start reading here or the presentation posted here.

Message 8 of 8