

find indices of elements in 1d array

Solved!

OK, quick summary for 1M-element arrays:

 

  • [In range and coerce] is slightly faster than [Abs, <1].
  • The conditional tunnel creates frequent spikes.
  • Preallocating the output, then trimming it later, is better (code #3; see the sketch after this list).
  • Building the array as in Chris's code is about 8x slower (not shown, but as expected; only OK if a match is a rare event).
  • For smaller array sizes, the differences are not significant enough to warrant more advanced code.
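
Since LabVIEW diagrams can't be pasted as text, here is a rough Python stand-in for the comparison above; the function names and the ±1 bounds are just illustrative, not a transcription of the benchmarked VIs.

```python
# Rough Python stand-in (not LabVIEW) for the two strategies compared above.

def find_indices_prealloc(data, low=-1.0, high=1.0):
    """Code #3 style: preallocate for the worst case, fill in place, trim."""
    out = [0] * len(data)          # one allocation: worst case is every element matching
    count = 0
    for i, x in enumerate(data):
        if low <= x <= high:       # range test ("In Range and Coerce" analogue)
            out[count] = i         # in-place write, no reallocation
            count += 1
    return out[:count]             # trim back to the elements actually used

def find_indices_build(data, low=-1.0, high=1.0):
    """Build-the-array style: the output grows on every match."""
    out = []
    for i, x in enumerate(data):
        if low <= x <= high:
            out.append(i)
    return out

print(find_indices_prealloc([0.5, 3.0, -0.2, 7.1]))   # [0, 2]
```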

 

Message 11 of 19
(1,625 Views)

altenbach,

 

Thanks for that; that's why you are a champion! I included the boundaries in my coerce because I thought that's what he stated; my mistake, though. Insert Into Array should have been my go-to with pre-allocation. I'll remember this blunder forever, never to repeat it!

CLD | CTD
0 Kudos
Message 12 of 19
(1,618 Views)

ChrisK88 wrote: Insert Into Array should have been my go-to with pre-allocation

No.  It is Replace Array Subset with pre-allocation.  Insert Into Array is just another version of Build Array.
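
As a loose textual illustration of the difference (Python, not LabVIEW; the names and values are made up): replacing writes into the existing buffer, while inserting grows it and shifts everything after the insertion point.

```python
# Loose Python analogue (not LabVIEW) of the distinction above.
buf = [0, 0, 0, 0, 0]        # pre-allocated buffer

buf[2] = 42                  # "Replace Array Subset": same buffer, same size, no copy
buf.insert(2, 99)            # "Insert Into Array": the buffer grows and everything after
                             # index 2 shifts, the same cost class as Build Array in a loop
print(buf)                   # [0, 0, 99, 42, 0, 0]
```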


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 13 of 19
(1,603 Views)

D'oh! 

 

Yeah, that makes much more sense: replace the array data instead of building it and carrying that memory buffer around. Looks like I'll be revisiting Show Buffer Allocations prior to posting from now on. A quick fix isn't always the right thing; just learned that the hard way, publicly..

CLD | CTD
0 Kudos
Message 14 of 19
(1,601 Views)

Just thinking outside the box; always try to find alternative solutions.

Here are two completely different approaches.

They are within a factor of 2 of the fastest, so still OK. (The WHILE version is faster than the FOR version.)

 

0 Kudos
Message 15 of 19
(1,600 Views)

My last comment in this thread..

 

So obviously, the fewer arrays you carry in any loop, the better for memory allocation. But in the case of Build Array or Insert Into Array, like I originally had, you would say that's not efficient because LabVIEW is creating a copy of the array within that loop, and if it's huge, performance takes a hit? I'm trying to understand the compiler, which is something I struggle a bit with.. But with Replace Array Subset, we are not taking a copy of the array like Build Array would, rather just replacing elements of the original array that was wired to the input?

 

Sorry for my ignorance regarding this matter; I genuinely want to be efficient and a better programmer, and this is a lesson for me..

 

EDIT:

Looking at the code again, I see I have to carry two arrays, therefore way more buffer usage than is needed if I just used "Replace Array Subset"..

CLD | CTD
0 Kudos
Message 16 of 19
(1,590 Views)

@ChrisK88 wrote:

 

So obviously, the fewer arrays you carry in any loop, the better for memory allocation. But in the case of Build Array or Insert Into Array, like I originally had, you would say that's not efficient because LabVIEW is creating a copy of the array within that loop, and if it's huge, performance takes a hit? I'm trying to understand the compiler, which is something I struggle a bit with.


If you build an array in a loop, the compiler does not know what the final size will be and needs to guess. Initially it will allocate a small amount, but whenever it runs out of space, it needs to reach out to the memory manager and request a larger contiguous chunk of memory, then copy the entire existing array into the new, larger location. (Arrays are contiguous in memory!) This process repeats regularly, but it preemptively allocates progressively larger extra space, guessing that the array will keep growing. (This is quite smart. If it had to request a new allocation for each added element, it would take forever!)
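
A tiny Python thought experiment (not LabVIEW and not the actual memory-manager behaviour; the growth factors and counts are just illustrative) of why that preemptive over-allocation matters:

```python
# Count how many "request a bigger chunk and copy everything" events a growing
# buffer needs, with and without preemptive over-allocation (illustrative only).

def reallocations(n_elements, growth_factor):
    capacity, size, reallocs = 1, 0, 0
    while size < n_elements:
        size += 1
        if size > capacity:
            capacity = max(size, int(capacity * growth_factor))
            reallocs += 1            # memory manager call + full copy of the array
    return reallocs

print(reallocations(1_000_000, 1.0))   # grow by exactly one element each time: 999,999 copies
print(reallocations(1_000_000, 2.0))   # double the capacity each time: 20 copies
```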

 

If we preallocate at the worst-case size (= all indices retained), there is only a single call to the memory manager and the entire loop operates "in place" (this is the magic word!). After the loop has finished, we need to trim back to the elements actually used, but that is also cheap.

 

Once allocated, the array size itself has basically no effect on speed. Even if you initialized with a 10M element array, the loop itself would take about the same amount of time.
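
Again only a hedged Python stand-in for that claim, not a LabVIEW benchmark: writing the same number of elements in place takes about the same time whether the preallocated buffer holds 100k or 10M elements.

```python
# Illustrative Python check (not LabVIEW): in-place writes cost the same per
# element no matter how large the preallocated buffer is.
import timeit

def fill_first(buf, n=100_000):
    for i in range(n):
        buf[i] = i                 # in-place write; no reallocation, no copy

small = [0] * 100_000
huge  = [0] * 10_000_000           # 100x larger preallocation

print(timeit.timeit(lambda: fill_first(small), number=10))
print(timeit.timeit(lambda: fill_first(huge),  number=10))   # roughly the same
```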

Message 17 of 19
(1,572 Views)

@ChrisK88 wrote:

 

Also, in my company we cannot use OpenG or any non-native LabVIEW primitives (crazy source control rules), so I always have to re-invent some stuff, like search array, where in LabVIEW it only outputs the first index, not an array of indices.


I'm sure preaching to the choir doesn't help, but OpenG has some of the most relaxed licensing possible. They basically say you can use the source any way you like, including modifying it, and use it in commercial applications. So you can actually just copy what the OpenG source is doing into a new VI and save it as your own, keeping the BSD license, of course. There is no practical reason not to allow OpenG at a company other than paranoia or stubbornness against open-source software. It has probably saved me multiple man-months (maybe a year) of work to design, develop, test, document, and maintain, and I'm just one developer. Of course, when it comes to performance, OpenG isn't the greatest, especially on array manipulation like this, and part of the reason is that you can do optimizations that OpenG can't know about. If all the values you want to find are between -1 and 1, you can do tricks like some already shown to do it more efficiently.
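
On the quoted point about search only returning the first index, here is a rough Python sketch of the usual workaround of searching repeatedly from an offset; this is illustrative only, not the OpenG implementation.

```python
# Rough Python analogue (not LabVIEW, not OpenG) of repeatedly calling a
# "search from offset" primitive to collect every matching index.
def all_indices(arr, value):
    indices, start = [], 0
    while True:
        try:
            i = arr.index(value, start)   # like Search 1D Array with a start index
        except ValueError:                # no further match found
            return indices
        indices.append(i)
        start = i + 1                     # continue searching after the last hit

print(all_indices([3, 7, 3, 1, 3], 3))    # [0, 2, 4]
```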

0 Kudos
Message 18 of 19
(1,512 Views)

I need to get clarification from upper management regarding that, because we use other open-source stuff, but we may only use it internally and not in products we "sell off", blah blah..

 

I think you hit the nail on the head with "paranoia", but I don't write the rules; I'm only forced to abide by them..

 

I have colleagues who also work for government defense agencies that have similar issues with open-source code. We have re-use libraries, but those are just typical utilities developed over years of re-writing the same functions over and over again.. From what I gather, I'm missing out on a lot of cool stuff with OpenG. I'm actually going to track down the technical and/or legal precedent that's stopping us and see if it's legit or just a mandate from someone 10 years ago.

CLD | CTD
0 Kudos
Message 19 of 19
(1,506 Views)