LabVIEW

Request: Make these VIs faster for large arrays


Well, I think I've managed it:

 

[Image: Complete 4.png]

 

This now does the same test in 17 ms! Taking the dog for a walk worked wonders; it came to me in a flash. The relative row only needs to be calculated on the first iteration, as every other iteration has to be in the same row. On the first iteration I calculate both the row and column relative data; for the remainder I only do the column. Since I initialise the array full of 0s, I can ignore the row for all other iterations, as it always equals 0.
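The VIs themselves are only posted as image attachments, so this Python sketch is just a guess at the relative-coordinate scheme being described; the function name and the exact delta convention are my assumptions. The key point is that within one matrix row, only the first 1 can produce a nonzero row delta:

```python
def relative_coords(matrix):
    """Return (row_deltas, col_deltas) for every 1 in a 0/1 matrix.
    Each delta is relative to the previous 1 (the first 1 is relative
    to position (0, 0))."""
    out_rows, out_cols = [], []
    prev_row = prev_col = 0
    for r, row in enumerate(matrix):
        first_in_row = True
        for c, v in enumerate(row):
            if v != 1:
                continue
            if first_in_row:
                # Row delta only needs computing on the first hit in a row.
                out_rows.append(r - prev_row)
                first_in_row = False
                prev_row = r
            else:
                # Same row as the previous 1, so the delta is always 0
                # (in LabVIEW this is the pre-initialised value, so no
                # write is needed at all).
                out_rows.append(0)
            out_cols.append(c - prev_col)
            prev_col = c
    return out_rows, out_cols
```

For a matrix with 1s at (0,1), (0,2), and (2,0), this yields row deltas `[0, 0, 2]` and column deltas `[1, 1, -2]`.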

 

Please, if you beat this time with your 2009 version, don't tell me. Ignorance will be bliss 😛

 

Is this the same adjustment you made, or is there still another improvement I missed?

 

Rgs,

 

Lucither.

------------------------------------------------------------------------------------------------------
"Everything should be made as simple as possible but no simpler"
Message 41 of 59

Good morning. Hey, you are still working on this! 🙂

 

 


@Lucither wrote:

On the first iteration I calculate both the row and column relative data; for the remainder I only do the column. Since I initialise the array full of 0s, I can ignore the row for all other iterations, as it always equals 0.

 

Is this the same adjustment you made, or is there still another improvement I missed?


 

The idea is good, but the bulk of the speedup is due to something else that you probably did by accident. (obscure clue ;))

 

Still, the implementation of your idea is not optimal, because you can remove that case structure again and process the row data in the outer loop, right? That should give you another couple of ms.

 

As I said, the idea "in general" is a good one, because I just used it over a cup of coffee to bring mine down to ~6 ms. 😮 😄

 

(My times are with debugging disabled; make sure you do that too, for another ms or two.)

 

I'll try it in 2009 later....

 

Did you ever try to benchmark the original two VIs (chained together) with the current 101x dataset? I gave up after 10 minutes or so. I am just curious what kind of speedup we've achieved so far; it might be a record. 😉

Message 42 of 59

Last time, I think. I made the improvements you pointed out (removing the inner structure and doing the row on the outer loop made minimal difference timewise, but is better style). After removing debugging I was getting 10 ms. Here is my FINAL version:

 

[Image: Complete 5.png]

 

Although I never highlighted it in my last post, I was aware that the main reason the code became quicker is that we no longer build an internal 1D (row, column) array element to insert into the array; we simply insert the individual values.

 

[Image: Array insertion benchmark.png]

 

When the above benchmarks are run separately, the bottom loop is 8x faster. Just the fact that we are building the 1D array to insert slows the process down considerably.
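The benchmark itself is only in the attached image, but the same effect can be sketched in Python as a hypothetical analogue (the 8x ratio is specific to LabVIEW and will not reproduce exactly here; names and sizes are made up):

```python
import time

N = 200_000
rows = list(range(N))
cols = list(range(N))

def build_pairs():
    # Slower pattern: construct a fresh 2-element (row, col) array on
    # every iteration before storing it.
    out = [None] * N
    for i in range(N):
        out[i] = [rows[i], cols[i]]
    return out

def write_scalars():
    # Faster pattern: write the two scalars straight into pre-allocated
    # flat arrays -- no per-iteration array construction at all.
    out_r = [0] * N
    out_c = [0] * N
    for i in range(N):
        out_r[i] = rows[i]
        out_c[i] = cols[i]
    return out_r, out_c

t0 = time.perf_counter(); pairs = build_pairs(); t1 = time.perf_counter()
r, c = write_scalars();                          t2 = time.perf_counter()
print(f"pairs: {t1 - t0:.3f}s  scalars: {t2 - t1:.3f}s")
```

Both loops produce the same data; only the per-iteration allocation of the small array differs.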

 

Just for your interest, I benchmarked the OP's original code as you suggested, one VI feeding the other. I truly left it to run for an hour, and it still wasn't finished. I went downtown to look at possible laptops to buy, came back, and 10 minutes later it finished: a total of 2 hours, 35 minutes and 30 seconds (or 9,330,751 ms)! If we use my best of 10 ms in LabVIEW 2009, we get an improvement of about 933,075x. If we use your best of 6 ms, we get about 1.555 MILLION times faster! I have always known that Build Array functions are bad and that care must be taken when handling arrays, but I must admit this is a shock even to me: carefully writing a small section of code can be the difference between 6 ms and 2.5 hours! If you ever wanted to highlight the importance of this to someone, this would be a great example.

 

I think the lesson here is that if you need to use the 'Get Date/Time' function to do your benchmarking, you need to rethink how you're doing it! 😄

 

Anyway, this has been fun (you're right, I am very sad 😢). It has made someone who was already aware of these problems even more so.

 

Thanks for playing along.

 

Rgs,

 

Lucither.

Message 43 of 59

 


@Lucither wrote:

Last time, I think. I made the improvements you pointed out (removing the inner structure and doing the row on the outer loop made minimal difference timewise, but is better style). After removing debugging I was getting 10 ms. Here is my FINAL version:


 

OK, I spent a few more minutes looking at all this.

On the default data, my fastest version is still about 30% faster than yours. 🙂

If I have time, I'll clean my version up and post it tomorrow night.

 

Your version has a bug:

If the input matrix is sparse and there are rows with no 1's at all, your code generates incorrect results, because your row increment cannot be larger than 1 (except for the first element). This is not a problem with the current input data, but still a fundamental flaw that must be corrected.
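Since the VIs are only shown as images, here is a hypothetical Python illustration of the failure mode being described (the delta convention is my assumption). On dense input both versions agree, but a row of all 0s exposes the bug:

```python
def row_deltas_buggy(matrix):
    # Buggy scheme: emit 1 for the first 1 found in each new row (and 0
    # within a row). The delta can therefore never exceed 1, so rows
    # containing no 1s at all are silently collapsed out of the result.
    deltas = []
    for r, row in enumerate(matrix):
        new_row = True
        for v in row:
            if v == 1:
                deltas.append(1 if (new_row and deltas) else 0)
                new_row = False
    return deltas

def row_deltas_fixed(matrix):
    # Corrected scheme: use the actual row-index difference, which is
    # larger than 1 when empty rows are skipped.
    deltas = []
    prev_row = 0
    for r, row in enumerate(matrix):
        new_row = True
        for v in row:
            if v == 1:
                deltas.append(r - prev_row if new_row else 0)
                new_row = False
                prev_row = r
    return deltas

dense  = [[1, 0], [0, 1]]           # every row holds a 1: both agree
sparse = [[1, 0], [0, 0], [0, 1]]   # middle row is empty: they diverge
print(row_deltas_buggy(sparse), row_deltas_fixed(sparse))
```

On `sparse`, the buggy version reports a row delta of 1 where the correct delta is 2.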

 

 

 

Message 44 of 59

 

 


 

Your version has a bug:

If the input matrix is sparse and there are rows with no 1's at all, your code generates incorrect results, because your row increment cannot be larger than 1 (except for the first element). This is not a problem with the current input data, but still a fundamental flaw that must be corrected.

 


 

Ah, of course; thanks for pointing that out. I have made the adjustment:

 

 

[Image: Complete 6.png]

 

With this mod I'm still getting 10 ms. Is yours 30% faster running in 2009?

 

I will be very interested to see your code. I am struggling to think how, if you are running it in 2009, you're still squeezing out an extra 30% improvement! Saying that, though, I was surprised when I improved on my 300 ms 😛

 

Rgs,

 

Lucither.

Message 45 of 59

I've tried to understand what it does in general, but I find the optimization interesting. Can't you cut the inner "Sum of array" (or move it outside), as you're adding the 2D array already?

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 46 of 59

Hi Yamaeda,

 

I'm not sure if you are responding to my post. If you are, and you're referring to the inner 'Sum Array': this can't be removed, as the inner for loop only loops over the values in the passed array that are 1s. This can change on every inner loop; the 'Sum Array' calculates how many 1s there are in the new row, and therefore how many times the inner loop needs to execute. That is also why it is connected to a case structure: if there are no 1s, I ignore the row and go to the next one.

 

The only way to remove the inner 'Sum Array' would be to build an array of sums externally and then auto-index them into the loop. I'm not sure if this would be more efficient.
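The pattern under discussion, sketched as a hypothetical Python analogue of the block diagram (both variants do the same total summing work, so little difference would be expected between them):

```python
def find_ones(matrix):
    # Per-row sum computed inside the loop: it plays the role of the
    # inner "Sum Array", and rows summing to 0 are skipped entirely
    # (the case structure's empty case).
    hits = []
    for r, row in enumerate(matrix):
        n_ones = sum(row)          # inner "Sum Array"
        if n_ones == 0:
            continue               # empty case: skip the row
        for c, v in enumerate(row):
            if v == 1:
                hits.append((r, c))
    return hits

def find_ones_presummed(matrix):
    # Alternative: build the array of sums externally, then auto-index
    # it into the loop alongside the rows.
    sums = [sum(row) for row in matrix]
    hits = []
    for (row, n_ones) in zip(matrix, sums):
        pass  # placeholder replaced below
    hits = []
    for r, (row, n_ones) in enumerate(zip(matrix, sums)):
        if n_ones == 0:
            continue
        for c, v in enumerate(row):
            if v == 1:
                hits.append((r, c))
    return hits
```

Either way, each row is summed exactly once, so moving the sum outside mainly changes style rather than cost.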

 

Rgs,

 

Lucither.

Message 47 of 59

Stabbing in the dark (I'm not sure it's faster, but it feels like it should be): instead of initializing an array for the loop, would it be faster to reshape the incoming one?

/Y

Message 48 of 59

It would not surprise me if you are right. Perhaps try it. Better yet, just wait until Altenbach posts his version; I'm sure if it's faster, it will be in his.

 

Rgs,

 

Lucither.

Message 49 of 59

 


@Yamaeda wrote:

Stabbing in the dark (I'm not sure it's faster, but it feels like it should be): instead of initializing an array for the loop, would it be faster to reshape the incoming one?


 

Well, my original suggestion does that. From the current benchmarks, it is about 4x slower in LabVIEW 2009.

 

Keep in mind that all the code under discussion does very (very!) little, so the loops spin extremely fast, and small changes in the code can have large effects. For example, just operating on a size-2 array instead of two scalars causes an order-of-magnitude penalty.

 

For all typical inputs, all versions are "fast enough", so this exercise is mostly academic. 😉

 

Remember, the original code took several hours for what we now do in a few ms. The original dataset was 100x smaller, so we would process it in sub-millisecond time...

Message 50 of 59