
Decimate and build 1D array into 2D array


@LukasW wrote:

I'm also curious how your benchmark works.


It's just a plain three-frame sequence feeding a chart to see the variations.
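In text form, the harness is roughly equivalent to this Python sketch (NumPy stands in for the LabVIEW array; the names, sizes, and the reshape operation under test are illustrative, not taken from the VI):

```python
import time
import numpy as np

ROWS, COLS = 8, 100000              # sizes discussed later in this thread
flat = np.random.rand(ROWS * COLS)  # the 1D input array

t0 = time.perf_counter()                                    # frame 1: start time
result = np.ascontiguousarray(flat.reshape(COLS, ROWS).T)   # frame 2: operation under test
t1 = time.perf_counter()                                    # frame 3: end time

# NumPy's .T is normally a lazy view; ascontiguousarray forces the actual
# data movement so the timing is meaningful, closer to what the LabVIEW
# primitives have to do.
print(f"{(t1 - t0) * 1e6:.1f} µs")  # one data point for the chart
```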

 

 

 

Message 11 of 14

Altenbach,

It seems there is a problem in the last loop (at least in the screenshot): the number of rows (8) is wired as both the number of iterations and the number of elements for the Array Subset function, while the number of columns (100000) is not used at all. So you are only replacing 64 elements (the first 8 columns), which can affect the benchmark results.

Message 12 of 14

@Alexander_Sobolev wrote:

Altenbach,

It seems there is a problem in the last loop....


Thanks. Yes, you are right. N needs to be wired with the number of columns, not the number of rows. Now it's about the same speed as reshape. The single-element loop is still significantly slower, though. Makes more sense.
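For anyone reading along without the screenshot, here is a rough Python analogue of the bug and the fix (illustrative only; the sizes are the 8 x 100000 case described above):

```python
import numpy as np

rows, cols = 8, 100000
flat = np.random.rand(rows * cols)

# Buggy version: N wired with the number of rows, so the loop runs only
# 8 times and replaces just the first 8 columns (8 x 8 = 64 elements).
buggy = np.zeros((rows, cols))
for i in range(rows):
    buggy[:, i] = flat[i * rows : (i + 1) * rows]

# Fixed version: N wired with the number of columns, one column replaced
# per iteration, so the whole array gets filled.
fixed = np.zeros((rows, cols))
for i in range(cols):
    fixed[:, i] = flat[i * rows : (i + 1) * rows]

print(np.count_nonzero(buggy))   # 64
print(np.count_nonzero(fixed))   # 800000
```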

 

And, yes, the decimate/build version is about 2x faster, with the disadvantage that it is not really scalable without a change in code.
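As a rough text analogue (NumPy, toy sizes, not the benchmark code), this is why decimate/build is fast but not scalable: the channel count is frozen into the diagram, whereas reshape takes it as a wired value.

```python
import numpy as np

flat = np.arange(12.0)   # 4 interleaved channels, 3 samples each (toy data)

# Decimate/build analogue: one explicit output per channel. Adding a 5th
# channel means editing the code (in LabVIEW: resizing Decimate 1D Array
# and Build Array by hand).
c0, c1, c2, c3 = flat[0::4], flat[1::4], flat[2::4], flat[3::4]
decimated = np.stack([c0, c1, c2, c3])

# Reshape/transpose analogue: the channel count is just a parameter.
n_ch = 4
reshaped = flat.reshape(-1, n_ch).T

assert np.array_equal(decimated, reshaped)
```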

Message 13 of 14

It took me a while, but here are my results:

 

[Image: Array Benchmark.jpg (benchmark timing results)]

 

I used altenbach's benchmark and included the decimate-and-build version just out of interest. It's not really applicable because scalability is more important than speed (at least if we're talking µs).

The test was run on a PXIe-8135 Target with 8 rows and 7200 columns of data.

(The results on my LabVIEW VM are pretty much the same but with a lot more jitter).

 

Would reshape and transpose be my best option according to the results? Regarding memory allocations, altenbach's column loop would be better, yet reshape and transpose runs a lot faster.
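In rough NumPy terms (an analogue only; LabVIEW's buffer allocations will not map one-to-one onto this), the trade-off looks like:

```python
import numpy as np

rows, cols = 8, 7200                 # the PXIe-8135 test case
flat = np.random.rand(rows * cols)

# Reshape + transpose: pays for an intermediate full-size buffer when the
# transpose is materialized, but runs as one optimized bulk operation.
fast = np.ascontiguousarray(flat.reshape(cols, rows).T)

# Column loop: writes straight into one preallocated buffer with no
# intermediate copy, at the cost of per-column loop overhead.
lean = np.empty((rows, cols))
for i in range(cols):
    lean[:, i] = flat[i * rows : (i + 1) * rows]

assert np.array_equal(fast, lean)
```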

 

Thanks for your input!

 

Lukas

Message 14 of 14