LabVIEW

Better way to insert a column into an array?

I'm open to new ideas. Please suggest a correction.
Message 11 of 14

@tst wrote:
  1. For those sizes, I wouldn't bother with any optimizations. Your larger array takes up less than 80 KB of RAM, which is not a lot. It should be fast either way.

I used those dimensions just to explain the situation. At the moment I don't face any memory or other performance issues, but I have come across this case a number of times with larger arrays.

I'm checking whether there is a more efficient way to accomplish this.

Message 12 of 14

@altenbach wrote:

Both methods use a buffer allocation, so they might be very similar. Since we don't know what the compiler is doing, you could also try this:

[LabVIEW snippet image not shown here]

Wouldn't it take longer to execute if the number of rows were higher? (Assuming the appending happens serially, row after row.)
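
(Illustrating that concern with a rough NumPy sketch, since LabVIEW diagrams can't be pasted as text; I'm assuming the snippet builds the result one row per iteration, and the sizes below are invented.)

    import numpy as np
    import time

    rows, cols = 2000, 100
    new_block = np.random.rand(rows, cols)

    # One row per iteration: every append reallocates and copies everything
    # accumulated so far, so total work grows roughly with rows squared.
    start = time.perf_counter()
    history = np.empty((0, cols))
    for row in new_block:
        history = np.vstack([history, row])
    print("row by row   :", time.perf_counter() - start)

    # The whole block at once: a single allocation and one copy.
    start = time.perf_counter()
    history = np.vstack([np.empty((0, cols)), new_block])
    print("single append:", time.perf_counter() - start)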


@altenbach wrote:

 

I am assuming that you only need to append once, not more and more columns as the program progresses. Is this correct? In any case, you should consider operating on the transposed version, because it is always easier to append rows than columns (rows are adjacent in memory, while appending columns requires more data shuffling under the hood).
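
(A small NumPy sketch of the layout argument, since G code can't be shown inline; the dimensions are invented. In a row-major 2D array each row is one contiguous block, so new rows can simply be copied onto the end, while new columns have to be interleaved between every existing row.)

    import numpy as np

    channels, n_samples = 8, 1000

    # channels x samples layout: consecutive samples of one channel are adjacent,
    # but one newly acquired "column" of samples is scattered across memory.
    by_channel = np.zeros((channels, n_samples))
    print(by_channel.strides)   # (8000, 8): 8000 bytes between rows, 8 between columns

    # Transposed (samples x channels) layout: all channels of one sample sit
    # side by side, so appending new samples is a contiguous block copy.
    by_sample = np.zeros((n_samples, channels))
    print(by_sample.strides)    # (64, 8)

    new_rows = np.random.rand(1, channels)
    by_sample = np.vstack([by_sample, new_rows])      # straight copy onto the end
    new_col = np.random.rand(channels, 1)
    by_channel = np.hstack([by_channel, new_col])     # must re-pack every row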

 

In any case, if you are planning to use this on much larger arrays in the future, you should simply wire up the alternatives and do a proper benchmark. Can you give us some context on how you plan to use this?
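
(If you want a quick feel for the difference before wiring up the LabVIEW benchmark, here is a throwaway NumPy comparison; the channel count and history length are made up and won't match your case exactly.)

    import numpy as np
    from timeit import timeit

    channels, history_len, block = 16, 100_000, 100
    by_channel = np.zeros((channels, history_len))     # channels x samples
    new_cols   = np.random.rand(channels, block)
    by_sample  = np.zeros((history_len, channels))     # transposed: samples x channels
    new_rows   = np.random.rand(block, channels)

    # Run each alternative the same number of times and compare the totals.
    print("append columns          :", timeit(lambda: np.hstack([by_channel, new_cols]), number=200))
    print("append rows (transposed):", timeit(lambda: np.vstack([by_sample, new_rows]), number=200))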


In my application new columns will be appended as the program runs. It is for maintaining a recent data history in a data acquisition application. Each row is for one channel, and newly acquired samples should be appended after the previously acquired samples (history).

Message 13 of 14

@AJ_CS wrote:

In my application new columns will be appended as the program runs. It is for maintaining a recent data history in a data acquisition application. Each row is for one channel, and newly acquired samples should be appended after the previously acquired samples (history).


Then I would recommend keeping the internal data structure such that new rows are appended instead of columns. Since you talk about "recent history", you might look at a fixed array size (initially all NaN) and do an "in place" solution. That would be much more efficient than constantly growing and shrinking large 2D arrays.
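
(To make the fixed-size, in-place idea concrete, here is a rough sketch in NumPy rather than G; the buffer length and channel count are invented. The LabVIEW equivalent would presumably be an array created once with Initialize Array, kept in a shift register, and updated with Replace Array Subset.)

    import numpy as np

    channels, history_len = 8, 10_000

    # Allocate the whole history once; NaN marks slots that have no sample yet.
    history = np.full((history_len, channels), np.nan)   # one row per acquired sample set
    write_idx = 0

    # Each acquisition iteration overwrites the oldest slots in place
    # instead of growing the array.
    new_block = np.random.rand(100, channels)            # 100 freshly acquired sample sets
    for row in new_block:
        history[write_idx % history_len, :] = row
        write_idx += 1

    # To view the history in chronological order, rotate by the write index.
    chronological = np.roll(history, -(write_idx % history_len), axis=0)

The write path never reallocates or copies the existing history; only the final chronological view makes a copy.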

Message 14 of 14