Efficiently appending large two dimensional data to an array

I have a two-dimensional array with 64 columns. I am reading the data from a DAQmx function, and as new data comes in I append it to the old data using the Build Array function. This process is taking 100% of the processor resources, making my program too inefficient to do other functions. From my previous postings I got suggestions to initially declare a big array and then replace the new data at an index. Could anyone verify whether this is the best solution? If it is, could anyone provide an example of how to do it? I appreciate any help.

Thank you,
Mudda
Message 1 of 6
Here is an example showing both methods (LabVIEW 7.0). The lower method will be much more efficient.

A good place for more information is Application Note 168: LabVIEW Performance and Memory Management, in particular the section starting on page 16.
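Since the attached VI won't show up as text here, a rough Python/numpy sketch of the same two ideas may help (read_block, the block size, and the read count are made-up placeholders for the DAQmx read, not part of the original example):

    # Rough sketch of the two approaches from the attached VI, in numpy terms.
    # read_block() is a hypothetical stand-in for the DAQmx Read.
    import numpy as np

    ROWS_PER_READ = 100   # illustrative values only
    N_COLS = 64
    N_READS = 1000

    def read_block():
        return np.random.rand(ROWS_PER_READ, N_COLS)  # placeholder for real DAQ data

    # Upper method: append on every iteration (Build Array in a loop).
    # Every append copies the whole accumulated array, so the cost grows as the array grows.
    data = np.empty((0, N_COLS))
    for _ in range(N_READS):
        data = np.vstack((data, read_block()))

    # Lower method: pre-allocate once, then replace a subset at a moving row index
    # (Initialize Array + Replace Array Subset). No reallocation inside the loop.
    data = np.zeros((N_READS * ROWS_PER_READ, N_COLS))
    row = 0
    for _ in range(N_READS):
        block = read_block()
        data[row:row + block.shape[0], :] = block
        row += block.shape[0]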

(edit: added reference)

Message Edited by altenbach on 04-05-2005 09:22 AM

Message 2 of 6
The one thing you'll have to watch out for when using Replace Array Subset is that you can't replace an element that isn't there. So when you get to the end of your initialized array and still need to add more to it, you'll need to use Insert Into Array to add some more space.

I've modified the example a bit to show one way to do this. Doing it this way is still much more efficient than inserting on every iteration, since the memory allocation only happens when needed.
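The modified VI isn't reproduced as text either; in the same Python/numpy terms as the sketch above, the idea of growing the buffer only when it fills up looks roughly like this (the initial size and growth increment are arbitrary example values):

    # Grow the pre-allocated buffer only when it is actually full
    # (the VI does this with Insert Into Array); allocation is no longer per-iteration.
    import numpy as np

    N_COLS = 64
    INITIAL_ROWS = 10000   # illustrative initial allocation
    GROW_ROWS = 500        # illustrative growth increment

    def append_block(data, row, block):
        """Copy block into data starting at row, growing data only if needed."""
        needed = row + block.shape[0]
        if needed > data.shape[0]:
            extra = max(GROW_ROWS, needed - data.shape[0])
            data = np.vstack((data, np.zeros((extra, N_COLS))))  # rare reallocation
        data[row:needed, :] = block
        return data, needed

    data = np.zeros((INITIAL_ROWS, N_COLS))
    row = 0
    # inside the acquisition loop:
    #     data, row = append_block(data, row, read_block())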

Ed


Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.
Message 3 of 6
Yes, this is a way to prevent data loss if your system has no defined upper boundary. I would argue that growing it by 5 rows at a time is still very inefficient. The initial allocation should be big enough for 99% of all your typical runs, then grow the array only in the rare cases where extra room is needed.

At some point, it will no longer be efficient to keep all this data in memory. Each machine has its limits! It could be better to stream the data directly to disk and keep the stuff in memory lean.
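As a rough illustration of that streaming idea, in the same Python/numpy terms (the file name and format are invented for the example), each block can go straight to disk so the in-memory footprint stays constant:

    # Stream each acquired block straight to a binary file instead of
    # accumulating everything in memory; only one block lives in RAM at a time.
    import numpy as np

    N_COLS = 64

    with open("daq_log.bin", "ab") as f:          # file name is illustrative
        for _ in range(1000):
            block = np.random.rand(100, N_COLS)   # placeholder for the DAQmx read
            block.astype(np.float64).tofile(f)    # append raw float64 rows to disk

    # The whole run can be read back later and reshaped into rows of 64 columns:
    # data = np.fromfile("daq_log.bin", dtype=np.float64).reshape(-1, N_COLS)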
Message 4 of 6
Good point. I actually only changed it to 5 rows so he could see the new ones being added. I guess I should have mentioned that, but didn't think of it.

Definitely make the initial array as large as you think you'll ever need; that way the Insert should rarely have to run, but it will be there just in case so you won't lose any data. Of course, dumping the data to a file when the initial array gets full is a good option as well. After writing the file, you could overwrite your array with NaN again to start over, but you'll have to make your own index counter using a shift register, since the loop's iteration terminal won't reset to zero.

Initializing it with the 'NaN' value will give you an easy way to detect where your real data ended.
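A small sketch of that NaN trick, again in Python/numpy terms (the buffer size is arbitrary): rows that were never overwritten stay all-NaN, so the first such row marks the end of the real data.

    # Initialize the buffer with NaN; untouched rows stay all-NaN, so the first
    # all-NaN row tells you where the real data ends. Track `row` yourself
    # (in LabVIEW, a shift register, since the loop's iteration terminal won't
    # reset to zero after you dump the buffer to a file and start over).
    import numpy as np

    N_COLS = 64
    buf = np.full((10000, N_COLS), np.nan)   # illustrative size

    # ... fill buf[0:row, :] with acquired data as it arrives ...

    empty_rows = np.flatnonzero(np.all(np.isnan(buf), axis=1))
    end = empty_rows[0] if empty_rows.size else buf.shape[0]
    real_data = buf[:end, :]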

Ed


Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.
Message 5 of 6
Thanks to both of you guys; that helped me a lot.

mudda.
Message 6 of 6