using Version 6.1 on Win 2000. I have attached a VI that takes a 2D array of data and builds six XY data sets to graph on a single XY graph. This VI works, however I am concerned that it will get slow when the data set starts to get some size, especially since this function is called between one and ten times per second depending on the sample rates. Any suggestions on improving the efficiency of this VI are welcome. Or any ideas on getting the data to the XY graph a better way.
Yes, your program can be translated into much simpler code (look ma, no loops! 🙂 see attached image), and I would expect it to be somewhat more efficient.
What is an upper estimate of the array sizes? The main problem is that all your arrays are constantly growing, so you would be well advised to find a better solution if the arrays can get large. For example, you could keep only the last N points in a FIFO buffer of constant size (initialize it with all NaN, and the unused slots won't get graphed). (edit: typo)
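To illustrate the idea outside of LabVIEW, here is a minimal text-language sketch (Python, with a hypothetical buffer size N): a fixed-size FIFO pre-filled with NaN, where each new sample pushes the oldest one out, so memory never grows no matter how long the acquisition runs.

```python
from collections import deque

N = 8  # hypothetical buffer size: the max number of points you want plotted

# Pre-fill with NaN so the unused slots are skipped by the graph
buf = deque([float("nan")] * N, maxlen=N)

def add_point(y):
    """Append a sample; the oldest value falls off the front automatically."""
    buf.append(y)

for y in [1.0, 2.0, 3.0]:
    add_point(y)

print(list(buf))  # five NaN slots followed by 1.0, 2.0, 3.0
```

In LabVIEW the same effect is usually built with Initialize Array plus Rotate 1D Array and Replace Array Subset, but the principle is identical: the buffer is allocated once and its size never changes.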
Ideally, if you are dealing with very large sets of data, you should allocate the memory beforehand.
The best way in terms of memory is to initialize an array at least as large as the maximum amount of data you expect to have. Once this is done, you can insert and pull out data as desired. By initializing the array you allocate the full memory chunk up front and ensure that you won't bog down the processor with repeated reallocations as you add new information.
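In LabVIEW terms this means Initialize Array once and Replace Array Subset thereafter, rather than Build Array in a loop. A rough Python sketch of the same pattern (the names MAX_POINTS and insert_sample are hypothetical) looks like this:

```python
MAX_POINTS = 1000  # hypothetical upper bound on the data you expect

# Allocate the full buffer once, up front (like Initialize Array)
data = [float("nan")] * MAX_POINTS
count = 0  # how many slots are actually filled so far

def insert_sample(value):
    """Write into a preallocated slot instead of growing the list
    (analogous to Replace Array Subset in LabVIEW)."""
    global count
    if count < MAX_POINTS:
        data[count] = value
        count += 1

for v in range(5):
    insert_sample(float(v))
```

The buffer's length never changes while samples come in, so no memory is reallocated on each call; only the fill index advances.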
Another thing to make sure of: don't display the data until absolutely necessary. Presenting the data on the front panel uses some of your resources and slows things down.