I have a VI set up which acquires data into an XY graph. The axes are saved independently. Currently, when I write the arrays used for my XY graph to a file, it gives the x and y coordinate for each point. The x-coordinates are ordered appropriately and correspond correctly to the y-values, but they repeat with each scan, e.g.:
x    y
0    0.2
1    0.3
2    0.4
3    0.5
0    0.21
1    0.32
2    0.4
... etc.
What would be the best method of averaging the repeated scans? Eventually, I will have to distinguish points using a third coordinate (-1, 0, or 1), and this program will have to handle that as well, which is a further complication.
I apologize since this seems like it ought to be routine, but I have always had extreme trouble dealing with arrays in LabVIEW, especially with establishing indices in multidimensional arrays.
Thank you for whatever advice you can provide.
Please find attached my current VI so perhaps you can see better what I mean.
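To make the averaging being asked about concrete: the task is to collapse the repeated x-sweeps into one averaged y-value per x. Since a LabVIEW diagram can't be shown inline, here is a rough Python sketch of that logic; the function name and data are illustrative only, not from the attached VI.

```python
# Group y-values by their repeated x-coordinate and average each group.
from collections import defaultdict

def average_scans(xs, ys):
    """Average the y-values that share the same x-coordinate across scans."""
    sums = defaultdict(float)   # running sum of y per distinct x
    counts = defaultdict(int)   # number of y-values seen per distinct x
    for x, y in zip(xs, ys):
        sums[x] += y
        counts[x] += 1
    # one averaged (x, y) pair per distinct x, in ascending x order
    return [(x, sums[x] / counts[x]) for x in sorted(sums)]

# the example data from the post: two partial sweeps over x = 0..3
xs = [0, 1, 2, 3, 0, 1, 2]
ys = [0.2, 0.3, 0.4, 0.5, 0.21, 0.32, 0.4]
print(average_scans(xs, ys))
```

Note that nothing here requires the sweeps to be complete: an x that appears in only one scan is simply averaged over fewer samples.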
Implemented code from Mike successfully. For other people in the same boat, make sure to disable indexing as the array enters the for loops. That held things up for a while.
About the complex extended precision, prior to that, data was being truncated, so I switched over to that format. I imagine that's a bit of overkill, eh?
Thanks to everyone here for help with this.
I need to modify this code soon to handle the third dimension I mentioned (-1, 0, and 1). I'm thinking of (somewhat wastefully) creating a 1D array that stores all those values, then somehow sorting which data pairs belong to which of the three pairs of x and y axes, averaging them, and accounting for the fact that each pair of axes will not have equal numbers of data points (an x-axis might read 0, 0.1, 0, 0.3, 0.4, where the third value was not entered). Would you have any recommendations on how to implement that? I may have to overhaul the program completely as it is, since after approx. 24,000 measurements it slows down significantly.
First of all, correct code should not slow down significantly, so something else is wrong. You are growing arrays in shift registers without bounds, and this will tax the system after a while. You should stream the data to disk and only accumulate the two fixed-size 2D arrays as described below. At the very least, you could clear the feedback nodes after each file write in your current implementation. There is also no reason in the world to use the EXT and CXT datatypes. Way too much baggage for this application.
Here's a quick draft of how you could do the 2D thingy. Just keep two 2D arrays in shift registers and, for each data point, increment the element at the (x, z) indices. One array keeps the sum of all elements and the other the count. At the end, just divide the two 2D arrays. Of course, you would need to adjust the size of each dimension according to the desired granularity.
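The two-array technique above can be sketched in text form as well (Python standing in for the LabVIEW shift registers; the bin sizes and sample points are illustrative assumptions, not values from the actual VI):

```python
# Two fixed-size 2D arrays: one accumulates sums, the other counts.
# Dividing them element-wise at the end yields the per-bin mean.
X_BINS = 4   # granularity of the x axis (assumed)
Z_BINS = 3   # one row per z value: -1, 0, 1

sums = [[0.0] * X_BINS for _ in range(Z_BINS)]
counts = [[0] * X_BINS for _ in range(Z_BINS)]

# (x index, z value, y value) triples; z = -1, 0, 1 maps to rows 0..2
points = [(0, -1, 0.2), (1, -1, 0.3), (0, -1, 0.21), (2, 0, 0.4)]
for x, z, y in points:
    sums[z + 1][x] += y      # z + 1 shifts z = -1, 0, 1 to rows 0, 1, 2
    counts[z + 1][x] += 1

# divide element-wise; bins that never saw data are left as None here
means = [[s / c if c else None for s, c in zip(srow, crow)]
         for srow, crow in zip(sums, counts)]
```

Because both arrays are fixed-size, memory use stays constant no matter how many points stream through, which also addresses the slowdown from the ever-growing arrays.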
Matt G wrote: (an x axis might read 0, 0.1, 0, 0.3, 0.4, where the third value was not entered).
Well, a value of zero is not the same as a missing value. How can you tell the difference?
Thanks for the advice... I ended up using a 3D array instead of two 2D arrays, but I am finally feeling more comfortable with arrays thanks to your examples. That method of accruing the data in one dimension while accruing the number of acquired values in another was beautiful. Somehow, instead of just outputting zero, my program outputs NaN when there's no value, but that's actually a plus for sorting this all out. And I've replaced the complex extended precision data with doubles. Now to get the instrument working properly.
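For anyone landing here later, the single-3D-array variant described above can be sketched like this (again Python in place of the LabVIEW diagram; dimensions and names are my own assumptions, not taken from the posted VI). Bins that never receive a value come out as NaN, which, as noted, conveniently flags missing data:

```python
# One 3D array: data[z][x] holds a [running sum, count] pair.
import math

X_BINS = 5
Z_LEVELS = 3          # z = -1, 0, 1

data = [[[0.0, 0] for _ in range(X_BINS)] for _ in range(Z_LEVELS)]

def record(x, z, y):
    """Accumulate one measurement into its (z, x) bin."""
    cell = data[z + 1][x]      # z + 1 maps z = -1, 0, 1 to pages 0, 1, 2
    cell[0] += y
    cell[1] += 1

record(0, -1, 0.2)
record(0, -1, 0.21)
record(3, 1, 0.5)

# empty bins divide 0 values, so they are emitted as NaN
means = [[cell[0] / cell[1] if cell[1] else math.nan for cell in row]
         for row in data]
```

Keeping the sum and the count in the same array just trades the two shift registers for one; the averaging logic is unchanged.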