03-29-2021 10:38 PM
I have an Arduino streaming 10 data points into my LabVIEW program. My goal is to create a 2D array of the last 10 sensor readings so that I can later average them for use in my program. I'm currently running into a problem trying to delete the first (oldest) row. Instead of deleting the row, the array grows indefinitely.
I am also having a problem where my 'Last 10 readings' array has 13 columns instead of 10. I assume this is because the 'Fract/Exp String To Number' function is reading some of the characters as values. Is there an easy way to avoid this?
Thanks,
Kfly
03-30-2021 01:31 AM
You can do this with a circular buffer - begin with Initialize Array, a NaN constant (a DBL constant), and dimensions of 10x10 (sensors by history length).
Then in the loop, use Replace Array Subset to replace a row or column at a time (your choice, but wire the appropriate indices to Replace Array Subset).
You can store the "next index" in a shift register and increment it, and unroll the array as needed to get it into the right order.
With 1D arrays this is made easier by Rotate 1D Array, so you could also consider an array of clusters, each cluster containing a 1D array holding the history of one sensor - but that will require bundling/unbundling in your loop.
You could also have your cluster contain an array of all sensors at a single point in time - then your rotate is easy. Which is better probably depends on how you want to use the data, but either can be made to work.
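Since LabVIEW is graphical, here is the circular-buffer idea from above sketched in Python (the names `add_reading` and `in_order`, and the 10x10 dimensions, are illustrative; the buffer, the "next index", and the sums would live in shift registers on the block diagram):

```python
import math

HISTORY = 10   # history length (assumed from the post)
SENSORS = 10   # number of sensors

# "Initialize Array" with NaN so unfilled rows drop out of a NaN-aware average
buffer = [[math.nan] * SENSORS for _ in range(HISTORY)]
next_index = 0  # the "next index" kept in a shift register in LabVIEW

def add_reading(reading):
    """Overwrite the oldest row in place (the 'Replace Array Subset' step)."""
    global next_index
    buffer[next_index] = list(reading)
    next_index = (next_index + 1) % HISTORY  # wrap around: circular buffer

def in_order():
    """'Unroll' the buffer so the oldest reading comes first."""
    return buffer[next_index:] + buffer[:next_index]
```

The key point is that nothing is ever deleted or appended: the array is allocated once and rows are replaced in place, so it cannot grow indefinitely.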
03-30-2021 03:33 AM
Actually, it occurs to me now that I've tried it out that you could also just use a 1D array...
Here I used the same code you already had for parsing, along with the "Example of what the buffer would read" string (thank you for providing this, much easier to test!)
The output array here shows the content of this snippet after executing 9 times, obviously with the constant "read" value it will become quite boring after the 10th! But it would work with real data more effectively.
03-30-2021 04:25 AM
For more detailed discussions of running averages, have a look at our NI Week presentation from 2017 (part 2).
Most efficient is to keep a buffer of the last 10 acquisitions in a 2D array and replace the oldest row with the newest at each iteration. In a second shift register, keep the sum of all rows and just add the newest and subtract the oldest with each iteration. Then divide by the history size or [i+1], whichever is smaller.
Here's how that could look:
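The snippet image isn't reproduced here, so here is the described technique sketched in Python (the name `update` is illustrative; `buffer`, `sums`, and the iteration count correspond to shift registers, and `i % HISTORY` to the row index wired to Replace Array Subset):

```python
SENSORS = 10
HISTORY = 10

buffer = [[0.0] * SENSORS for _ in range(HISTORY)]
sums = [0.0] * SENSORS   # second shift register: one running sum per sensor
i = 0                    # loop iteration count (LabVIEW's [i])

def update(reading):
    """Replace the oldest row; add the newest and subtract the oldest
    from the running sums, then divide by the valid history length."""
    global i
    row = i % HISTORY
    oldest = buffer[row]
    for s in range(SENSORS):
        sums[s] += reading[s] - oldest[s]
    buffer[row] = list(reading)
    i += 1
    n = min(i, HISTORY)   # history size or [i+1], whichever is smaller
    return [total / n for total in sums]
```

Each iteration costs O(SENSORS) regardless of the history length, because the sum is updated incrementally instead of being recomputed over the whole buffer.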
03-30-2021 07:48 AM
Thanks! This is exactly what I was looking for
03-30-2021 08:04 AM
@altenbach wrote:
For more detailed discussions of running averages, have a look at our NI Week presentation from 2017 (part 2).
Most efficient is to keep a buffer of the last 10 acquisitions in a 2D array and replace the oldest row with the newest at each iteration. In a second shift register, keep the sum of all rows and just add the newest and subtract the oldest with each iteration. Then divide by the history size or [i+1], whichever is smaller.
Here's how that could look:
And then add code to handle NaN, Inf and -Inf...
Once a NaN, Inf, or -Inf is added to the sum, it never goes back.
Another problem is when the values have a big range. This is not hypothetical; it happened to me just recently. A device returned a big value as an error, 1E300 or something. That will ruin the average, even after the 1E300 is removed from the history. The resolution of a double just isn't infinite. An extended (EXT) doesn't help either; there is still a limit.
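A tiny Python demonstration of why the outlier ruins the running sum permanently (a DBL has about 15-16 significant decimal digits, so adding a value near 1E300 erases any normal-sized contribution):

```python
# a huge outlier entering the running sum wipes out small contributions,
# and subtracting it back out later cannot recover them
s = 0.0
s += 1e300   # bogus "error" value from the device enters the sum
s += 1.0     # a normal reading: lost, since 1e300 + 1.0 == 1e300
s -= 1e300   # the outlier leaves the history and is subtracted back out
print(s)     # 0.0, not 1.0 -- the normal reading is gone for good
```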
I chose to simply recompute the sum each iteration for histories shorter than about 20 elements. For larger averages, I use the 'efficient' method, but I check the sum for NaN, Inf, and -Inf. If it's one of those, and the value leaving the average is NaN, Inf, or -Inf, I do a manual sum to reinitialize the running sum.
Of course, you could be lucky enough that this doesn't happen in your situation. I wasn't...
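A Python sketch of that repair strategy for a single sensor (the name `update` and the 20-element history are illustrative; in LabVIEW the buffer, sum, and count would be shift registers):

```python
import math

HISTORY = 20
buffer = [0.0] * HISTORY   # per-sensor history (1D here for brevity)
running_sum = 0.0
i = 0

def update(x):
    """Efficient running sum, plus a repair step: once the sum has gone
    non-finite AND the contaminating sample has just left the history,
    rebuild the sum manually instead of trusting the incremental value."""
    global running_sum, i
    leaving = buffer[i % HISTORY]
    running_sum += x - leaving
    buffer[i % HISTORY] = x
    i += 1
    if not math.isfinite(running_sum) and not math.isfinite(leaving):
        # the bad sample just dropped out of the buffer: manual re-sum
        running_sum = sum(buffer)
    n = min(i, HISTORY)
    return running_sum / n
```

While the non-finite sample is still inside the history the average stays NaN (which is arguably correct); as soon as it leaves, the manual re-sum restores a clean running sum.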
03-30-2021 10:56 AM - edited 03-30-2021 11:02 AM
wiebe@CARYA wrote:
Once a NaN, Info or -Inf is added to the sum, it never goes back.
Yes, one needs to filter NaN, Inf, and -Inf if this is a possibility. Not sure if this can occur by parsing a string from an instrument with limited DAQ resolution (a dozen bits or so). If the number cannot be parsed, we get whatever default value is wired to "Fract/Exp String To Number" (unless the string is literally "inf", "nan", etc.).
Maintaining a separate array for the sum is optional. It is easy to calculate the average directly from the 2D array, where NaN values drop out again after 10 reads.
(Of course, there is probably a direct way to scan the raw string into the DBL values without the need for all that replacing, slicing, dicing, and decimating, but that's a different discussion.) One possibility:
If we discard inputs, we can no longer use [i], but need to maintain a count in a shift register that is incremented only for valid data. We could toss the entire received array if an error occurs, or we could even toss individual elements by setting them to zero and maintaining a separate "valid count" for each array element to do the averaging.
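The per-element "valid count" idea could look like this in Python (the names `add_reading` and `averages` are illustrative; the `valid` flags would be a second array in a shift register alongside the data buffer):

```python
import math

SENSORS = 10
HISTORY = 10
buffer = [[0.0] * SENSORS for _ in range(HISTORY)]
valid = [[False] * SENSORS for _ in range(HISTORY)]  # per-element flags

def add_reading(row, reading):
    """Store a reading; invalid elements (non-finite) are zeroed and flagged."""
    for s, x in enumerate(reading):
        ok = math.isfinite(x)
        buffer[row][s] = x if ok else 0.0
        valid[row][s] = ok

def averages():
    """Per-sensor mean using each sensor's own valid count."""
    out = []
    for s in range(SENSORS):
        n = sum(valid[r][s] for r in range(HISTORY))
        total = sum(buffer[r][s] for r in range(HISTORY))
        out.append(total / n if n else math.nan)
    return out
```

Because the bad elements are stored as zero and excluded from that sensor's count, one sensor's parse failure no longer contaminates the averages of the others.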
Another solution to the original problem would be to place a "Mean PtByPt" inside a parallel FOR loop configured for 10 instances and autoindex over the array, but that does not really scale well for larger arrays either. 😄