
efficiently time average a lot of data

You can place it in a parallel FOR loop configured for x parallel instances. Or you can build your own, keeping an array of means. I posted an example long ago; try to find it (sorry, posting by phone).
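In text form, the "array of means" idea looks roughly like this (a Python sketch standing in for the LabVIEW diagram; the channel count is only a placeholder):

```python
import numpy as np

class RunningMeans:
    """One running mean per channel, kept in a fixed-size array and
    updated in place for each new scan (no history is stored)."""
    def __init__(self, n_channels=8):          # channel count is a placeholder
        self.means = np.zeros(n_channels)
        self.count = 0

    def update(self, new_scan):
        """Fold one new scan (one value per channel) into the running means."""
        self.count += 1
        self.means += (np.asarray(new_scan, dtype=float) - self.means) / self.count
        return self.means
```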
Calculation of the median only requires partial sorting (look at "quickselect").
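For illustration, a partial-sort median in Python (NumPy's partition implements an introselect/quickselect-style selection; this is just a sketch, not LabVIEW code):

```python
import numpy as np

def median_by_partial_sort(values):
    """Median via partial sorting: np.partition places the k-th smallest
    element at index k without fully sorting the rest."""
    a = np.asarray(values, dtype=float)
    n = a.size
    k = n // 2
    if n % 2:                                # odd count: the middle element
        return np.partition(a, k)[k]
    part = np.partition(a, [k - 1, k])       # even count: average the two middle
    return 0.5 * (part[k - 1] + part[k])
```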
Message 11 of 15

How accurate does the median need to be? How wide is the range of input values?

 

If all you want is an approximation of the median, you could maintain a fixed-size histogram in memory. You can even get a resolution higher than the number of bins by splitting each input fractionally between the two adjacent bins, according to where it falls relative to the bin spacing. All my suggestions operate on fixed-size arrays and are thus very efficient.
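A rough sketch of that histogram approach (Python for illustration; the value range and bin count here are pure assumptions and would need to match your data):

```python
import numpy as np

# Fixed-size histogram; the range and bin count below are assumptions.
lo, hi, n_bins = 0.0, 10.0, 256
centers = np.linspace(lo, hi, n_bins)
hist = np.zeros(n_bins)

def add_sample(x):
    """Split each sample between the two adjacent bins according to where
    it falls between bin centers, so resolution exceeds the bin count."""
    pos = (np.clip(x, lo, hi) - lo) / (hi - lo) * (n_bins - 1)
    i = int(pos)
    frac = pos - i
    hist[i] += 1.0 - frac
    if i + 1 < n_bins:
        hist[i + 1] += frac

def approx_median():
    """Bin center where the cumulative count first passes half the total."""
    csum = np.cumsum(hist)
    return centers[int(np.searchsorted(csum, csum[-1] / 2.0))]
```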

Message 12 of 15

The longest range (history) over which I will need to calculate a median is about 200 samples. I have 8 channels of data and 1000 scan points per channel, so I will need to calculate 8000 medians. My data update rate is 20 Hz.

 

With respect to the accuracy of the median, an approximation might work, but I'm not sure. 

Message 13 of 15

Since you seem to know all final sizes, it would help to do everything in place. Prepending array data to a 2D array (as you do in your first code image) is the least efficient approach, because the array constantly needs to be reallocated from scratch.
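For comparison, the in-place pattern in Python-style pseudocode (allocate the buffer once up front and overwrite rows, the text equivalent of initializing the array outside the loop and using Replace Array Subset inside it; the sizes are placeholders):

```python
import numpy as np

# Allocate once, overwrite in place; sizes are placeholders.
n_channels, history = 8, 200
buf = np.zeros((history, n_channels))   # created once, never resized
write_idx = 0                           # next row to overwrite (circular)

def push_scan(scan):
    """Replace the oldest row in place instead of prepending a new one,
    so the array is never reallocated."""
    global write_idx
    buf[write_idx, :] = scan
    write_idx = (write_idx + 1) % history
```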


@tysonl wrote:

The longest range (history) over which I will need to calculate a median is about 200 samples. I have 8 channels of data and 1000 scan points per channel, so I will need to calculate 8000 medians. My data update rate is 20 Hz.

 


I don't understand your calculation. Do you need a "rolling median" of the last 200 values, updated for each new point, or just a median of consecutive 200-point sections?

Do you need to display the medians as the data is acquired, or could they all be calculated in a post-processing step once all the data is available?

Message 14 of 15

I am doing a rolling median on the data as it arrives. As is typical, the median is calculated on whatever data is available until the 200-sample buffer is filled. Once it is filled, the newest data is pushed into the buffer and the oldest data is popped off. This is basically a rolling median filter.
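In text form, that filter is roughly the following (a Python sketch; the shapes come from the numbers earlier in the thread, and the ring-buffer indexing is just one possible implementation):

```python
import numpy as np

# Ring buffer of the last 200 samples for every channel/scan point.
history, n_channels, n_points = 200, 8, 1000
ring = np.zeros((history, n_channels, n_points))
filled = 0       # how many samples are valid so far (median uses only these)
write_idx = 0    # row to overwrite next, once the buffer wraps

def update(frame):
    """Push one 8 x 1000 frame, return the 8 x 1000 array of rolling medians."""
    global filled, write_idx
    ring[write_idx] = frame                    # newest overwrites oldest in place
    write_idx = (write_idx + 1) % history
    filled = min(filled + 1, history)
    return np.median(ring[:filled], axis=0)    # np.median partially sorts internally
```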

Message 15 of 15