You simply can't know what the mean of 10 points is going to be... if you haven't received those points yet. You can wait until you have all 10 points and only calculate your values after that.
Well, you simply need to wait until you have 10 points before dividing by the mean, then wait for the next 10, and so on. If you already have the final array, simply loop to get all sequential subsets and process each. Here's a quick example.
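Since LabVIEW code is graphical and can't be pasted here as text, here is a minimal Python sketch of the same idea: loop over the full array in sequential blocks of 10 and process each block independently. The `process_block` function is a made-up placeholder for whatever your per-block processing is (here, dividing by the block mean).

```python
def process_block(block):
    # Placeholder processing step: divide each point by the block's own mean.
    mean = sum(block) / len(block)
    return [x / mean for x in block]

data = [float(i) for i in range(1, 41)]  # stand-in for the full acquired array
block_size = 10

results = []
for i in range(0, len(data), block_size):
    block = data[i:i + block_size]
    if len(block) == block_size:  # only process complete blocks of 10
        results.extend(process_block(block))
```

In LabVIEW this would typically be a For Loop with Array Subset (or a decimated auto-indexing tunnel) feeding the per-block processing code.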
(What is your LabVIEW version? For older versions, the code needs to be changed a little bit. EDIT: changed to 2012)
I have the 2012 version.
The purpose is to avoid waiting for all the points (the full data set is actually much larger, in the range of thousands). I am receiving these blocks of 10 points in real time from a device, and I want to process them as they arrive. However, the result of processing the smaller blocks should be similar to what I would get if I had the whole file (for example, 8000 points).
I have attached an image of an example output graph.
The blue plot is the ideal plot (using all the points from the file). The green plot uses slightly fewer points, and the red plot uses far fewer points (which is closer to the number of points I will be using). As you can see, the red and blue plots are different: the shape is similar, but the values are quite different.
Well, there is a high-pass filter and a log after this stage; the plots show the final result after all the processing. But I was thinking that it is the division by the mean that is causing the variation.
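That suspicion is easy to check numerically. A quick hedged illustration (in Python, with made-up data, not your actual signal): dividing each block by its own mean only matches dividing by the whole-file mean when every block mean happens to equal the global mean, which real data rarely satisfies.

```python
data = [1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0]

# Normalize by the whole-file mean (the "blue plot" case).
global_mean = sum(data) / len(data)
whole = [x / global_mean for x in data]

# Normalize each block by its own mean (the real-time, small-block case).
block_size = 4
blockwise = []
for i in range(0, len(data), block_size):
    block = data[i:i + block_size]
    block_mean = sum(block) / len(block)
    blockwise.extend(x / block_mean for x in block)

# The two results differ point by point whenever the block means differ
# from the global mean, so the discrepancy appears before the filter/log.
```

With this example data the first block's mean (2.5) is well below the global mean (13.75), so the two normalizations disagree immediately, and any high-pass filter and log stage afterwards will only reshape that difference, not remove it.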
I have to admit that I am quite lost as to what the goal of this exercise is. 😞