09-08-2010 08:05 AM
Also, insist on a CAR (Corrective Action Request) and post it here so that we can keep track of it.
Shane
09-15-2010 05:12 PM
Let me try to explain some further details about the peak-detector algorithm. As stated in the help, this algorithm takes 'width' points at a time and performs a quadratic fit. If width is five, then the first fit is over point indices 0-4, the next quadratic fit is over point indices 1-5, and so on through the data. Each fit then attempts to qualify the peak as legitimate by checking several criteria.
First, the peak is qualified as a peak or valley using the concavity (sign of second derivative). To protect against almost flat data, the second derivative is also thresholded.
The peak amplitude is also checked against the threshold parameter, and rejected if not larger than the threshold.
Then the peak location is checked to make sure it is located within a certain fraction of the interval centered on the data points used for the fit. In other words, if the width is 5, then the interval center is at index 2 (for indices 0-4) and the allowed interval width is a fraction of the 'width' parameter. This is a sanity check to make sure the peak is near the center of the current interval, and not near the edge or even outside the interval.
Then there is a check involving the current and previous peak locations. This handles the case where the data is very smooth and many quadratic fits locate the same peak. If the previous fit identified a peak and the current fit also identified a peak, both fits are checked for how close the peak is to the center of the data points used in the fit. The peak that is closest to the center of its data is kept as the current peak estimate.
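To make this concrete, here is a rough Python/NumPy sketch of the qualification logic just described. The parameter names and the default threshold and center-fraction values are my own illustrative choices, not the values the shipping LabVIEW implementation uses:

```python
import numpy as np

def detect_peaks(y, width=5, threshold=0.0,
                 second_deriv_min=1e-12, center_fraction=0.5):
    """Sketch of the sliding quadratic-fit peak qualification.

    second_deriv_min and center_fraction are illustrative guesses;
    the real implementation has its own tuned values.
    """
    peaks = []                    # (location, amplitude) of accepted peaks
    prev = None                   # previous window's candidate, if any
    half = (width - 1) / 2.0      # center of the fit interval
    x = np.arange(width, dtype=float)

    for i in range(len(y) - width + 1):
        a, b, c = np.polyfit(x, y[i:i + width], 2)  # y ~ a*x^2 + b*x + c

        # 1) Concavity: a peak needs 2a < 0, and |2a| must exceed a small
        #    threshold to protect against nearly flat data.
        if 2 * a >= -second_deriv_min:
            prev = None
            continue

        x_peak = -b / (2 * a)                  # vertex of the parabola
        amp = a * x_peak**2 + b * x_peak + c   # fitted peak amplitude

        # 2) Amplitude must be larger than the threshold parameter.
        if amp <= threshold:
            prev = None
            continue

        # 3) Sanity check: the vertex must lie within a fraction of the
        #    interval centered on the points used for the fit.
        dist = abs(x_peak - half)
        if dist > center_fraction * width / 2.0:
            prev = None
            continue

        cand = (i + x_peak, amp, dist)

        # 4) If the previous window also qualified a peak, keep whichever
        #    candidate is closest to the center of its own window.
        if prev is None:
            peaks.append(cand[:2])
            prev = cand
        elif cand[2] < prev[2]:
            peaks[-1] = cand[:2]
            prev = cand

    return peaks
```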
This function has been around for some time, and has been tweaked for a number of aberrant datasets. In some cases there was a threshold that was changed, or additional logic added to handle certain cases. In some cases the dynamic range of the data can challenge even double precision arithmetic.
All that being said, it is quite possible to have data that seems to have a peak which should be identified but isn't. If we were to fix the peak detector to handle a particular dataset it is possible that we would change the behavior for another dataset. You could then have slightly different noise and still not find peaks that "should" be found, or find peaks that "should not" be found.
There is a certain underlying assumption in the peak detector algorithm that the data is relatively smooth. If the data is very smooth, then using a 'width' parameter of 3 makes perfect sense. If the data is not smooth, then a larger width allows a type of implicit smoothing that can help with some noise. However, if you want fewer surprises, you should take explicit control of the smoothness of your data. There are two approaches I have used with some success.
First, if you are unsure whether a peak is due to noise, make the data smoother without changing its basic shape. This can be done by resampling. If you resample (interpolate) by a factor of 4 or 5, then your data will have peaks that are well separated in terms of the number of points between them. This allows the internal logic of the peak detector to behave more predictably.
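For example, in Python, SciPy's resample_poly can stand in for the resampling step (the synthetic signal here is just a placeholder for your data):

```python
import numpy as np
from scipy.signal import resample_poly

# A short synthetic signal standing in for the measured data.
y = np.cos(2 * np.pi * 0.05 * np.arange(200)) + 0.05 * np.random.randn(200)

# Upsample by a factor of 5: neighboring peaks are now separated by
# roughly 5x as many points, so the detector's window logic (even with
# a width of 3) behaves much more predictably.
y_dense = resample_poly(y, up=5, down=1)
```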
Second, if your data does have some higher frequency noise, try smoothing it before detecting peaks.
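In Python this could look like the following, with SciPy's savgol_filter standing in for the LabVIEW filter VIs (the window length and polynomial order are illustrative and should be tuned to your noise):

```python
import numpy as np
from scipy.signal import savgol_filter

y = np.cos(2 * np.pi * 0.05 * np.arange(200)) + 0.05 * np.random.randn(200)

# Savitzky-Golay smoothing attenuates high-frequency noise while
# preserving peak heights and widths better than a plain moving average.
y_smooth = savgol_filter(y, window_length=11, polyorder=3)
```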
By doing either of these you will be more closely satisfying the underlying assumption of smoothness, and the peak detection results will be more predictable.
I have taken your original VI and modified it to allow for resampling and/or smoothing. The resampling is implemented using 'Rational Resample.vi', and the smoothing is implemented using a Savitzky-Golay filter, a moving average, and an equiripple lowpass filter. If you don't smooth the data and resample by 5, then the three peaks located near 10560 Hz are consistently identified for all values of the width parameter up to 15. If you interpolate this way, then using a width of 3 is probably the best choice. Smoothing improves the situation, and after smoothing it may not even be necessary to resample. Anyway, the VI allows you to play with the options a bit. The Savitzky-Golay filter seems to be best for your dataset, and is a good choice for this kind of application.
I hope this helps.
-Jim
09-16-2010 01:48 PM
I ran your data through a debug build of the analysis library and found the specific condition that seems to be causing the peak to be ignored. We have a requirement that the slope at the center of the data be of opposite sign between the previous fit and the current fit. The idea is that if the data is smooth, the previous fit slope is positive, and the current fit slope is negative, then there is a peak in there somewhere. If both slopes are positive then we have not "reached" the peak yet. Around LabVIEW 8.0 we added a bypass for this behavior for widths of 3 and 4 so that some small, narrow peaks could be found. This was an arbitrary choice, but again highlights the assumption that the data is fairly smooth. After extending the bypass to width 5, the behavior is similar to width 4 (3 peaks are identified).
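Here is a small Python sketch of that slope criterion as I've described it (a paraphrase for illustration, not the shipping code):

```python
import numpy as np

def center_slope(y, i, width):
    """Slope of the quadratic fit, evaluated at the window center."""
    x = np.arange(width, dtype=float)
    a, b, _ = np.polyfit(x, y[i:i + width], 2)
    xc = (width - 1) / 2.0
    return 2 * a * xc + b          # d/dx of a*x^2 + b*x + c at xc

def slopes_straddle_peak(y, i, width):
    """True when the data rises into the previous window and falls out
    of the current one, i.e. a peak lies between the two fits.
    (Requires i >= 1; per the post above, widths 3 and 4 bypass
    this check since around LabVIEW 8.0.)"""
    return center_slope(y, i - 1, width) > 0 and center_slope(y, i, width) < 0
```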
You could argue that this is a bug, and by inspecting the data it seems as if there should be a peak identified, but only if the other criteria are ignored. It could also be argued that the data is not very smooth over the smaller widths, and so a larger width is indicated at this point.
Another way to think about the peak detector is to consider the simplest case. Suppose the peak detector fit quadratics to every set of 'width' points and returned all results that were concave down. This would lead to a large number of peaks, but most of them would be fairly redundant, because they are really identifying the same peak in the data. Now consider how to remove the duplicate peaks. Which quadratic fit most correctly identifies the actual peak in the data? One constraint could be that the identified peak lies within the 'width' data points. Another constraint is to enforce that the slope changes sign between two consecutive fits. The logic is mainly there to prune the possible peaks so that a "real peak" in the data is correctly identified just once. This tradeoff has been adjusted and tweaked but is not perfect 😞
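A small runnable illustration of that simplest case (the Gaussian test signal is invented for demonstration):

```python
import numpy as np

def naive_concave_down_windows(y, width=5):
    """Report every window whose quadratic fit is concave down,
    with no pruning at all."""
    x = np.arange(width, dtype=float)
    return [i for i in range(len(y) - width + 1)
            if np.polyfit(x, y[i:i + width], 2)[0] < 0]

# A single broad Gaussian peak...
y = np.exp(-0.5 * ((np.arange(50) - 25) / 5.0) ** 2)
print(naive_concave_down_windows(y, width=5))
# ...produces a long run of consecutive window indices around index 25:
# many redundant detections of the same feature, which is exactly what
# the vertex-in-window and slope-sign-change constraints prune away.
```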
I tried another experiment with your data. Consider only those 5 points in your image. Increase the amplitude of the second point by 0.09% and a peak is now detected for a width of 5.
Attached is a LV7.1 version of the code. Please note that the code is a little different because not all the LabVIEW 9.0 functions are available in LabVIEW 7.1.
Does this clarify the situation?
-Jim
09-16-2010 08:01 PM
@DSPGuy wrote:
Does this clarify the situation?
-Jim
Hi Jim, yes, that clarifies things..
Thank you for the detailed explanation.. it's great.
09-17-2010 02:30 AM
I wanted to add my thanks also.
This is a thread I'll be bookmarking for the next time a question like this one comes up. Great stuff.
Shane.