
Problem with LabVIEW function


No.  The LabVIEW dataflow paradigm does not allow a function or VI to modify its inputs (I won't say there are no exceptions, but I don't know of any).  If data is passed through a node, that node may modify the data, but then the output is a different wire and thus a different variable.
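For readers coming from text-based languages, here is a rough analogy in plain Python (nothing to do with LabVIEW's internals, just an illustration of by-value dataflow versus in-place modification):

```python
import numpy as np

def scale_by_value(data, factor):
    """Dataflow style: the input stays untouched and the result comes
    out as a new array -- the 'different wire' in LabVIEW terms."""
    return data * factor              # NumPy builds a fresh array here

def scale_in_place(data, factor):
    """Reference style: the caller's buffer is modified directly."""
    data *= factor                    # mutates the array the caller passed in

x = np.array([1.0, 2.0, 3.0])
y = scale_by_value(x, 10.0)
print(x)    # [1. 2. 3.]    -- input unchanged, like data on an input wire
scale_in_place(x, 10.0)
print(x)    # [10. 20. 30.] -- input overwritten; LabVIEW wires do not behave this way
```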

 

Lynn

Message 11 of 36

I've worked with the peak find function before and had problems understanding where the results come from.

 

One thing to note:  If you have a window of 5, the HEIGHT of the found peak is defined by the differences in amplitude between those five points, not the absolute amplitude (AFAIK).  Given that your data seems to be very finely resolved, giving it five points to find a peak seems like trying to model the circumference of the earth by measuring a football field.
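A quick numerical illustration of that (plain NumPy on a synthetic trace; whether the threshold is really applied to this relative height is the AFAIK part): with a finely sampled, broad peak, a narrow window sees almost none of the peak's actual height.

```python
import numpy as np

# Synthetic stand-in for a finely resolved trace with one broad, smooth peak
x = np.linspace(-1.0, 1.0, 2001)
y = 100.0 * np.exp(-x**2 / 0.1)
peak = int(np.argmax(y))

for width in (5, 25, 101, 501):
    half = width // 2
    window = y[peak - half:peak + half + 1]
    # All a width-point window ever "sees" is the variation inside itself,
    # not the absolute amplitude of the peak.
    visible_height = window.max() - window.min()
    print(f"width {width:4d}: absolute peak {window.max():7.2f}, "
          f"variation visible inside the window {visible_height:10.5f}")
```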

 

Given these poor starting conditions, you can have significant problems when the 5 points do not interpolate well and you just miss the absolute height difference required for detecting a peak.  If you want it to be more robust, you need to give it a larger window.

 

You might think that finding wrong peaks and missing existing peaks are different problems, but really they're not.  It all has to do with how your search parameters fit your data.  If you want NI to post you an Excel sheet highlighting why this produces problem X in your code, then be my guest, but you've already received an answer to your question.  Your peak finding is unreliable because you're specifying a window which is simply much too small.

 

Shane.

Message 12 of 36

I've been looking at the VI you posted and have come up with the following observations:

 

1) You simply need a larger window for detecting the peaks.  This is plainly clear.

2) In the extreme case you are testing, I'm not sure the algorithm is detecting the peaks correctly.  For the width-5 example, it seems to actually be detecting a valley instead of a peak, even though you have told it to only find peaks.

3) Regarding your original post: the values returned for window widths of 3 and 4 are not correct either.  You claim they are because they're close enough, but they are also not "correct".  You only obtain correct results from window size 6 and upwards.

 

Shane.

Message 13 of 36

 


@johnsold wrote:

No.  The LabVIEW dataflow paradigm does not allow a function or VI to modify its inputs (I won't say there are no exceptions, but I don't know of any).  If data is passed through a node, that node may modify the data, but then the output is a different wire and thus a different variable.

 

Lynn


 

I guess you misunderstood my question.. I get what you meant: the data at the input is definitely not modified (unless it's a "call by reference" type of operation instead of "call by value", something like that).

 

What I'm asking is: after the data enters the peak detection VI, does it go through a "smoothing operation" that modifies the values before the actual peak detection, or does the peak detection function (curve fitting) work on the data directly?
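To put the question in code form (rough Python, just to illustrate the two possibilities; I'm not claiming either one is what the VI actually does):

```python
import numpy as np

# Possibility A: the VI smooths the data first, then finds the peak on the
# modified values (a hypothetical moving-average pre-filter, purely illustrative).
def smooth_then_detect(y, width):
    kernel = np.ones(width) / width
    smoothed = np.convolve(y, kernel, mode="same")
    i = int(np.argmax(smoothed))
    return i, smoothed[i]

# Possibility B: the VI works on the raw values directly, e.g. by fitting a
# parabola to 'width' points around a candidate and taking the fitted vertex.
def fit_detect(y, candidate, width):
    half = width // 2
    xs = np.arange(candidate - half, candidate - half + width, dtype=float)
    a, b, c = np.polyfit(xs, y[candidate - half:candidate - half + width], 2)
    loc = -b / (2.0 * a)               # zero-slope point of the fitted parabola
    return loc, np.polyval([a, b, c], loc)
```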

 

 

Message 14 of 36

 


@Intaris wrote:

 

One thing to note:  If you have a window of 5, the HEIGHT of the found peak is defined by the differences in amplitude between those five points, not the absolute amplitude (AFAIK).  Given that your data seems to be very finely resolved, giving it five points to find a peak seems like trying to model the circumference of the earth by measuring a football field.

 

Given these poor starting conditions, you can have significant problems when the 5 points do not interpolate well and you just miss the absolute height difference required for detecting a peak.  If you want it to be more robust, you need to give it a larger window.

 

Your peak finding is unreliable because you're specifying a window which is simply much too small.

 

Shane.


 

Interesting point..

 

But you missed an important observation: the peak was detected with the smaller widths of 3 and 4, so this is not an issue of the width being "too small".

 

It's as if modeling the circumference of the earth by measuring a table tennis table works, but measuring the football field doesn't.

 

Using a small width should potentially result in more peaks being found, not in the detection of a peak being missed.

Message 15 of 36

 


@Intaris wrote:

I've been looking at the VI you posted and have come up with the following observations:

 

1) You simply need a larger window for detecting the peaks.  This is plainly clear.

2) In the extreme case you are testing, I'm not sure the algorithm is detecting the peaks correctly.  For the width-5 example, it seems to actually be detecting a valley instead of a peak, even though you have told it to only find peaks.

3) Regarding your original post: the values returned for window widths of 3 and 4 are not correct either.  You claim they are because they're close enough, but they are also not "correct".  You only obtain correct results from window size 6 and upwards.

 

Shane.


 

That part in red: THAT is a problem too.  Isn't that a BUG?

 

I'm not posting to ask "what went wrong".. I'm asking WHY it is giving the wrong result.

 

As for Pt 3, did you actually examine the data points in detail?

 

The first peak is actually correct and within expectation for widths 3 and 4, according to the article on how the peak detection function works.  For bigger widths, due to some "smoothing" effect, the peak is shifted.
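Here is roughly what I mean, sketched on a synthetic, skewed peak in plain Python (nothing to do with the VI's internals): the wider the window, the further the fitted parabola's maximum can drift from the sample maximum when the peak is not symmetric.

```python
import numpy as np

# Asymmetric peak: rises quickly, decays slowly (log-normal-like shape)
x = np.linspace(0.01, 5.0, 2001)
y = np.exp(-(np.log(x))**2 / 0.08)
true_idx = int(np.argmax(y))

for width in (5, 25, 101, 401):
    half = width // 2
    seg = slice(true_idx - half, true_idx - half + width)
    a, b, _ = np.polyfit(x[seg], y[seg], 2)
    vertex = -b / (2.0 * a)            # maximum of the fitted parabola
    print(f"width {width:4d}: fitted peak at x = {vertex:.4f}, "
          f"sample maximum at x = {x[true_idx]:.4f}")
```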

 

 

 

 

Message 16 of 36

 


@Limsg wrote:

 

 

That part in red: THAT is a problem too.  Isn't that a BUG?

 

I'm not posting to ask "what went wrong".. I'm asking WHY it is giving the wrong result.

 

As for Pt 3, did you actually examine the data points in detail?

 

The first peak is actually correct and within expectation for widths 3 and 4, according to the article on how the peak detection function works.  For bigger widths, due to some "smoothing" effect, the peak is shifted.

 


 

Of course it's a problem.  If you read it again carefully you'll see that I'm supporting your idea that it might be returning bad values.

 

I've told you before as to the WHY: you need detailed information about the algorithm to answer that.  You won't get that from a non-NI poster, so please bear that in mind for further posts.

 

As for Pt 3, I examined the points in full detail.  I even overlaid the smoothed curve (quadratic fit function) over the points, and I have to say that the first peak returned for widths 3 and 4 is more than 4 actual data points off the "real" peak.  Since this is outside the actual window you are defining, this cannot be described as "smoothing".  It's a wrong result (by your own definition).

 

What has me a bit confused is that on the one hand you're willing to accept a willy-nilly definition of a "correct" peak (Pt 3), yet on the other hand you're demanding absolutely rigorous results from an algorithm being used with massively sub-optimal parameters.  Something has to give.

 

Shane

Message 17 of 36

 


@Limsg wrote:

 

Interesting point..

 

But you missed an important observation: the peak was detected with the smaller widths of 3 and 4, so this is not an issue of the width being "too small".

 

It's as if modeling the circumference of the earth by measuring a table tennis table works, but measuring the football field doesn't.

 

Using a small width should potentially result in more peaks being found, not in the detection of a peak being missed.


 

I haven't missed anything.  Trust me.  If you want to get further with your problem, listen a bit more closely.  We are trying to teach others here.

 

You seem to have only a rudimentary understanding of how the peak detection algorithm works.  Although I don't know each and every last intricate detail, I DO understand the problems a too-small window can cause in such operations.  I don't point this out to slam you, but rather to reassure you that I'm not just talking out of my orifice.  I'm trying to impart some hard-learned experience, if you're willing to listen.

 

"Using a small width should potentially result in more peaks being found, not in the detection of a peak being missed."  While this may seem logical, it is simply not accurate.

 

If you take a small subsection of ANY data set (a football field, a tennis court, a bathroom, as opposed to the entire Earth's surface) and fit it expecting to find the true circumference of the earth, you will get erroneous results.  How erroneous they are depends on the local deviations from the overall trend you wish to observe.  Obviously every measurement has certain errors associated with it, and it is perfectly possible (unfortunately, often the case) that a completely insufficient cross-section of a data set can produce some apparently quite convincing results.  This often depends on exactly how the local deviations are spread within the insufficient sample.  One day the answer will be one thing, the next day a bit different.  Only when you take a large enough sample size will the result be acceptable within tight limits day after day after day.

 

What we have here is that the local deviations of the SINGLE curve you presented happen to give good (but not correct) results for window sizes 3 and 4.  Window size 5 just has bad luck, hits a "perfect storm" of local variation over the general trend, and turns up empty-handed.

 

What happens if you take 100 such curves and analyse the trends?  I bet you'll see much more deviation in the results with decreasing window size.  This is a measure of the (lack of) robustness of your fit.  You can't test the robustness of ANY algorithm on a single data set.  Try it with lots of real data sets and you'll see that the robustness (the ability to generate consistent results between different data sets) increases with increasing window size.  Going below a certain window size leaves you fitting more noise than signal, even if for a single curve it looks like it might be "OK".
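If you don't have 100 real curves to hand, a quick simulation makes the same point (plain NumPy on synthetic data; this only mimics the local quadratic-fit idea, it is not the actual VI):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 2001)
signal = 100.0 * np.exp(-x**2 / 0.1)     # the same underlying curve every time
true_idx = int(np.argmax(signal))

def fitted_peak_index(y, width, centre):
    """Local quadratic fit around 'centre'; returns the fractional peak index."""
    half = width // 2
    xs = np.arange(width, dtype=float)
    a, b, _ = np.polyfit(xs, y[centre - half:centre - half + width], 2)
    return centre - half + (-b / (2.0 * a))

for width in (3, 5, 9, 25, 101):
    results = []
    for _ in range(100):                 # 100 noisy realisations of one curve
        y = signal + rng.normal(0.0, 0.5, x.size)
        results.append(fitted_peak_index(y, width, true_idx))
    print(f"width {width:4d}: spread (std) of the fitted peak index "
          f"over 100 runs = {np.std(results):10.2f}")
```

The spread collapses as the window grows; that consistency between data sets is exactly the robustness I'm talking about.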

 

That's all I have to say on this topic at the moment.  I have already hinted that I suspect there MAY be funny things going on within the algorithm, but the only person who can confirm or deny that is someone with access to the source code of the algorithm.

 

Shane.

Message 18 of 36

Well said, Shane.

 

I modified your program to fit a parabola to the data points near the peak because I wanted to see what it does.  I have attached an LV 8.0 version (untested) of the program.  I cannot save it back to version 7.1, so you may want to post it to the Save to Earlier Versions thread.  The images below show what I did.

 

I create an array of X-axis values because the General Polynomial Fit.vi requires it and I use it in the XY graphs later.

 

I extract a subset of length SubGroup Size starting half of SubGroup Size before the first peak located by the Peak detector VI.  I fit a polynomial of order 2 (parabola curve) to the subset.  I calculate the peak location from the point where the fitted polynomial has zero slope.  I calculate the peak amplitude from the polynomial at the peak location.
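For anyone following along without LabVIEW, the same refinement step looks roughly like this in NumPy (just the idea of the block diagram, with made-up names; it is not the attached VI):

```python
import numpy as np

def refine_peak(y, peak_index, subgroup_size):
    """Fit an order-2 polynomial (parabola) to 'subgroup_size' points starting
    half of 'subgroup_size' before the peak reported by the Peak Detector,
    then take the fit's zero-slope point as the refined location and amplitude."""
    start = max(peak_index - subgroup_size // 2, 0)
    xs = np.arange(start, start + subgroup_size, dtype=float)   # X-axis values
    a, b, c = np.polyfit(xs, y[start:start + subgroup_size], 2)
    loc = -b / (2.0 * a)              # where the fitted parabola has zero slope
    amp = np.polyval([a, b, c], loc)  # fitted amplitude at that location
    return loc, amp
```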

 

I plot both the original data and the fitted polynomial on the same graph.

 

Run this with values of SubGroup Size ranging from 3 to more than 30.  Once you get above 9 or 10 the fit looks very good, and the peak location and amplitude change very little as the size changes.  At Size = 5 the fit does not work because the Peak Detector does not find the 10583 peak.

 

Lynn

Message 19 of 36

You forgot the attachment.

Message 20 of 36