Easiest Nonlinear Fit Ever - Feel Like I'm Going Crazy

Solved!

Hey everyone,

 

I've used the Levenberg-Marquardt nonlinear curve fitting VIs before with great success.  I've used them a few times to fit complicated functions with multiple free parameters, and they have always shone as long as I set up my model VI, limits, and initial parameter guess well.  Now I'm trying to fit the simplest function ever and I'm bombing badly.

I have exactly 1 (ONE!!) parameter I'm trying to fit.  The function it goes into is dead simple.  But the way it's working makes me feel completely nuts:

1. No matter what I choose as my initial parameter, I always get the exact same result out in the 'best fit parameters' array, even though by choosing different values manually, I can clearly get a better fit and the residual changes.

2. No matter what, it always executes exactly 6 iterations.  This happens even if I set the termination tolerance extremely small, or even if I set my maximum iterations to less than 6!

3. My optimal value will be somewhere between 1e14 and 1e17, I believe.  I was wondering if the huge swing in orders of magnitude was what was causing the algorithm to crash (maybe because it wasn't moving the initial value enough to even see a difference for the first couple of iterations), so I also tried redoing my function with two parameters: one being the coefficient of a scientific-notation number (with limits between 1 and 10) and the other being the exponent (with limits between 1 and 20).  I then combined those two parameters inside the model to turn them back into a single number and used THAT number in the function, but it still bombed just as badly.  I restored the example I posted to the original single-parameter fit.
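As a side note, the symptom in (1) can be reproduced in a toy setting.  This is a minimal Python sketch (with a made-up one-parameter model, not the actual VI) of why a parameter of order 1e15 can defeat naive numeric derivatives in double precision:

```python
# Toy illustration (hypothetical model, not the poster's actual VI): why a
# parameter of order 1e15 can defeat naive finite-difference derivatives
# in float64.
import math

def model(p, x):
    # hypothetical stand-in function with one free parameter p
    return math.log(p * x)

p0 = 1e15          # initial guess in the poster's range (1e14..1e17)
h  = 1e-8          # a typical fixed absolute step for numeric derivatives

# In float64 the spacing between representable numbers near 1e15 is ~0.125,
# so p0 + h rounds back to exactly p0 and the "perturbed" call is identical.
print(p0 + h == p0)                                   # True: step vanishes
d = (model(p0 + h, 2.0) - model(p0, 2.0)) / h
print(d)                                              # 0.0: looks flat
```

With a fixed absolute step, the solver sees a derivative of exactly zero and has no reason to move the parameter, which matches the "same result no matter the initial guess" symptom.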

I'm just completely baffled.  I'll admit this is the first time I've ever tried to use these VIs to fit a single free parameter.  Are these VIs only designed to handle multiple parameters?  If so, why didn't my experiment that I described in (3) above work or at least do something?  Also is the huge swing in possible values causing me trouble?  I feel like I've fit stuff with huge swings of possible values before.

The fact that I always get exactly 6 iterations is what's really making me feel nuts.  How is it even getting to 6 if I set the maximum iterations to less than that?

I searched the forum for the last couple hours looking for something like this.  I honestly couldn't find anything.  Really sorry if this is a crazy stupid question that's been answered before.

Help much appreciated!  Thanks!

Message 1 of 5
Solution
Accepted by Westonly

"1. No matter what I choose as my initial parameter, I always get the exact same result out in the 'best fit parameters' array, even though by choosing different values manually, I can clearly get a better fit and the residual changes"
I tried changing the parameter manually, and found that the residue got smaller if the parameter became closer to 0. Actually, I couldn't see any change in the residue for values of the parameter below 37 or so, due to the scaling of the problem. What is a better fit for you?

 

"The fact that I always get exactly 6 iterations is what's really making me feel nuts."
The function count includes the function calls made for the numeric partial-derivative computations, and we don't check the function count against the termination criteria until after the partial derivatives are computed.  The default for numeric derivatives is central difference.

 

The partial derivatives are very small in magnitude for this problem.  It looks like the step size at each iteration is only one or two orders of magnitude above machine epsilon.  Because of the very small relative change in residue induced by the small step size, Lev-Mar may be reporting that it has converged.  I strongly recommend rescaling the problem, although this may be a moot point given that a manual search seems to show the parameter tending to 0.
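A small sketch of that false-convergence mode, using a hypothetical scalar residue and an assumed relative tolerance of 1e-8:

```python
import math

# Sketch: a relative-change convergence test can fire spuriously when the
# step is many orders of magnitude smaller than the parameter itself.
# Hypothetical residue function and tolerance, not the poster's actual model.
def residue(p):
    return (math.log(p) - 30.0) ** 2

p = 1e15
step = 1.0          # representable, but ~15 orders below |p|
r0 = residue(p)
r1 = residue(p + step)
rel_change = abs(r1 - r0) / abs(r0)
print(rel_change < 1e-8)   # True: looks "converged" though p barely moved
```

The residue changes by less than one part in 1e8 for a step of 1.0 on a parameter of 1e15, so a relative-change test declares victory immediately; rescaling the parameter toward order 1 avoids this.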

-Jim

Message 2 of 5

Thanks DSPguy!

 

So I rescaled . . . instead of letting my initial parameter vary between 1e10 and 1e20, I limited it to stay between .00001 and 10000 and set the initial guess to 1.  Then I multiply it by 1e15 INSIDE the model, and multiply the final result by 1e15 to get my answer.
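That rescaling trick can be sketched in Python like so (toy logarithmic model and a crude 1-D gradient-descent stand-in for the real Lev-Mar solver; the 1e15 scale factor matches the description above, everything else is made up):

```python
import math

# Sketch of the rescaling trick: fit a parameter s of order 1, and apply
# the physical scale factor inside the model itself.
SCALE = 1e15

def model(s, x):
    p = s * SCALE            # recover the physical parameter inside the model
    return math.log(p * x)

def fit_scaled(xs, ys, s0=1.0, lr=0.05, iters=200):
    # crude 1-D gradient descent stand-in for the real Lev-Mar solver
    s = s0
    def sse(sv):
        return sum((model(sv, x) - y) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        h = 1e-6 * max(abs(s), 1.0)          # step proportional to |s|
        g = (sse(s + h) - sse(s - h)) / (2 * h)
        s -= lr * g
    return s * SCALE                          # rescale the answer at the end

# synthetic data generated with a known parameter p_true = 3e15
p_true = 3e15
xs = [1.0, 2.0, 5.0, 10.0]
ys = [math.log(p_true * x) for x in xs]
p_fit = fit_scaled(xs, ys)   # recovers roughly p_true
```

Because s stays of order 1, a derivative step proportional to |s| actually moves the model, which is the whole point of the rescaling.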

 

When I did that it was still railing at the lower limit, but it was doing 45 iterations so I at least felt like it was doing SOMETHING.

 

I experimented with eliminating the first few points of my data, which have the oscillations.  I thought Lev-Mar would kind of average those out and I could get away with leaving them in, but as soon as I removed the first 9-10 points, the model suddenly snapped to a result.  With 8 points removed it rails at the lower limit, and with 9 points removed it's golden: I end up with a result right in the middle of my range, like I expect.  It's just a little fragile . . . I have to window the portion of the curve that I want fit or it dies.

 

Thanks for explaining about the partial derivative count thing.  It was making me question reality.

 

In general, does the Lev-Mar algorithm expect 'smaller' input parameters?  It seems to me that when it starts wiggling the values to see how the function evolves over the parameter space, it would take a step proportional to whatever the values are to begin with, so it shouldn't matter how big or small they are.

 

Anyways, thanks again!

 

-Wes

Message 3 of 5

Glad it's working for you now.  Lev-Mar does not, in general, expect small values.  Your problem has Y-values on the order of 1E0, and X values/parameter on the order of 1E13.  This wide range of values can and does have numerical consequences.  I tried implementing explicit derivatives for your model function, and even then the scaling caused problems.

Your problem appears to be linear, or can be rearranged to be linear.
Your model is of the form:

y = m*ln((a+x)*x/k), where m = 0.02567, k = 7.2093E+19.

Dividing both sides by m and exponentiating gives:

exp(y/m) = (a+x)*x/k

Rearranging a little:

k*exp(y/m) - x^2 = a*x

This is a linear fit with intercept set to 0. When I apply this to your data I get a=1.16278E+15

The distance being minimized is different from the original problem's, but this may be a more robust solution for you.  It could also be used as an initial guess for Lev-Mar.
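For reference, a zero-intercept linear fit z = a*x reduces to the one-line least-squares formula a = sum(z*x) / sum(x*x).  A Python sketch with synthetic data (the x-values here are made up for illustration; m, k, and the recovered a match the numbers above):

```python
import math

# Sketch of the linearized fit: with z = k*exp(y/m) - x^2, the model
# z = a*x is a zero-intercept linear fit, so a = sum(z*x) / sum(x*x).
m = 0.02567
k = 7.2093e19

def solve_a(xs, ys):
    zs = [k * math.exp(y / m) - x * x for x, y in zip(xs, ys)]
    return sum(z * x for z, x in zip(zs, xs)) / sum(x * x for x in xs)

# synthetic check with a known a (hypothetical x-values, not the poster's data)
a_true = 1.16278e15
xs = [1e6, 2e6, 5e6, 1e7]
ys = [m * math.log((a_true + x) * x / k) for x in xs]
print(abs(solve_a(xs, ys) - a_true) / a_true < 1e-6)  # True
```

No iterations at all: the parameter comes straight out of a closed-form regression, which is what makes this formulation robust.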

 

-Jim

Message 4 of 5

Thanks again!

 

When I first started out with this problem, I felt like Lev-Mar was overkill if I could somehow rearrange my problem as linear, exponential, logarithmic, or whatever, so long as the result I was looking for could be expressed as some combination of the output parameters of one of those fits.  If I could do that, then I wouldn't need 'iterations' beyond what the normal regression algorithms perform; I could essentially just solve for my result by performing a regression and then manipulating the fit parameters.  When I tried to rearrange my formula, though, that quadratic term inside the exponential kept messing me up, and I could never isolate it the way I wanted.  It didn't occur to me to simplify it by setting the intercept to zero like you did.

 

I think I do still need to minimize the difference according to the non-simplified function, but this is definitely a better initial guess than nothing.

 

I really appreciate your help!

 

-Wes

Message 5 of 5