
nonlinear curve fit - termination conditions, scaling

Is there a description of all of the factors involved in the termination of a fit, including lambda, weighting and scaling, and the function and parameter tolerances?

 

I've used the LM fitting routine in its various configurations, and it works nicely for most cases. However, if any exponential changes are involved, it seems that scaling or weighting is needed, and the appropriate scaling or weighting seems to depend on the form of the data. Any guidelines would be helpful, e.g. for phase lead-lag compensation with a constant multiplier: H = K(1 + jf/fo1)(1 + jf/fo2) / [(1 + jf/fp1)(1 + jf/fp2)].

 

Scaling for K is straightforward, but what about the poles and zeros (fo, fp)? They can also make the value of H change over orders of magnitude.
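To make the scaling question concrete, here is the model as a quick numerical sketch (Python, purely for illustration):

import numpy as np

def H(f, K, fo1, fo2, fp1, fp2):
    # H = K (1 + jf/fo1)(1 + jf/fo2) / [(1 + jf/fp1)(1 + jf/fp2)]
    jf = 1j * np.asarray(f, dtype=float)
    return K * (1 + jf/fo1) * (1 + jf/fo2) / ((1 + jf/fp1) * (1 + jf/fp2))

# Past a corner frequency each factor grows like f, so |H| easily spans
# orders of magnitude - hence the scaling/weighting question.
print(np.abs(H([1e1, 1e3, 1e5], 2.0, 1e3, 1e4, 1e2, 1e5)))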

 

How do you know if the solution is the best one, or if further crunching will result in a better solution?

 

Changing lambda has a big effect on the quality of the fit, but the residual doesn't seem to decrease monotonically as lambda is reduced.

 

Any help would be appreciated, even a pointer finger in the right direction. Although I have a background that should allow me to understand the program better, that part of my brain may be permanently glazed over. Even so, now would be a good time to learn more about it. My apologies if this is already explained in detail somewhere - searches keep showing the same info, which either means that's all there is, or that I don't know enough about the problem to ask the right questions.

Message 1 of 3

The stock VIs for nonlinear fitting are open source. You can double-click them and inspect the code.

 

Lambda determines the balance between steepest descent (high lambda) and Gauss-Newton (low lambda). A higher value is more stable when we are far from the minimum. It starts out at 10 and gets multiplied by 0.4 if the fit improves and by 10 if the fit gets worse. These settings provide a good balance across a mix of problems. I have my own custom VIs where the starting lambda and the scaling factors can be set and optimized for a particular problem. That typically does not change the fit quality; it just speeds up the initial convergence.
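To make that schedule concrete, here is a minimal LM loop in Python - a sketch of the idea, not the stock VI's exact code (the 10 / 0.4 / 10 constants are the ones described above; whether the stock VI damps with diag(J'J) or the identity, I'd have to check):

import numpy as np

def lm_fit(residual, jacobian, p, lam=10.0, max_iter=200):
    # lam blends steepest descent (large lam) with Gauss-Newton (small lam).
    sse = np.sum(residual(p) ** 2)
    for _ in range(max_iter):
        J = jacobian(p)                      # m x n Jacobian of the residuals
        A = J.T @ J
        # Marquardt-style damped normal equations for the proposed step:
        dp = np.linalg.solve(A + lam * np.diag(np.diag(A)),
                             -(J.T @ residual(p)))
        new_sse = np.sum(residual(p + dp) ** 2)
        if new_sse < sse:        # fit improved: accept step, shrink lambda
            p, sse = p + dp, new_sse
            lam *= 0.4
        else:                    # fit got worse: keep p, grow lambda
            lam *= 10.0
    return p, sse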

 

Termination happens if an error occurs, lambda exceeds 10000, the maximum number of iterations is reached (default 200), or the function tolerance criterion is met (a relative change below 1e-8 by default). You can open "check convergence" to see exactly how the tolerance is calculated.
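In pseudocode, the exit test is roughly this (a sketch; the exact tolerance formula lives in "check convergence", so treat the last term as approximate):

def should_terminate(error, lam, iteration, sse, prev_sse,
                     max_lam=1e4, max_iter=200, ftol=1e-8):
    # Relative change in the sum of squared errors between iterations.
    rel_change = abs(prev_sse - sse) / max(abs(sse), 1e-300)
    return error or lam > max_lam or iteration >= max_iter or rel_change < ftol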

 

Weighting is per data point. By default, all points are weighted equally with 1. If you supply an array of weights, you can give different relative weights to individual points; you can even set the weight of certain points to zero, and those points will be ignored. For example, if part of your data is very noisy, you could reduce the weight in that area or ignore it entirely.
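In other words, the weights simply scale each squared residual, so a zero weight removes the point from the fit entirely. A tiny sketch with made-up numbers:

import numpy as np

def weighted_sse(y, y_model, w):
    # chi^2 = sum_i w_i * (y_i - model_i)^2; w_i = 0 drops point i.
    return np.sum(w * (y - y_model) ** 2)

y       = np.array([1.0, 2.1, 2.9, 9.9, 5.1])   # one obvious outlier
y_model = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w       = np.array([1.0, 1.0, 1.0, 0.0, 0.5])   # drop the outlier, down-weight the tail
print(weighted_sse(y, y_model, w))               # 0.025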

 

Your function is tricky because it involves a lot of divisions, and some parameters (or combinations of parameters) can lead to a division by zero. If the corner frequencies are always positive, maybe you could use the constrained nonlinear fit and set the minima to reasonable values so division by zero cannot occur. You could also re-parameterize the function to make it more stable, e.g. fit "ifp2 = 1/fp2" instead of "fp2", or even fit 1/H or log(H). Of course, all of this will change the error calculations for the direct parameters.
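Here is what such a re-parameterization might look like (a SciPy sketch with synthetic placeholder data, fitting the dB magnitude of H with log10 parameters so everything stays positive and uniformly scaled - my illustration, not the stock VI):

import numpy as np
from scipy.optimize import least_squares

def model_db(logp, f):
    # Parameters are log10 of (K, fo1, fo2, fp1, fp2): always positive,
    # and a unit step in any parameter means one decade, so scaling is uniform.
    K, fo1, fo2, fp1, fp2 = 10.0 ** logp
    jf = 1j * f
    H = K * (1 + jf/fo1) * (1 + jf/fo2) / ((1 + jf/fp1) * (1 + jf/fp2))
    return 20 * np.log10(np.abs(H))      # fit log(H), here as dB magnitude

def residuals(logp, f, mag_db):
    return model_db(logp, f) - mag_db

# Synthetic demo data (placeholders, not the attached measurements):
f = np.logspace(1, 6, 200)
true = np.log10([2.0, 1e3, 1e4, 1e2, 1e5])   # K, fo1, fo2, fp1, fp2
mag_db = model_db(true, f)
fit = least_squares(residuals, true + 0.2, args=(f, mag_db))
print(10.0 ** fit.x)

Note this fits only the magnitude; for complex data you would stack the real and imaginary parts of the residual instead.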

 

It also seems very important to have good initial parameter guesses. Your model has a lot of symmetry: swapping (fo1, fp1) with (fo2, fp2), for example, gives an equally good solution. Starting significantly closer to one of the solutions makes it more likely to be found. If you start halfway between them, the direction to the optimum is ambiguous and the fit can end up in the woods.
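One cheap way to break that symmetry (my own trick, not something in the stock VIs) is to parameterize the second corner as a non-negative decade offset from the first, so the ordering is fixed by construction:

def ordered_corners(q1, dq):
    # fo1 = 10**q1, fo2 = fo1 * 10**abs(dq) >= fo1, so the swapped
    # duplicate solution cannot occur.
    fo1 = 10.0 ** q1
    fo2 = fo1 * 10.0 ** abs(dq)
    return fo1, fo2

print(ordered_corners(3.0, 1.0))   # (1000.0, 10000.0)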

 

Feel free to attach your model and some typical data so I can play around a little bit. It is difficult to judge by just looking at the formula.

Message 2 of 3

The VI, function, and representative data files are attached. I included a file with fit results from a brute-force program I wrote. It works, but it is too slow, and I need the correlation matrix to proceed further - values that should be constant (parasitic circuit board elements) are not all coming out constant. The fit values in the Word doc should be fine as initial guesses for the LM routine.

 

I started digging further into the constrained LM program, but I need to figure out how to save a revised copy if I make any changes, so as not to affect other programs through cross-linking, and how to trace execution in the clone of 'check convergence' to see which condition is causing the termination.

 

The fit is okay for some data if the guesses are very close to the final values, but if the initial guesses vary within a 'reasonable' range, the fit is not as good and could likely be improved by changing the termination conditions appropriately, the weighting, selective scaling, or the parameterization, as suggested earlier.

 

I'm using the 'Gain - ...' model, not so much the 'Generic...' model.

 

Thanks for the help. It is very much appreciated!

 

 

Message 3 of 3