
Nonlinear Fitting


Downhill Simplex (Nelder-Mead)

This is the first public release of a 2006 effort to create a "drop-in" replacement for Nonlinear Curve Fit that uses Nelder-Mead instead of Levenberg-Marquardt.

 

The only advantage is the absence of numerical partial derivatives, which makes it useful in the rare cases where the partial derivatives are problematic (e.g. if the function model is microscopically rough because it is evaluated by an iterative procedure).

 

Disadvantages are typically much slower convergence compared to LM and a lack of good intrinsic parallelization (all partial derivatives of LM can be calculated in parallel, for example!).

 

The algorithm is based on Downhill Simplex nD and retains some of its shortcomings. For example, the initial simplex is generated by a simple "+1" offset on each parameter, which might not be a reasonable scale. Of course the scaling will adapt to the problem after a few iterations, unless we step off a cliff. 😄
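
For illustration, here is roughly what that "+1" construction amounts to, as a minimal Python sketch (not the code in the attached VI):

```python
import numpy as np

def initial_simplex(x0):
    """Initial simplex from a guess x0: each extra vertex offsets one
    parameter by a fixed +1, mirroring the construction described above."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    simplex = np.tile(x0, (n + 1, 1))
    for i in range(n):
        simplex[i + 1, i] += 1.0  # same step regardless of parameter scale
    return simplex

# A parameter of order 1e6 and one of order 1e-6 get the same +1 step,
# which is why the initial scale can be far off for some problems.
print(initial_simplex([1e6, 1e-6]))
```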

 

To make it "pin compatible" with Nonlinear Curve Fit, I have added "weights" and the definition of termination conditions. An empty matrix is returned for the covariance.
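
To show how weights typically enter such a fit, here is a hedged Python/SciPy sketch of a weighted sum-of-squares objective driven by Nelder-Mead (the model and data are invented; this is not the attached VI):

```python
import numpy as np
from scipy.optimize import minimize

def weighted_sse(params, x, y, weights, model):
    """Weighted sum of squared residuals: the scalar objective
    that the simplex minimizes."""
    residuals = y - model(x, params)
    return np.sum(weights * residuals ** 2)

# Fit a line y = a*x + b to noisy synthetic data with unit weights.
model = lambda x, p: p[0] * x + p[1]
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + np.random.default_rng(0).normal(0.0, 0.1, x.size)
w = np.ones_like(x)

res = minimize(weighted_sse, [1.0, 0.0], args=(x, y, w, model),
               method="Nelder-Mead")
print(res.x)  # close to [2, 1]
```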

 

While old, this code has not been used much and is thus not fully tested. Similar to Nonlinear Curve Fit, it can easily be used for multiple dimensions in the independent variables. Many improvements are possible.

 

Examples:

You can use any of the examples for Nonlinear Curve Fit and replace the fitting VI with this version. Here are some observations using the front panel defaults:

Ellipse fit: works fine

Sum of three Gaussians with offset: does not converge, because (1) the initial guesses are too poor and (2) the initial simplex scaling (+1) is far off. It works if better initial estimates are entered manually (see the sketch below these observations).

Gaussian surface with offset: works fine, but converges much more slowly, as expected.
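
To illustrate the second observation, here is a Python/SciPy sketch (synthetic data, invented parameters; not the attached example) of a sum-of-three-Gaussians fit that converges once the center estimates are reasonable:

```python
import numpy as np
from scipy.optimize import minimize

def gauss_sum(x, p):
    """Sum of three Gaussians plus a constant offset.
    p = [A1, c1, s1, A2, c2, s2, A3, c3, s3, offset]."""
    y = np.full_like(x, p[9])
    for i in range(0, 9, 3):
        A, c, s = p[i], p[i + 1], p[i + 2]
        y += A * np.exp(-((x - c) / s) ** 2)
    return y

rng = np.random.default_rng(1)
true_p = [1.0, 2.0, 0.5, 0.8, 5.0, 0.7, 0.6, 8.0, 0.4, 0.1]
x = np.linspace(0.0, 10.0, 200)
y = gauss_sum(x, true_p) + rng.normal(0.0, 0.01, x.size)

sse = lambda p: np.sum((y - gauss_sum(x, p)) ** 2)

# Centers near the true peaks converge; centers piled at one end
# typically stall in a local minimum, as observed above.
guess = [1.0, 2.2, 1.0, 1.0, 4.8, 1.0, 1.0, 7.9, 1.0, 0.0]
res = minimize(sse, guess, method="Nelder-Mead",
               options={"maxiter": 20000, "adaptive": True})
print(res.x)  # should land near true_p, if slowly (10 parameters)
```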

 

(Attachment is in LabVIEW 2015)

 

EDIT (9-25-2017): I also attached the same VI down-converted to LabVIEW 8.2. There is nothing fancy in the code, so it should convert nicely, but I have no way to test. Please report back if there are issues. You should be able to open this in any LabVIEW version 8.2 or newer.

 

Message 1 of 13

An example where Levenberg-Marquardt fails due to problems with the partial derivatives can be found here. In this case, Nelder-Mead continues to work fine.
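
That example isn't reproduced here, but a generic stand-in shows the mechanism: if the model evaluation is microscopically rough (as from an iterative procedure with a finite tolerance), finite-difference derivatives turn into noise, while a value-only method like Nelder-Mead is unaffected. A Python/SciPy sketch with an invented model:

```python
import numpy as np
from scipy.optimize import minimize

def rough_model(x, a, b):
    """Smooth model plus a microscopic, rapidly varying 'roughness'
    standing in for an iteratively evaluated model."""
    jitter = 1e-7 * np.sin(1e9 * (a + b))
    return a * np.exp(-b * x) + jitter

x = np.linspace(0.0, 5.0, 100)
y = 2.0 * np.exp(-0.8 * x)

# A finite-difference step h ~ 1e-8 cannot resolve the oscillation, so
# difference quotients pick up erratic terms of order 1e-7 / h ~ 10,
# swamping the true partial derivatives (order 1) that LM relies on.
# Nelder-Mead only compares function values, where the 1e-7 jitter is
# negligible:
sse = lambda p: np.sum((y - rough_model(x, *p)) ** 2)
print(minimize(sse, [1.0, 1.0], method="Nelder-Mead").x)  # ~[2.0, 0.8]
```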

Message 2 of 13

Hi everyone,

 

After testing altenbach's Nelder-Mead VI on several problems of varying complexity, I have found it to be very useful and reliable.

However, if there are several solutions (i.e. two or more sets of parameters produce the same function, so the parameters are ambiguous), the algorithm sometimes gives unrealistic parameters as a solution.

It would therefore be useful to give the VI some constraints, such as maximum/minimum values for the parameters, in order to avoid unrealistic solutions.

 

Does anyone have an idea about how to implement that in Nelder-Mead algorithm?

 

Best regards,

 

Hobgoblin

Message 3 of 13

Yes.  The central step in the NM Algorithm is to compute the value of your Objective Function at each point of the Simplex.  If you want to impose a constraint (such as "Parameter X lies in [a, b]") and a Simplex point violates it, you multiply the Objective Function by 100, basically saying "this is not a good point", and the Simplex will "recoil" from it.
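
A Python/SciPy sketch of this recoil idea (a slight variant: the violation amount is added in as well, so the penalty cannot vanish if the raw objective happens to be small outside the box; the bounds and objective are made up):

```python
import numpy as np
from scipy.optimize import minimize

def penalized(objective, lower, upper):
    """Wrap an objective so simplex points outside [lower, upper]
    are heavily penalized and the simplex 'recoils' from them."""
    def wrapped(p):
        violation = np.sum(np.maximum(0.0, lower - p) +
                           np.maximum(0.0, p - upper))
        if violation > 0.0:
            return 100.0 * (abs(objective(p)) + violation)
        return objective(p)
    return wrapped

# The unconstrained minimum of the objective is at (3, -2), outside
# the box [0, 2] x [-1, 1]; the penalty pushes the solution to the
# nearest feasible corner.
obj = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2
res = minimize(penalized(obj, np.array([0.0, -1.0]), np.array([2.0, 1.0])),
               [1.0, 0.0], method="Nelder-Mead")
print(res.x)  # expect approximately (2, -1)
```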

 

You can find this technique (and the Method itself) described by doing a Web search.  I've incorporated this into my coding of the Algorithm.

 

Bob Schor

Message 4 of 13

Not at all surprised that the solution may not be unique. How does a good initial guess help here? For example, with what I am doing there is already a fairly well-established set of initial guesses/estimates. Also, is Levenberg-Marquardt immune to that? (I guess not, either.)

Message 5 of 13

BlessedK,

     You would really benefit, I think, from a course on Numerical Methods.  There you would learn about coming up with numerical solutions (meaning "put an actual number to it", not a symbolic representation like "pi") to problems, and would understand what it means to approximate an answer.  For example, how do you represent "pi", or how do you compute the square root of 2?  Neither can be exactly expressed as a Dbl ...

     Minimization problems involve "finding the minimum in some N-Dimensional Space".  There are various methods, including "Brute Force" (try "all the numbers in the Space and choose the one giving the minimum"), which has obvious problems.

     There are two common sources of "variability" (or "error") that often arise.  One is that the Space may have multiple "dimples", local minima isolated from each other, so if you start with a poorly-chosen Initial Guess, you may find a minimum, but the "wrong" one.  Here, a good Initial Guess gives you a better chance of converging to the "minimum you want" (notice I didn't say the "right" minimum ...).
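
A tiny Python/SciPy illustration of that point: which minimum Nelder-Mead reaches depends entirely on the Initial Guess, and a crude multistart (keep the best of several runs) is one common hedge (the function here is made up):

```python
import numpy as np
from scipy.optimize import minimize

# Two "dimples": a global minimum near x = -2.3 and a local one near x = 2.2.
f = lambda x: 0.1 * x[0] ** 4 - x[0] ** 2 + 0.3 * x[0]

# The starting point selects the basin, and hence the answer:
for x0 in (-2.0, 2.0):
    print(x0, "->", minimize(f, [x0], method="Nelder-Mead").x)

# Crude multistart: run from several guesses, keep the lowest value.
results = [minimize(f, [x0], method="Nelder-Mead")
           for x0 in np.linspace(-3.0, 3.0, 7)]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```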

     The other problem is that the space may become "flat" near the minimum -- if you move in any direction a small amount, the value of the Objective Function might not change significantly (meaning when you do the computation, you get essentially the same value).  If you are using a Gradient Descent (Levenberg-Marquardt) method, this is equivalent to having a Gradient near (or at) zero.  With the Simplex Method, this is equivalent to having all the points of the Simplex have the same "value", in which case the usual step is to use the centroid of the Simplex.

     All of which comes down to "accepting the Approximation", and perhaps putting some tests in your code to detect some of these situations.

 

Bob Schor

Message 6 of 13

Thanks, Bob Schor,

 

Lengthy response as usual, plus asking me to take a course on numerical methods 🙂 but I asked a simple question.

 

I understand all that you said.

 

My question is: is the Levenberg-Marquardt approach immune to the issue of non-unique solutions?

 

Would using good initial guesses tend to keep the results within a realistic range or not? Whatever you consider "correct" or "right" is again not unique; you specify "a range" as the constraint.

 

Message 7 of 13

Sorry, BlessedK -- I'll try to restrain my teaching tendencies.

 

Is the Levenberg-Marquardt approach immune to the issue of non-unique solutions?  No.

 

 

Would using good initial guesses tend to keep the results within a realistic range or not?

Difficult to say, as "good initial guesses" is a tad ambiguous, as is "a realistic range".  If the problem does have a Global Minimum, and your initial guess is closer to it than to other minima, you are "more likely" to find the Global Minimum.

 

Bob Schor

 

Message 8 of 13

Thanks. Greatly Appreciated.

 

I'm not saying you should restrain your teaching tendencies. There's nothing wrong with answering the question directly first and then expanding on the points (why, how so, etc.).

Those details are definitely very useful, as are those from altenbach. I always appreciate input from both of you in particular.

For some of us the focus is on the ultimate goal (the bigger picture), and these things serve only as a "tool". Perhaps of utmost importance to us is first recognizing the right tools we need (fit for purpose). We first focus on how to use the tool to achieve our immediate goals, and then gradually follow up on other details that are equally important but less immediately pressing.

Please keep up the details. I am mostly self-taught, but I also know that teachers facilitate understanding. (Except if every book on earth were written by K.A. Stroud.)

 

 

 

Message 9 of 13

Thank you for your input; that approach seems very reasonable.

And on the problem of "several" solutions: I often have models where it can easily be shown that there are several, sometimes even infinitely many, solutions that lead to exactly the same result. The algorithm (no matter which one you use) of course can't distinguish between a physically meaningful solution and physical nonsense.

Message 10 of 13