
Problem in fitting when I am giving f'(x,a) (partial derivatives) in the model

Solved!

Hi Folks,

 

I am having trouble with nonlinear curve fitting when I use partial derivatives in the attached model VI.

 

Let me first describe what I am trying to do. I want to fit my data as a sum of 3 Gaussians. I read a few forum posts and was able to implement it. In my reference model, when I give f(x,a) alone, the VI works perfectly fine. But when I give the gradients (i.e. f'(x,a)) in the model, the fit is not correct.

 

I am attaching the two VIs (the model VI and the fitting VI) as well as the results with and without the gradients given. Please help me figure out why it isn't giving a correct fit when the gradients are supplied.

 

The reason for using f'(x,a) is to increase the convergence speed, because the application I am building can't afford a long convergence time.
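For reference, here is a minimal text sketch of the math I am trying to implement (Python used only as a stand-in, since VIs are graphical; the amplitude/center/width parameterization and parameter ordering are my assumptions and may not exactly match the 3-Gaussian model that ships with LabVIEW):

```python
import numpy as np

def gauss3(x, a):
    """Sum of 3 Gaussians. a = [A1, b1, c1, A2, b2, c2, A3, b3, c3]
    with (amplitude, center, width) per Gaussian -- assumed ordering."""
    y = np.zeros_like(x, dtype=float)
    for A, b, c in zip(a[0::3], a[1::3], a[2::3]):
        y += A * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))
    return y

def gauss3_grad(x, a):
    """Analytic partial derivatives df/da_k, one column per parameter."""
    cols = []
    for A, b, c in zip(a[0::3], a[1::3], a[2::3]):
        e = np.exp(-((x - b) ** 2) / (2.0 * c ** 2))
        cols += [e,                              # df/dA
                 A * (x - b) / c ** 2 * e,       # df/db
                 A * (x - b) ** 2 / c ** 3 * e]  # df/dc
    return np.column_stack(cols)
```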

[Attached image: With partial derivatives.PNG]

Message 1 of 9

Hey, for some reason I was not able to attach my VIs to the forum; it says the file extension is not supported. So I have attached snippets instead.

Message 2 of 9

Here are the VIs.

 

Thanks in advance.

Message 3 of 9

Thanks for attaching the VIs.  You could have just said you were using the 3 Gaussian model that ships with LabVIEW ...  But never mind, your question is clearly related not to fitting Gaussians, but to exploring the two modes of the LM Algorithm, with and without the Gradient being given.

 

So I have Good News and Bad News.  I checked your Math, and you do have the correct equations for the Gradient, and it does appear that you implemented the LabVIEW code to use it correctly.  The Bad News (as you already know) is that by specifying the Gradient, the entire LM routine seems to go out the window.

 

I tried a Single Gaussian version of your code, with the Peak at 80.  With the Gradient, the peak moves to around 110 and becomes much lower and much broader.
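For illustration, a rough text analogue of that test (a sketch only: SciPy's LM routine stands in for the Nonlinear Curve Fit VI, and the data and starting values are made up). With a correct analytic gradient, the two modes should land on essentially the same parameters; with a wrong one, the "with gradient" fit goes astray, as observed:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, A, b, c):
    return A * np.exp(-((x - b) ** 2) / (2.0 * c ** 2))

def gauss_jac(x, A, b, c):
    # Analytic partial derivatives w.r.t. A, b, c (one column each).
    e = np.exp(-((x - b) ** 2) / (2.0 * c ** 2))
    return np.column_stack([e,
                            A * (x - b) / c ** 2 * e,
                            A * (x - b) ** 2 / c ** 3 * e])

x = np.linspace(0, 200, 401)
y = gauss(x, 1.0, 80.0, 10.0) + 0.01 * np.random.default_rng(0).normal(size=x.size)

p0 = [0.5, 70.0, 15.0]
p_numeric, _  = curve_fit(gauss, x, y, p0=p0, method='lm')                 # gradient estimated numerically
p_analytic, _ = curve_fit(gauss, x, y, p0=p0, method='lm', jac=gauss_jac)  # gradient supplied
print(p_numeric, p_analytic)  # should agree closely if the analytic gradient is right
```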

 

There's something wrong with the Gradient.  I'm not (yet) sure what it is, but I'm very suspicious of the X values that are being plotted -- it almost sounds to me like somehow we have forgotten the "dt" factor in the Gradient computation.  [I'll admit I tend to use "M" languages, such as Mathematica, Maple, and MathCad, for derivatives, not LabVIEW ...]

 

Bob Schor

Message 4 of 9

The math for the gradient is incorrect. You need to multiply the middle wire (the one going into the Build Array with 3 inputs) by -1.

Message 5 of 9

@altenbach wrote:

The math for the gradient is incorrect. You need to multiply the middle wire (the one going into the Build Array with 3 inputs) by -1.


Try this.... (I took the liberty to clean up a few inefficient code constructs, but there are probably many more improvements possible.)

 

A debugging step would be to compute the numeric partial derivatives the same way the fitting routines do internally and compare them with yours.

 

(I used my parallel model template to calculate them and do a comparison with your results. Try it! If delta is 1e-6, the two results differ by less than a factor of 0.000075, close enough!)
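In text form, that comparison boils down to something like the following (illustrative sketch only; the model and helper names here are made up and are not the actual parallel-model template):

```python
import numpy as np

def model(x, a):
    # Single Gaussian, a = [A, b, c]; stand-in for the model VI.
    return a[0] * np.exp(-((x - a[1]) ** 2) / (2.0 * a[2] ** 2))

def analytic_grad(x, a):
    e = np.exp(-((x - a[1]) ** 2) / (2.0 * a[2] ** 2))
    return np.column_stack([e,
                            a[0] * (x - a[1]) / a[2] ** 2 * e,
                            a[0] * (x - a[1]) ** 2 / a[2] ** 3 * e])

def numeric_grad(x, a, delta=1e-6):
    # Central differences in each parameter, like the fitter's internal estimate.
    a = np.asarray(a, dtype=float)
    cols = []
    for k in range(a.size):
        ap, am = a.copy(), a.copy()
        ap[k] += delta
        am[k] -= delta
        cols.append((model(x, ap) - model(x, am)) / (2.0 * delta))
    return np.column_stack(cols)

x = np.linspace(0, 200, 401)
a = [1.0, 80.0, 10.0]
print(np.max(np.abs(analytic_grad(x, a) - numeric_grad(x, a))))  # tiny if the math is right
```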

 

Message 6 of 9
Solution (accepted by Rex_saint)

Actually, the real bug is that you do b-x instead of x-b (as labeled). Since it gets squared in most other places, the sign does not matter there, but it does matter for the middle parameter's derivative. Once you do the switcheroo on the subtraction inputs, the rest falls into place. 😄
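In text form, the effect of the swapped subtraction is easy to check (illustrative single-Gaussian sketch; parameter values made up):

```python
import numpy as np

A, b, c = 1.0, 80.0, 10.0          # illustrative amplitude, center, width
x = np.linspace(0, 200, 401)

def grad(d):
    # d is either (x - b) or (b - x); the exponent and the width term square it.
    e = np.exp(-(d ** 2) / (2.0 * c ** 2))
    return np.column_stack([e,                         # df/d(amplitude)
                            A * d / c ** 2 * e,        # df/d(center): sign of d matters here
                            A * d ** 2 / c ** 3 * e])  # df/d(width)

g_as_labeled = grad(x - b)   # x - b, as labeled on the diagram
g_as_wired   = grad(b - x)   # b - x, as actually wired
# Amplitude and width columns are identical; only the center column flips sign.
print(np.allclose(g_as_labeled[:, 0], g_as_wired[:, 0]),
      np.allclose(g_as_labeled[:, 2], g_as_wired[:, 2]),
      np.allclose(g_as_labeled[:, 1], -g_as_wired[:, 1]))
```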

 

 

Message 7 of 9

Son-of-a-gun, Altenbach's right (as almost always)!  I was looking at the equations (which are correct) and failed to notice that the LabVIEW subtraction of x-b was really b-x.  When I fixed this one function and ran the code, I worried that the fit was missing -- but when I made the line thicker and brighter, I realized I had missed seeing it because it went right through the center of the data points!  Wow.

 

BS

Message 8 of 9

 

 

Thank you, Altenbach... I feel so silly about this mistake. I still don't understand how I missed it; I checked all the nodes, especially this one, but failed to notice the error. You have the eye of an eagle. Awesome, sir, thanks a lot. And Bob, thank you for participating.

 

Message 9 of 9