
Levenberg-Marquardt Optimisation issue

Solved!

Hi,

 

I'd like to use the Levenberg-Marquardt algorithm with a percentage error optimisation technique instead of the absolute error technique. As far as I know, the Levenberg-Marquardt algorithm in LabVIEW uses the absolute error (i.e. Y - f(x)) when calculating the errors. How do I change this to use the percentage error, where error = [(Y - f(x))/Y]*100? I modified the error calculation within the algorithm to compute the error this way, but the results weren't satisfactory. Am I overlooking anything, or is there a lot more to this than meets the eye? I know other programs like Mathematica let you choose how the error is calculated, but I don't see that option in the LabVIEW VI. Any help appreciated.
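A minimal textual sketch of the two error definitions (this uses SciPy's Levenberg-Marquardt, not the LabVIEW VI; the exponential model, data and noise level are assumptions made purely for illustration):

# Sketch only: SciPy's Levenberg-Marquardt (method='lm'), not the LabVIEW VI.
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    a, b = p
    return a * np.exp(b * x)      # illustrative model only

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 50)
y = model((2.0, 0.8), x) * (1 + 0.02 * rng.standard_normal(x.size))

def abs_residuals(p):
    return y - model(p, x)                 # absolute error: Y - f(x)

def pct_residuals(p):
    return 100.0 * (y - model(p, x)) / y   # percentage error: 100*(Y - f(x))/Y

p0 = (1.0, 1.0)
fit_abs = least_squares(abs_residuals, p0, method='lm')
fit_pct = least_squares(pct_residuals, p0, method='lm')
print("absolute-error fit:  ", fit_abs.x)
print("percentage-error fit:", fit_pct.x)

With noisy data the two criteria generally give slightly different parameters, because the percentage form gives relatively more weight to the small-Y points.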

 

Thanks,

LMS

 

Additional Information:

I am using LabVIEW 8.2.

I changed the subVI ABX to calculate the error differently. The ABX VI contains a subVI called LM function and gradient, whose output calculates the error as Y - best fit. I changed this calculation to [((Y - best fit)/Y)*100] to get the percentage error instead. This affects alpha, beta and chisqr. I believe I may need to make more changes to the algorithm to implement the percentage error technique, but I'm not sure what changes to make. The results I received from the above changes were worse than the results I got from the absolute error method (error = Y - best fit), which shouldn't have been the case. I'm implementing the VI form of the Levenberg-Marquardt algorithm, not the formula string. Any help appreciated.
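For context, here is how the percentage residual propagates through the standard Levenberg-Marquardt quantities (my own sketch, assuming ABX follows the usual formulation as in Numerical Recipes; not taken from the NI code):

% Sketch under the assumption r_i = (Y_i - f(x_i))/Y_i; not NI's implementation.
\[
  r_i = \frac{Y_i - f(x_i;\mathbf{a})}{Y_i}, \qquad
  \chi^2 = \sum_i r_i^2,
\]
\[
  \beta_k = -\tfrac{1}{2}\frac{\partial \chi^2}{\partial a_k}
          = \sum_i \frac{Y_i - f(x_i;\mathbf{a})}{Y_i^{2}}\,
            \frac{\partial f(x_i;\mathbf{a})}{\partial a_k}, \qquad
  \alpha_{kl} \approx \sum_i \frac{1}{Y_i^{2}}\,
            \frac{\partial f(x_i;\mathbf{a})}{\partial a_k}\,
            \frac{\partial f(x_i;\mathbf{a})}{\partial a_l}.
\]

So if only the residual is rescaled while the partial derivatives feeding alpha and beta keep their unscaled form, the step no longer matches the new chisqr, which could explain results getting worse rather than better. The factor of 100 only multiplies chisqr by a constant and does not move the minimum.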

 

Thanks,

LMS

Message 1 of 8
Solution
Accepted by topic author lms17

Hi LMS,

I am not sure if I have understood you correctly.

However, the Lev-Mar algorithm calculates Y - f(x). If you divide this output by the best non-linear fit (Y) and multiply by 100, this will give you your desired effect.

 

I hope this helps you. 

Ashish Naik
Automotive Business Development Manager
National Instruments UK
Message 2 of 8

Levenberg-Marquardt uses a weighted chi-square: chisquare = Sum[(Y - f(x))² × weight].

 

I would think you should get close to what you need if you make the weights a function of Y, for example.

Just create an array of weights according to your array of Y data and wire both to the fitting VI.
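As a rough textual analogue of the weighting idea (a sketch with SciPy, not the NI fitting VI; the exponential model and noise are made up): passing a per-point sigma proportional to Y makes the solver minimise the sum of squared relative errors.

# Sketch of the weighting idea using SciPy's curve_fit, not the NI VI.
# sigma=y corresponds to weights 1/Y**2, i.e. minimising sum(((Y - f)/Y)**2).
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)      # illustrative model only

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 50)
y = model(x, 2.0, 0.8) * (1 + 0.02 * rng.standard_normal(x.size))

popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0), sigma=y)
print(popt)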

 

I would not mess with the NI code. 🙂

Message 3 of 8
Just wondering: Did you solve your problem?
Message 4 of 8

Hi,

 

I tried using the weights, but this did not work for me as the results were still worse. In the end, I used the NI VI just as it was and changed the equations I was inputting to it. For the percentage error, I set Y = 1 and f(x) = f(x)/Y; in other words, I divided the equation by Y. There was no need to multiply both sides by 100. The results I got were identical to the results I got in Mathematica using the percentage technique. Thanks for all your help.
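To spell out why this works (a one-step check, not part of the original post): with the target set to 1 and the model divided by Y, the residual the VI minimises is exactly the relative error,

% Assumes the VI minimises the sum of squared residuals (target minus model).
\[
  1 - \frac{f(x_i)}{Y_i} = \frac{Y_i - f(x_i)}{Y_i}, \qquad
  \chi^2 = \sum_i \left(\frac{Y_i - f(x_i)}{Y_i}\right)^{2},
\]

and multiplying by 100 would only scale chisqr by a constant, which is why it was unnecessary.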

 

LMS17

Message 5 of 8

Hello,

 

I'm pretty new to LabVIEW.

 

The last post helped me very much to understand the chisqr.

 

I want to understand better the calculation of alpha and beta.

 

In Numerical Recipes in C, beta is -1/2 times the derivative of chisqr.

I can't find the constant factor -1/2 in ABX.

 

I can't find the factor 1/2 of the covariance either.

 

When I try to understand the calculation of alpha, I'm completely lost.

Is it the same equation as in Numerical Recipes in C (15.5.7) or a different one?
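For reference, the definitions being discussed (Numerical Recipes in C, Section 15.5, quoted from memory, so the exact equation numbers should be checked against the book) are:

% Quoted from memory; the 1/2 factors cancel the 2 from differentiating the square.
\[
  \beta_k \equiv -\tfrac{1}{2}\frac{\partial \chi^2}{\partial a_k}
          = \sum_i \frac{y_i - y(x_i;\mathbf{a})}{\sigma_i^{2}}\,
            \frac{\partial y(x_i;\mathbf{a})}{\partial a_k}, \qquad
  \alpha_{kl} \equiv \tfrac{1}{2}\frac{\partial^2 \chi^2}{\partial a_k\,\partial a_l}
          \approx \sum_i \frac{1}{\sigma_i^{2}}\,
            \frac{\partial y(x_i;\mathbf{a})}{\partial a_k}\,
            \frac{\partial y(x_i;\mathbf{a})}{\partial a_l}.
\]

Because the ±1/2 cancels the factor of 2 produced by differentiating the square, it may simply never appear as an explicit constant in the block diagram.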

 

In LM function and gradient, is it just the gradient or second-order partial derivatives? I saw order 2 there.

 

 

Thank you very much. 

Message 6 of 8

mmlf wrote:

I want to understand better the calculation of alpha and beta. 


... continued here. Let's keep it all in one place. 😉

Message 7 of 8

altenbach wrote: 
... continued here. Let's keep it all in one place. 😉
 
 Sorry, bad link. Here it is.... 🙂

 

Message 8 of 8