I'm trying to fit the power spectral density curve from the PSD VI with the Levenberg-Marquardt (L-M) fitting VI, using a Lorentzian function as the model.
At first it appeared to work well, but I realized that the fitted coefficients vary a lot with the initial values. I understand I have to make the
initial values reasonably close to the expected values, but within some range it should not matter. With the same range of initial values I have no problem getting consistent results in Origin, and as I understand it, LabVIEW and Origin use the same method for this fitting.
In LabVIEW, for instance, if I set the initial value to 10 it gives me about 15, and for 20 it gives something like 24. Of course the mean squared error changes as well, but I don't want to iterate over initial values just to find the minimum mean squared error. I searched the forum but could not find the right answer. Please help me out.
A single Lorentzian line is a very well-behaved curve for fitting, and the result should not depend on the initial estimate as described.
What version of LabVIEW are you using?
What is the Y-range of your data? If it is very small, the algorithm may prematurely decide it has converged even when it has not. In this case, you MUST wire the stdev input with an array of the same size as your data, initialized with values appropriate for your data.
Post your code if you don't mind.
LabVIEW 8.0 has seen a major overhaul of these functions and will work much better.
Thanks, altenbach. I'm using LabVIEW 7.1. I'll post the code later.
I already looked at your 1Dfit. It looks like it generates a Gaussian or Lorentzian curve and fits it with the L-M VI, and it worked very well.
My case is a bit different, I think. I have a PSD curve that ranges from 1E-1 to 1E-9, and I want to fit it with a Lorentzian function.
So I have an equation to fit, and I'm using the L-M VI that lets me type in the equation. With that VI I have to supply initial values. As I posted, the fit depends strongly on the initial values, unlike in Origin or Igor. What should I try now?
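Since a VI can't be pasted inline, here is the shape of the problem as a minimal Python/SciPy sketch of a Levenberg-Marquardt Lorentzian fit to PSD-like data. The model form A/(f^2 + fc^2) and all numerical values are my assumptions for illustration, not taken from the poster's VI:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed Lorentzian PSD model: S(f) = A / (f^2 + fc^2)
def lorentzian(f, A, fc):
    return A / (f**2 + fc**2)

# Synthetic, noise-free PSD spanning many decades, similar to 1E-1 .. 1E-9
f = np.logspace(0, 4, 2000)          # 1 Hz .. 10 kHz
y = lorentzian(f, 1e-1, 100.0)       # "true" A = 1E-1, fc = 100 Hz

# method="lm" is Levenberg-Marquardt; a well-behaved Lorentzian should
# converge to the same answer from a wide range of initial guesses
popt, _ = curve_fit(lorentzian, f, y, p0=[1e-2, 10.0], method="lm")
A_fit, fc_fit = popt
```

Even starting an order of magnitude away from the true parameters, this recovers them, which is the behavior one would expect from the LabVIEW VI as well.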
A few suggestions: if your numerical values are quite small and you are
not specifying a weight input to Levenberg-Marquardt.vi, the algorithm
implemented in LabVIEW 7 may converge prematurely due to a hard-coded
absolute tolerance, which is applied to the weighted least-squares
residue at each iteration. The premature convergence may be severe
enough that very few Lev-Mar iterations are actually executed, causing
increased sensitivity to the initial guess. The fix for this problem is
to specify an array of weights scaled appropriately to increase the
weighted residue. You might try creating an array of weights that is
roughly 1/min(y-values), perhaps 1E6 or larger for your data. This bug
has been fixed in LabVIEW 8.0.
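In text form, the effect of wiring a weight array can be sketched with SciPy's curve_fit, whose sigma input plays the role of the stdev/weight terminal. SciPy's own implementation uses relative tolerances, so this illustrates the weighting idea rather than reproducing the LabVIEW 7 bug, and the model and numbers are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, fc):
    return A / (f**2 + fc**2)

rng = np.random.default_rng(1)
f = np.logspace(0, 4, 2000)                  # 1 Hz .. 10 kHz
y_true = lorentzian(f, 1e-1, 100.0)          # assumed A = 1E-1, fc = 100 Hz
y = y_true * rng.normal(1.0, 0.05, f.size)   # 5% multiplicative noise

# Unweighted fit: the few large low-frequency points dominate the residue
popt_u, _ = curve_fit(lorentzian, f, y, p0=[1e-2, 10.0])

# Weighted fit: sigma proportional to y gives every decade an equal say,
# the analogue of wiring a sensible array to the stdev input
popt_w, _ = curve_fit(lorentzian, f, y, p0=[1e-2, 10.0], sigma=0.05 * y)
```

For data spanning eight decades like the poster's, the weighted version is the one that treats the small tail values as meaningful rather than as numerical noise.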
You might also try to improve your initial guess, although, as Christian
points out, Lorentzian fitting is typically well behaved and should
not be overly sensitive to the initial guess. Instead of using the
max value of the array, you could try using the Peak Detector.vi to
obtain a better estimate of the peak location and amplitude.
Christian, any experience with that?
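As a text-language illustration of that idea (a sketch, not Peak Detector.vi itself, and assuming the model S(f) = A/(f^2 + fc^2)): a starting guess can be read straight off the data, since the low-frequency plateau is A/fc^2 and the corner frequency is roughly where the PSD falls to half that plateau:

```python
import numpy as np

def initial_guess(f, y):
    """Rough starting values (A0, fc0) for S(f) = A / (f^2 + fc^2).

    Assumes f is sorted ascending and the data are not too noisy.
    """
    plateau = np.median(y[: max(5, y.size // 50)])  # low-f plateau ~ A/fc^2
    below = np.nonzero(y < 0.5 * plateau)[0]        # half-plateau crossing
    fc0 = f[below[0]] if below.size else f[-1]      # corner frequency estimate
    return plateau * fc0**2, fc0

f = np.logspace(0, 4, 2000)
y = 1e-1 / (f**2 + 100.0**2)     # assumed "true" A = 1E-1, fc = 100 Hz
A0, fc0 = initial_guess(f, y)
```

Feeding such data-driven values into the L-M VI should remove most of the sensitivity to hand-typed initial guesses.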