LabVIEW


Fitting small data with a Lorentzian curve


Hello,

 

As the title says, I'm trying to fit some measurements with a Lorentzian curve. Currently I'm using the Nonlinear Curve Fit VI. I will attach everything that is needed, since at the moment I'm just playing around.

 

My problem is that the amplitudes are in the range of 1E-12 to 1E-9.

Now I found this:

"What is the Y-range of your data? if it is very small, it prematurely thinks it is converged even if it is not. In this case, you MUST wire the stdev input with an array of equal size as your data, initialized with value appropriate for your data."

 

I guess my Y-range is very small, but I don't get the rest of the quote: the VI doesn't have a stdev input, does it?

 

I swapped the LevMar method for the bounded LevMar and switched it to TRDL (trust-region dogleg). It actually generates a good fit until you set the offset to 30 or 0.5, so neither is really giving me what I would like to have.
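Since LabVIEW block diagrams can't be shown inline, here is a rough SciPy analogue of a bounded trust-region fit like the bounded LevMar / TRDL option. The Lorentzian form, the parameter names, and all the numbers are my own assumptions for illustration, not values from the attached VIs:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical peak-height Lorentzian; names are illustrative only.
def lorentzian(x, amp, x0, width, offset):
    return amp * width**2 / ((x - x0)**2 + width**2) + offset

# Synthetic data with an amplitude in the 1E-12..1E-9 range
rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 401)
y = lorentzian(x, 5e-10, 1.5, 2.0, 3e-11) + rng.normal(0.0, 1e-11, x.size)

def residuals(p):
    return lorentzian(x, *p) - y

# Data-driven initial guesses, kept inside the bounds
p0 = [y.max() - y.min(), float(x[np.argmax(y)]), 1.0, float(y.min())]

# 'dogbox' is a bounded trust-region method, roughly in the spirit of
# the bounded LevMar / TRDL choices on the Nonlinear Curve Fit VI.
# x_scale='jac' lets SciPy rescale the very differently sized parameters.
fit = least_squares(residuals, p0, method="dogbox", x_scale="jac",
                    bounds=([0.0, -10.0, 0.1, -1e-9],
                            [1e-8, 10.0, 10.0, 1e-9]))
amp, x0, width, offset = fit.x
```

Note how tight the amplitude and offset bounds have to be relative to the center and width bounds; a mismatch there is one plausible reason a bounded fit misbehaves when the offset is moved far outside the expected range.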

 

 

I have attached every needed VI.

"Messung mit Fit.vi" is the one you need to start. I know the code doesn't look pretty. As you can see, it generates values for a Lorentzian curve with hardcoded parameters.

"Ansprechbarer Lorentz-Fit.vi" is the VI where my LevMar fit is implemented.

"Lorentzfunktion.vi" is my model for the Lorentzian function.

The last VI isn't important, but it's needed since I use it.
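For readers without LabVIEW, the overall setup in those VIs (generate Lorentzian data from hardcoded values, then fit it) can be sketched in a text language. This is a SciPy stand-in, not the actual code; the Lorentzian form and every parameter value are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative peak-height Lorentzian, standing in for Lorentzfunktion.vi
def lorentzian(x, amp, x0, width, offset):
    return amp * width**2 / ((x - x0)**2 + width**2) + offset

# Hardcoded "true" values plus noise, standing in for Messung mit Fit.vi
rng = np.random.default_rng(0)
x = np.linspace(-10.0, 10.0, 401)
y = lorentzian(x, 5e-10, 1.5, 2.0, 3e-11) + rng.normal(0.0, 1e-11, x.size)

# LM-type fitters need a sensible starting point, so derive it from the data
p0 = (y.max() - y.min(), x[np.argmax(y)], 1.0, y.min())
popt, pcov = curve_fit(lorentzian, x, y, p0=p0)
amp, x0, width, offset = popt
```

SciPy's underlying MINPACK implementation uses relative convergence criteria and internal parameter scaling, which is why this sketch copes with 1E-10-sized amplitudes; the quoted advice suggests LabVIEW's implementation may behave differently.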

 

 

Greetings

Message 1 of 5

OK, what I did now is just multiply my Y-values by 1E6 and divide them again after LevMar. Since the characteristic is given by parameters that don't change with the scale, it should work like that. But is there another way? It will also increase the noise.
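That scaling workaround looks like this in a SciPy sketch (again with an assumed Lorentzian form and made-up numbers). Only the amplitude and offset scale with Y; the center and width come back unchanged:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Lorentzian; form and names are assumptions, not from the VIs.
def lorentzian(x, amp, x0, width, offset):
    return amp * width**2 / ((x - x0)**2 + width**2) + offset

rng = np.random.default_rng(2)
x = np.linspace(-10.0, 10.0, 401)
y = lorentzian(x, 5e-10, 1.5, 2.0, 3e-11) + rng.normal(0.0, 1e-11, x.size)

scale = 1e6                  # scale Y up before fitting
y_scaled = y * scale

p0 = (y_scaled.max() - y_scaled.min(), x[np.argmax(y_scaled)], 1.0,
      y_scaled.min())
popt, _ = curve_fit(lorentzian, x, y_scaled, p0=p0)

amp, x0, width, offset = popt
amp /= scale                 # amplitude and offset scale with Y ...
offset /= scale              # ... center and width do not
```

On the noise worry: multiplying the signal and the noise by the same factor leaves the signal-to-noise ratio unchanged, so the fit itself should not get worse; only the absolute numbers are larger.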

 

Message 2 of 5

@Vermax wrote:

 

"What is the Y-range of your data? if it is very small, it prematurely thinks it is converged even if it is not. In this case, you MUST wire the stdev input with an array of equal size as your data, initialized with value appropriate for your data."

 

 


Where did you get this quote?  I don't know the details of LabVIEW's implementation of LM, so I don't know if the "convergence criteria" are "absolute" or "relative" scaled.  Note that there is a "Weight" input which could be used to normalize your data.  Try taking the Standard Deviation of your data and making the Weight an initialized Array of size N whose values are 1/SD, which might be what the quote is recommending.
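For comparison, SciPy's `curve_fit` exposes the same idea through its `sigma` input, which takes the per-point standard deviations themselves (internally the weights are proportional to 1/sigma²). Check the Nonlinear Curve Fit VI's help for whether its Weight input expects 1/SD or 1/SD². A hedged sketch, with an assumed Lorentzian model and an assumed signal-free baseline region for estimating the noise:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Lorentzian; form and names are assumptions.
def lorentzian(x, amp, x0, width, offset):
    return amp * width**2 / ((x - x0)**2 + width**2) + offset

rng = np.random.default_rng(3)
x = np.linspace(-10.0, 10.0, 401)
y = lorentzian(x, 5e-10, 1.5, 2.0, 3e-11) + rng.normal(0.0, 1e-11, x.size)

# One measurement per point, so use a single noise estimate for the
# whole trace, taken from an (assumed) signal-free start of the sweep.
sd = np.std(y[:40])
sigma = np.full(x.size, sd)      # the "initialized array of size N"

p0 = (y.max() - y.min(), x[np.argmax(y)], 1.0, y.min())
popt, pcov = curve_fit(lorentzian, x, y, p0=p0,
                       sigma=sigma, absolute_sigma=True)
amp, x0, width, offset = popt
```

With a constant sigma the relative weighting of the points is unchanged, so the best-fit parameters stay the same; what it changes is the absolute scale of the weighted residuals, which is exactly the knob the quoted advice is reaching for.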

 

Bob Schor

Message 3 of 5

http://forums.ni.com/t5/LabVIEW/fitting-with-Lorentzian-curve/td-p/285300

 

That's the link.

 

There is only one measurement per data point, so I can't calculate the SD for every point. I could do it over the whole measurement, but isn't that pointless, since the array will have the same value for every data point?

Message 4 of 5

No, it is not pointless.  Yes, it is scaling it up, which is not what "weight" is intended to do (it is, indeed, intended to let you say "These points are measured extremely accurately compared to those points, so weigh them more heavily when fitting the function").  You are trying to "solve a different problem" (which depends on knowing the inner workings of the LM algorithm, so it's a bit of guesswork whether this makes sense at all).

 

The LM algorithm is an iterative "converging" algorithm that "crawls" towards the "best" answer, usually defined as the one with the closest fit to the data.  However, at some point in the process, you might be comparing Solution 1 with Solution 2, and find that (a) both solutions fit almost equally well, differing by, say, .0000000001, and (b) Solution 1, itself, differs from Solution 2 by a "trivial" amount, say .0000000001.  But if the data are such that the parameters start small, you might reach Condition (b) too early unless you scale the data up in some "reasonable" manner.  This, I think, is what is being recommended.
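The effect of those stopping conditions is easy to see in a text-language sketch. SciPy's `least_squares` exposes them as `ftol` (condition (a), change in the cost) and `xtol` (condition (b), change in the solution); SciPy's tolerances are relative rather than absolute, so this only illustrates the mechanism, not LabVIEW's exact behavior:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative Lorentzian; form and names are assumptions.
def lorentzian(x, amp, x0, width, offset):
    return amp * width**2 / ((x - x0)**2 + width**2) + offset

rng = np.random.default_rng(4)
x = np.linspace(-10.0, 10.0, 401)
y = lorentzian(x, 5e-10, 1.5, 2.0, 3e-11) + rng.normal(0.0, 1e-11, x.size)

def residuals(p):
    return lorentzian(x, *p) - y

p0 = [y.max() - y.min(), float(x[np.argmax(y)]), 1.0, float(y.min())]

# Same start, same data: loose tolerances stop the crawl earlier,
# tight tolerances let it keep refining the solution.
loose = least_squares(residuals, p0, x_scale="jac", ftol=1e-3, xtol=1e-3)
tight = least_squares(residuals, p0, x_scale="jac", ftol=1e-12, xtol=1e-12)
print(loose.nfev, tight.nfev)
```

If LabVIEW's stopping test were absolute rather than relative, data in the 1E-12 range would satisfy it almost immediately, which is exactly the "premature convergence" the quote warns about.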

 

The (old) post from 2005 that you referenced noted that the LM algorithm had been "fixed" in LabVIEW 8.  I'm assuming you are using a much more recent version, so this whole issue might be moot ...

 

Bob Schor

Message 5 of 5