LabVIEW

Nonlinear curve fitting: Lev-Mar vs. constrained Lev-Mar

Solved!

The nonlinear Lev-Mar curve fit VI works, but when it is swapped for the constrained VI, the fit fails. The attached "Fit simple gamma variate" demo shows this behavior.

I expect the constrained version to be faster and would like to be able to use it.

Help appreciated

JD

Message 1 of 14

Can you do a "save for previous" (2020 or lower)? Most here don't have LabVIEW 2022.

Message 2 of 14

Here is the LabVIEW 2019 version.

Message 3 of 14
Solution
Accepted by topic author Prof_Dee

Thanks. I was actually able to look at it on a VM with LabVIEW 2022.

 

The problem is that your X and Y arrays have different sizes. The non-constrained fit can gloss over that, but the constrained one can't.

Your X array has 360 elements, while your Y array has 361 elements. Once you delete one element from Y, both fits succeed! Try it!
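
For anyone following along without LabVIEW, here is a minimal text-language sketch of the same pitfall (Python/SciPy; the gamma-variate form and all names are illustrative assumptions, not the demo's actual code). A constrained fitter generally insists that X and Y agree in length, so check and trim before fitting:

    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        # Standard gamma-variate model (assumed form of the demo's model)
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    x = np.linspace(0.0, 36.0, 360)                                     # 360 X values
    y = gamma_variate(np.linspace(0.0, 36.1, 361), 1.0, 2.0, 3.0, 1.5)  # 361 Y values: mismatch!

    if x.size != y.size:
        y = y[:x.size]  # drop the extra trailing element (you decide which one should go)

    # Constrained fit: with equal-length arrays, this succeeds
    popt, pcov = curve_fit(gamma_variate, x, y, p0=[1.0, 1.0, 2.0, 1.0],
                           bounds=([0, 0, 0.5, 0.1], [10, 10, 10, 10]))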

 

Here's what happens if I delete the last element (of course you need to decide what needs to go):

 

[Screenshot: altenbach_0-1669486629502.png]

 

Message 4 of 14
Solution
Accepted by topic author Prof_Dee

Oh, such a simple error!

Thank you very much.

JD

Message 5 of 14
Solution
Accepted by topic author Prof_Dee

On a side note, here's a "near-literal" translation of your model into graphical code. It is about 40% faster on my rig.

 

(Yours could probably be streamlined a bit more too if you took some of the scalar calculations out of the loop. I haven't tried, because I don't do text-based code. 😄)

 

 

[Screenshot: altenbach_0-1669493753973.png]
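
In text form, "taking the scalar calculations out of the loop" looks like the sketch below (Python/NumPy; the model and parameter names are my assumptions). Anything that does not depend on the loop index can be computed once, outside the loop:

    import numpy as np

    def model_slow(t, A, t0, alpha, beta):
        # Scalar subexpression recomputed on every iteration: wasted work
        y = np.empty_like(t)
        for i in range(t.size):
            k = A / beta**alpha          # does not depend on i
            dt = max(t[i] - t0, 0.0)
            y[i] = k * dt**alpha * np.exp(-dt / beta)
        return y

    def model_fast(t, A, t0, alpha, beta):
        # Hoisted version: scalar work happens once, the rest is one array pass
        k = A / beta**alpha
        dt = np.clip(t - t0, 0.0, None)
        return k * dt**alpha * np.exp(-dt / beta)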

 

Message 6 of 14

Thank you,

That is a great help.

I had tried that too, but yours is a better implementation.

I have about 100k of these fits to do in a single brain scan, so anything that speeds things up helps a lot.

Message 7 of 14

I think you really need to fit the raw sparse data instead of the derivative of splined data at a much finer (fake!) resolution. There is no need to resample to a much larger number of points that all depend directly on the original sparse data anyway. (I haven't studied what the function would look like.) I am sure it would be significantly faster and, more importantly, more honest. You cannot base your goodness of fit on 360 points if you actually only have 25!
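
A sketch of what fitting the raw points directly could look like (Python/SciPy again; the ~25-point data here is synthetic and the model form is an assumption). The parameter uncertainties then reflect the number of points you actually measured:

    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    rng = np.random.default_rng(0)
    t_raw = np.linspace(0.0, 24.0, 25)   # ~25 real sample times, no resampling
    y_raw = gamma_variate(t_raw, 5.0, 2.0, 3.0, 1.5) + rng.normal(0.0, 0.1, t_raw.size)

    # Fit the measured points directly: no spline, no fake resolution
    popt, pcov = curve_fit(gamma_variate, t_raw, y_raw, p0=[1.0, 1.0, 2.0, 1.0])
    perr = np.sqrt(np.diag(pcov))        # uncertainties honestly based on N = 25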

 

You should really eliminate all these small loops, because there is always the danger that they will never stop. For example, the while loop in your "resample" code would never stop if the user accidentally entered a negative interval. Not bulletproof!
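
In text form, the safe pattern is to validate the interval and use a counted loop whose iteration count is known up front (a hypothetical Python stand-in for the graphical "resample" code, not a translation of it):

    def resample_times(t_start, t_stop, interval):
        # Reject bad input up front instead of spinning forever
        if interval <= 0:
            raise ValueError("interval must be positive")
        # Counted (for) loop: the point count is known in advance, so it always terminates
        n = int((t_stop - t_start) // interval) + 1
        return [t_start + i * interval for i in range(n)]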

 

Also, wiring [i] to Index Array is the same as autoindexing; there is no need to wire N.

 

[Screenshot: altenbach_0-1669496123774.png]

 


Message 8 of 14

Thanks,

I will try it out.

I wanted to increase my timing resolution, hence the resampling.

JD  

Message 9 of 14

@Prof_Dee wrote:

I wanted to increase my timing resolution, hence the resampling.

JD  


Think about it: by resampling, you are adding guesses (estimates based on preset assumptions) to a fixed data set, then hoping to increase "timing resolution" from the mix of "real" and "made-up" (interpolated) data. If you attempted to publish this data and a reviewer who knew a bit of math and statistics saw it, you'd be in Deep Weed ...

 

Bob Schor

Message 10 of 14