I am interested in some more details about two routines of the Advanced Analysis Library:
1) NonLinearFitWithMaxIters: what is the stop/convergence criterion? A change in chi^2 of less than ...?
By the way, I would like to have a NonLinFit function which would a) allow specifying error bars / weighting and b) return the number of iterations used. The latter would make it possible to judge whether the specified number of iterations was sufficient by a wide margin or only by luck; the former would allow taking experimental errors or counting statistics into account.
2) What is the difference between the two algorithms in ConvolveEx from the user's point of view? Is there any reason to prefer one method over the other, e.g. time versus memory consumption...? I could not find any such information.
1) Did I understand you correctly? You want this function to work the other way around: instead of specifying the maximum number of iterations, you would like to set the stop/convergence criterion used by the Levenberg-Marquardt algorithm, and as one output of the function you would like to get the number of iterations.
2) Performance of the two different convolution algorithms (Linear and FFT)
Thanks for addressing these questions.
1) Here I'd like to know a) the convergence criterion of the current NonLinFit routine, and b) I wanted to suggest an extended NonLinFit routine which i) permits the use of weighting factors and ii) returns the number of iterations actually needed. I.e. I'd like to set a maximum limit of, say, 1000 iterations and obtain the information that e.g. 120 iterations were actually needed to reach convergence.
2) For the ConvolveEx function, two different algorithms are available. Here I'd like to know which one to choose... the direct one or the FFT one.
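To make the weighted-fit suggestion in point 1.b.i concrete: a weighted fit minimizes chi^2 = sum over i of w_i * (y_i - f(x_i))^2 with w_i = 1/sigma_i^2. Here is a minimal sketch of the idea in plain NumPy (the function name `weighted_linear_fit` is hypothetical; this is not the CVI Advanced Analysis Library API):

```python
import numpy as np

def weighted_linear_fit(x, y, sigma):
    """Weighted least-squares fit of y = a*x + b with per-point errors sigma.

    Minimizes chi^2 = sum(w_i * (y_i - (a*x_i + b))^2), w_i = 1/sigma_i^2,
    by scaling each equation with sqrt(w_i) and solving ordinary lstsq.
    """
    w = 1.0 / sigma**2
    A = np.column_stack([x, np.ones_like(x)])
    Aw = A * np.sqrt(w)[:, None]   # row-scale the design matrix
    yw = y * np.sqrt(w)            # scale the observations the same way
    coeffs, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    return coeffs                  # (a, b)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                  # exact line, so the fit must recover a=2, b=1
sigma = np.array([0.1, 0.2, 0.1, 0.3])
a, b = weighted_linear_fit(x, y, sigma)
print(round(a, 6), round(b, 6))    # -> 2.0 1.0
```

Points with small sigma get large weights and therefore dominate the fit, which is exactly how counting-statistics errors would be taken into account.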
Unfortunately, it is impossible for me to have a closer look at the code of these functions. I forwarded your request to our developer team in the US. Their answer may take some time. As soon as I get new information, I will post it here!
Thank you for your ideas for new functions! Our developer team has noted them, and maybe you will find them in a future version of our software. If you like, you can use the "Product Suggestion Center" next time, which can be found at
For the ConvolveEx function, they gave me the following information:
"For small data sets, the Linear algorithm is faster. For large data sets the FFT algorithm is faster"
They will put this information in the future documentation of the CVI function.
The team is still working on your inquiry about NonLinearFitWithMaxIters.
My colleague from the US told me:
"For NonLinearFitWithMaxIters, the iteration will terminate either when the number of iterations exceeds the maximum value specified by user or when SSE (the summation of square error) of the last iteration is smaller than the smallest SSE among all previous iterations and the difference between the two is small enough."
I hope that this information satisfies your inquiry! If so, please click the "solved" button on the right side!
Have a great weekend
Thanks for the answers. However, it is still a little bit too early to apply the 'solved' button :-)
1) The answer from the US concerning the NonLinearFit routine is fine, but not very specific: what is the criterion used for 'the difference is small enough'? Is it the criterion from the Numerical Recipes Levenberg-Marquardt routine? Is it an absolute or a relative value...?
2) For the ConvolveEx function, what is the NI definition of a small versus a large data set? Are 1000 data points already large? And besides speed, is there any difference at all in the resulting data?
3) Thanks for the 'suggest feature' link, I wasn't aware of it.
Thank you, and enjoy the weekend too,
1) The criterion for the NonLinearFit routine is a bit complicated. We take both the absolute and the relative value into consideration, so the iteration stops when either the absolute or the relative difference reaches a certain threshold.
2) 1000 data points should already be viewed as large when choosing the algorithm for ConvolveEx. The major difference between the two algorithms lies in speed. The resulting data can be viewed as the same, apart from extremely small numerical differences. If you choose the FREQ_DOMAIN algorithm, the function performs an FFT first, then a multiplication in the frequency domain, and finally an IFFT to get back the time-domain data. So the FREQ_DOMAIN algorithm will be much faster when the data set is large (more than 10 times faster for 1000 data points on my machine).
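The FFT pipeline described in point 2 (FFT, pointwise multiplication, IFFT) can be demonstrated in a few lines of NumPy; this also illustrates the claim that both algorithms agree up to tiny numerical differences. This is a concept sketch, not the CVI code:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.25])

# Direct (time-domain / "Linear") algorithm:
direct = np.convolve(x, h)

# FFT-based algorithm: zero-pad to the full linear-convolution length so
# the circular convolution of the DFT equals the linear convolution,
# then FFT -> multiply -> IFFT.
n = len(x) + len(h) - 1
fft_based = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

print(np.allclose(direct, fft_based))        # -> True
print(np.max(np.abs(direct - fft_based)))    # extremely small residual
```

The speed difference arises because the direct method is O(N*M) while the FFT route is O(N log N), which is why the FFT wins once the data set is large.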
Hope my answer helps.
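The "either absolute or relative difference" rule from point 1 above can be sketched as a simple predicate. The threshold names `atol` and `rtol` and their values are illustrative only; the actual CVI thresholds are not documented in this thread:

```python
def sse_converged(prev_sse, sse, atol=1e-6, rtol=1e-6):
    """Stop when the SSE change is small in either an absolute or a
    relative sense (illustrative thresholds, not the CVI defaults)."""
    diff = abs(prev_sse - sse)
    return diff < atol or diff < rtol * max(abs(prev_sse), abs(sse))

print(sse_converged(1000.0, 999.9999))  # relative criterion fires -> True
print(sse_converged(1e-8, 2e-8))        # absolute criterion fires -> True
print(sse_converged(10.0, 5.0))         # large change             -> False
```

Combining both tests is a common robustness trick: the relative test alone misbehaves when the SSE is near zero, and the absolute test alone misbehaves when the SSE is very large.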