I have a function f(x) that I minimize to find the parameters of an algorithm. The algorithm consists of a coordinate transformation matrix, whose parameters are currently found using the Unconstrained Optimization VI (downhill simplex). I have had quite a bit of success with this VI, defining my algorithm as an objective function; I previously used Excel Solver for this application. The data set changes over time, and certain parameters are tracked.
Whenever I perform an optimization, I copy the minimum parameters found into the initial-guess parameters for the next run. I find that my initial guesses remain stable for a while but will occasionally go wrong. This can happen when I feed the algorithm bad data: it is difficult to tell up front that the data is bad, but it causes the minimization to wander off in some direction that is inconsistent with the real world. Because each run seeds the next, the error accumulates over repeated minimizations. In an ideal world, however, my last result really would be the best initial guess for the next optimization I perform.
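One possible safeguard for this feedback scheme is to validate the previous result before reusing it as the next initial guess. Since LabVIEW diagrams cannot be shown inline, here is a rough Python sketch; the names (`DEFAULT_GUESS`, `in_bounds`, `next_initial_guess`) and the specific bounds are my own illustrative assumptions, not part of the original VIs:

```python
# Hypothetical sketch: reuse the last optimization result as the next
# initial guess, but fall back to a known-good default if it has
# drifted outside the physically plausible range.
DEFAULT_GUESS = [10.0, 10.0]                  # assumed known-good starting point
BOUNDS = [(9.0, 12.0), (9.0, 12.0)]           # assumed plausible range per parameter

def in_bounds(params, bounds):
    """True if every parameter lies within its (lo, hi) interval."""
    return all(lo <= p <= hi for p, (lo, hi) in zip(params, bounds))

def next_initial_guess(last_result, bounds, default):
    # If the previous minimization wandered off (e.g. it was fed bad
    # data), reset to the default instead of propagating the bad guess.
    if last_result is not None and in_bounds(last_result, bounds):
        return list(last_result)
    return list(default)
```

This does not fix a bad minimization after the fact, but it stops one bad run from poisoning every subsequent run.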
One major improvement would be to constrain my initial-guess parameters. In particular, I would like to bound the parameters I use in finding the transformation matrix (in my objective function) with lower and upper limits: 9 < R < 12, for example. Can anyone with experience using the optimization VIs, knowing the information above, point me in the right direction? I came across the Constrained Nonlinear Optimization VI, but it appears to work much differently from the simplex method, so I am not sure whether it will work. I will try it next. Any thoughts?
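One common way to keep a bound like 9 < R < 12 while still using the unconstrained simplex is to add a penalty term to the objective so that out-of-bounds points are heavily disfavored. A minimal Python sketch of the idea (the function name and penalty weight are illustrative assumptions, not anything from the LabVIEW VIs):

```python
def penalized_objective(f, bounds, weight=1e6):
    """Wrap objective f so points outside (lo, hi) bounds are penalized.

    f      : callable taking a list of parameters, returning a float
    bounds : list of (lo, hi) tuples, one per parameter
    weight : penalty strength (large enough to dominate f out of bounds)
    """
    def g(params):
        penalty = 0.0
        for p, (lo, hi) in zip(params, bounds):
            if p < lo:
                penalty += (lo - p) ** 2   # quadratic penalty below the bound
            elif p > hi:
                penalty += (p - hi) ** 2   # quadratic penalty above the bound
        return f(params) + weight * penalty
    return g
```

Inside the feasible region the wrapped objective equals the original, so the simplex behaves as before; outside it, the quadratic penalty pushes the search back toward the bounds.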
I tried using the Constrained Nonlinear Optimization VI. I first ran it without any constraints (min or max) on my parameters, and it appeared to work: I could change the initial-guess parameters and, with the same set of data, it would converge to the same minimum each time. Once I add the parameter constraints, however, the VI freezes on the first iteration and I have to abort. I get no errors and am not sure why.
After some testing, it appears that the cno_isisubopt.vi subVI inside the Constrained Nonlinear Optimization VI is locking up. The code is attached. If I had to guess, this has to do with the input L advancing to Infinity, which never allows the while loop to stop. I do not know the internal workings of this VI, so it is rather difficult to debug!
This is an error that is not tracked or handled by the Constrained Nonlinear Opt LabVIEW VI, and it freezes my program.
Unfortunately I cannot post my objective function. It worked well in the Unconstrained Optimization VI. However, on the first iteration of the Constrained Nonlinear Optimization VI, the ISISUBOPT subVI (inside the CNO VI) hangs because a while loop's stop condition is never satisfied. I cannot figure out why, because I do not know what this loop is doing. A screenshot of the loop in isisubopt.vi is attached, showing which section is not being satisfied. I think this loop handles the minor iterations; I tried setting the CNO's maximum iterations to 0, but that did not work either.
Finally some results:
Still not sure why the Constrained Nonlinear Opt.VI (CNO.vi) freezes. I originally had 30 variables in the objective function I was trying to minimize. The Unconstrained Optimization VI (UO.vi) gave good results on those variables; when I used those results as the initial guess for the CNO.vi, it got stuck in the subVI as before. I then reduced the problem to 15 variables and tried again. This time I had better success, and the CNO.vi did not get stuck. After further testing, however, the CNO.vi seems more sensitive to errors in the initial guess than the UO.vi, with bad guesses driving the results to the extreme constraints. This may or may not be a problem, depending on how consistently I can provide good initial-guess values; but there again is my original problem with the CNO. I think I will need to spend more effort verifying that my initial-guess parameters are close to the actual values, and once I know that my measured data is probably accurate, then perform a UO or CNO.
Are your constraints simple bounds on the variables? If so, it may be possible to transform the constrained problem into an unconstrained one and apply the unconstrained optimization algorithm you have already had success with. Can you give some detail about your constraints?
Check your "function tolerance" variable in the "stopping criteria" for the CNO, and set it reasonably high. To my understanding, the "function tolerance" is a fraction of f(x), i.e. the cost function. If it is too low, the while loop never ends (and L rises ad infinitum). And I share your opinion: that while loop contains an awful and disgusting bug...
I have since learned that increasing the tolerance (see above) does not make sense in the long run. I had to modify the while loop in question so that it also terminates when L rises toward infinity. See attachment. Now the optimization also ends successfully for very low tolerances.
I think you are on the right track. But in your bug-fix solution you have to replace the Equal? function with Not Equal?. Otherwise the loop stops after the first run, whereas you want it to stop only when L cannot increase any more, i.e. when it has reached infinity.
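The intended stop condition can be illustrated in text form. Below is a Python stand-in for the LabVIEW while loop; the `step_L` update (doubling) and the iteration cap are pure assumptions, since the subVI's internals are unknown, but the guard shows the comparison being discussed: keep looping while the new L differs from the old one, and bail out once L has saturated at infinity:

```python
def step_L(L):
    # Stand-in for whatever the subVI does to L each minor iteration;
    # in the failure case it keeps growing until it overflows to inf.
    return L * 2.0

def run_minor_iterations(L, satisfied, max_iter=2000):
    """Return (L, success). Stops when `satisfied` holds, or gives up
    once L can no longer increase (it has reached infinity)."""
    for _ in range(max_iter):
        if satisfied(L):
            return L, True
        new_L = step_L(L)
        # The fix under discussion: if the updated L equals the old L
        # (both infinite), L cannot increase any more -- stop looping
        # instead of spinning forever.
        if new_L == L:
            return L, False
        L = new_L
    return L, False
```

With an Equal? comparison wired as the continue condition, the loop would exit on the very first pass (new_L differs from L); with Not Equal?, it runs until L saturates, which matches the intent described above.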
In my case, your solution led to parameters outside the constraints.
An additional comment:
I have the feeling that this error occurs when you start the optimization too close to the optimum.