> The L-M algorithm requires first partial derivatives, and no other
> algorithm is available in the generalized least-squares part of GSL.
Yep. Shame on me for not RTFMing, but the page labeled "Minimization
Algorithms without Derivatives" clearly explains that there are no
derivative-less solvers available at this time. What confused me was
that the framework for derivative-less solvers is in place, but there
are no solvers to go with it. For instance, the function
gsl_multifit_fsolver_alloc() exists.
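For anyone else who finds this thread: to make the derivative
requirement concrete, here is a minimal sketch of the
gsl_multifit_fdfsolver path. The model (fitting y ~ a*exp(b*t)), the
data layout, and the tolerances are illustrative assumptions of mine;
the point is that the interface makes you supply both the residuals
and their Jacobian.

#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_multifit_nlin.h>

struct data { size_t m; const double *t; const double *y; };

/* Residuals F_i(a,b) = a*exp(b*t_i) - y_i. */
static int
model_f(const gsl_vector *x, void *params, gsl_vector *f)
{
  struct data *d = (struct data *) params;
  double a = gsl_vector_get(x, 0), b = gsl_vector_get(x, 1);
  size_t i;
  for (i = 0; i < d->m; i++)
    gsl_vector_set(f, i, a * exp(b * d->t[i]) - d->y[i]);
  return GSL_SUCCESS;
}

/* Jacobian J_ij = dF_i/dx_j -- the derivatives L-M insists on. */
static int
model_df(const gsl_vector *x, void *params, gsl_matrix *J)
{
  struct data *d = (struct data *) params;
  double a = gsl_vector_get(x, 0), b = gsl_vector_get(x, 1);
  size_t i;
  for (i = 0; i < d->m; i++) {
    double e = exp(b * d->t[i]);
    gsl_matrix_set(J, i, 0, e);               /* dF_i/da */
    gsl_matrix_set(J, i, 1, a * d->t[i] * e); /* dF_i/db */
  }
  return GSL_SUCCESS;
}

/* Optional combined evaluation. */
static int
model_fdf(const gsl_vector *x, void *params, gsl_vector *f, gsl_matrix *J)
{
  model_f(x, params, f);
  model_df(x, params, J);
  return GSL_SUCCESS;
}

/* Drive the solver; x0 holds the initial guess and, on return, the fit. */
int
fit_lm(struct data *d, gsl_vector *x0)
{
  gsl_multifit_function_fdf fdf = { model_f, model_df, model_fdf,
                                    d->m, 2, d };
  gsl_multifit_fdfsolver *s =
    gsl_multifit_fdfsolver_alloc(gsl_multifit_fdfsolver_lmsder, d->m, 2);
  gsl_multifit_fdfsolver_set(s, &fdf, x0);

  int status;
  size_t iter = 0;
  do {
    status = gsl_multifit_fdfsolver_iterate(s);
    if (status)
      break;                                   /* solver cannot progress */
    status = gsl_multifit_test_delta(s->dx, s->x, 1e-8, 1e-8);
  } while (status == GSL_CONTINUE && ++iter < 500);

  gsl_vector_memcpy(x0, s->x);
  gsl_multifit_fdfsolver_free(s);
  return status;
}

(gsl_multifit_fdfsolver_lmsder is the scaled L-M solver;
gsl_multifit_fdfsolver_lmder is the unscaled variant.)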
> If you can't provide derivatives, you might want to consider using
> the GSL implementation of the Simplex method:
Interesting. That seems much better suited to my original line of
thinking. I may try both.
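For reference, here is a sketch of what that Simplex route might look
like with the gsl_multimin machinery (the nmsimplex minimizer type).
The scalar objective (the RMS of the residuals), the hypothetical
eval_residuals() helper, the step sizes, and the stopping tolerance
are all assumptions of mine, not anything from the manual.

#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multimin.h>

/* Hypothetical helper: fills f[0..m-1] with the residuals F_i at x. */
extern void eval_residuals(const gsl_vector *x, void *data, double *f,
                           size_t m);

struct rms_ctx { size_t m; void *data; double *scratch; };

/* Scalar objective: the RMS of the m residuals. */
static double
rms_error(const gsl_vector *x, void *params)
{
  struct rms_ctx *c = (struct rms_ctx *) params;
  double ss = 0.0;
  size_t i;
  eval_residuals(x, c->data, c->scratch, c->m);
  for (i = 0; i < c->m; i++)
    ss += c->scratch[i] * c->scratch[i];
  return sqrt(ss / c->m);
}

/* Drive the minimizer over p parameters; x0 holds guess, then fit. */
int
fit_simplex(struct rms_ctx *c, gsl_vector *x0, size_t p)
{
  gsl_multimin_function F = { rms_error, p, c };

  gsl_vector *step = gsl_vector_alloc(p);      /* initial simplex extent */
  gsl_vector_set_all(step, 0.1);

  gsl_multimin_fminimizer *s =
    gsl_multimin_fminimizer_alloc(gsl_multimin_fminimizer_nmsimplex, p);
  gsl_multimin_fminimizer_set(s, &F, x0, step);

  int status;
  size_t iter = 0;
  do {
    status = gsl_multimin_fminimizer_iterate(s);
    if (status)
      break;
    /* Stop once the simplex has contracted below an absolute size. */
    status = gsl_multimin_test_size(gsl_multimin_fminimizer_size(s), 1e-6);
  } while (status == GSL_CONTINUE && ++iter < 1000);

  gsl_vector_memcpy(x0, s->x);
  gsl_multimin_fminimizer_free(s);
  gsl_vector_free(step);
  return status;
}

A nice property of this route is that the objective can be any scalar
error measure at all, including the RMS I ask about below, since the
minimizer only ever sees function values, never derivatives.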
One more question: looking over the "Search Stopping Parameters" page,
I'm not entirely clear on the meaning of the epsabs and epsrel values
that gsl_multifit_test_delta() accepts. If after j iterations my
system is:
F_1(a_j, b_j, c_j) = error_1_j
...
F_i(a_j, b_j, c_j) = error_i_j
...
F_m(a_j, b_j, c_j) = error_m_j
Then after the next iteration it will be:
F_1(a_k, b_k, c_k) = error_1_k
...
F_i(a_k, b_k, c_k) = error_i_k
...
F_m(a_k, b_k, c_k) = error_m_k
where k = j + 1 and each parameter (a, b, c) has been perturbed
slightly in the direction of the best-fit solution.
What I'm interested in minimizing is the root mean squared error of
the set of m error values. Is that what the docs refer to when they
use the term "absolute error", or is it some other measure?
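Partially answering my own question: if I'm reading the manual right,
gsl_multifit_test_delta() compares the last step dx against the
parameters themselves, returning GSL_SUCCESS when
|dx_i| < epsabs + epsrel * |x_i| for every component, so it would be
testing convergence of (a, b, c) rather than of the residuals. Either
way, the RMS I care about is cheap to monitor directly from the solver
state; a minimal sketch, assuming the fdfsolver exposes the current
residual vector as s->f (it does in the versions I've looked at):

#include <math.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multifit_nlin.h>

/* RMS of the current residual vector s->f:
   ||f||_2 = sqrt(sum_i error_i^2), so RMS = ||f||_2 / sqrt(m). */
static double
current_rms(const gsl_multifit_fdfsolver *s, size_t m)
{
  return gsl_blas_dnrm2(s->f) / sqrt((double) m);
}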