I commonly work with problems in which a maximum of a non-linear user-defined function F(x) must be found, where x denotes either a scalar or a vector of function inputs and no analytical expression for the gradient exists. I usually write my own routines for this purpose, but I was wondering whether a better option is available through the MKL. I have not been able to find any reference beyond the "non-linear least squares" procedure, which requires the problem to be twice differentiable. Any advice would be greatly appreciated.
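For reference, below is a minimal sketch of the kind of hand-rolled derivative-free routine described above: a compass (coordinate) search that maximizes F using function values only. The objective, dimension, starting point, and tolerance are all placeholders chosen for illustration; MKL is not involved.

```c
/* Minimal derivative-free maximizer: compass (coordinate) search.
 * No gradient is ever evaluated; only function values are used.
 * The objective f() is a placeholder with a known maximum at (1, 2). */
#include <stdio.h>

#define N 2  /* number of variables */

static double f(const double *x)
{
    double dx = x[0] - 1.0, dy = x[1] - 2.0;
    return -(dx * dx + dy * dy);
}

int main(void)
{
    double x[N] = {0.0, 0.0};   /* starting point (placeholder) */
    double step = 1.0;          /* current step length */
    double fx = f(x);

    while (step > 1e-8) {
        int improved = 0;
        for (int i = 0; i < N; ++i) {
            for (int s = -1; s <= 1; s += 2) {   /* try +/- step in x[i] */
                double trial[N] = {x[0], x[1]};
                trial[i] += s * step;
                double ft = f(trial);
                if (ft > fx) {                   /* accept improving move */
                    x[i] = trial[i];
                    fx = ft;
                    improved = 1;
                }
            }
        }
        if (!improved)
            step *= 0.5;   /* no move helped: shrink the step */
    }
    printf("maximum near x = (%g, %g), F = %g\n", x[0], x[1], fx);
    return 0;
}
```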
A common class of nonlinear least squares problems has the attribute that the final value of the sum of the squares of the functions is "small" in some relevant sense. That is, the data being "fitted" are correctly modelled by the set of functions being used.
For nonlinear optimization where the final function norm is not "small", the requirement of second-order differentiability is not absolute, because the computational algorithms gradually build up approximations to the derivatives from function values alone.
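To illustrate that point, here is a minimal C sketch (not MKL code; the test function, dimension, and step size h are placeholders) of a forward-difference gradient approximation, the basic building block such algorithms use to construct derivative information from function values. MKL follows the same idea for its trust-region least-squares solver, which can be paired with the djacobi routine to approximate the Jacobian numerically.

```c
/* Forward-difference gradient approximation: g[i] ~ (F(x + h*e_i) - F(x)) / h.
 * Illustrates how derivatives can be approximated from function values
 * alone, without an analytical gradient. Objective is a placeholder. */
#include <math.h>
#include <stdio.h>

static double F(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += x[i] * sin(x[i]);   /* arbitrary smooth test function */
    return s;
}

static void grad_fd(double *g, double *x, int n, double h)
{
    double f0 = F(x, n);
    for (int i = 0; i < n; ++i) {
        double xi = x[i];
        x[i] = xi + h;                 /* perturb one coordinate */
        g[i] = (F(x, n) - f0) / h;     /* difference quotient */
        x[i] = xi;                     /* restore it */
    }
}

int main(void)
{
    double x[2] = {0.5, 1.0}, g[2];
    grad_fd(g, x, 2, 1e-6);
    /* Exact gradient of x*sin(x) is sin(x) + x*cos(x); compare. */
    for (int i = 0; i < 2; ++i)
        printf("g[%d] = %.6f (exact %.6f)\n", i, g[i],
               sin(x[i]) + x[i] * cos(x[i]));
    return 0;
}
```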
See Prof. Mittelmann's Web page at http://plato.asu.edu/sub/nlounres.html for useful links.