
1 Solution



5 Replies


Typically, nonlinear least-squares routines avoid the expensive calculation of the Hessian. This is one of the reasons why we should not apply a general multivariate optimization routine to a least-squares problem.
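To illustrate the point, here is a minimal Gauss-Newton sketch (a hypothetical fitting problem, not MKL code): each step uses only the residual vector and the Jacobian, and the Hessian, which would require second derivatives of the model, is never computed.

```python
import numpy as np

# Hypothetical example: fit y = a * exp(b * t) to data with Gauss-Newton.
# Only the residual r(p) and Jacobian J(p) are needed; no Hessian appears.

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.0, 1.6, 2.7, 4.5, 7.4])   # roughly exp(t)

def residual(p):
    a, b = p
    return a * np.exp(b * t) - y

def jacobian(p):
    a, b = p
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])  # d r/d a, d r/d b

p = np.array([1.2, 0.8])                   # starting guess near the solution
for _ in range(20):
    r = residual(p)
    J = jacobian(p)
    # Gauss-Newton step: solve the linearized problem J s = -r
    s, *_ = np.linalg.lstsq(J, -r, rcond=None)
    p = p + s
    if np.linalg.norm(s) < 1e-10:
        break

print(p)  # parameters close to (1, 1) for this data
```

A general-purpose optimizer applied to the sum of squares would, by contrast, need Hessian information (exact or approximated) to get comparable local convergence, which is exactly the expense that dedicated least-squares routines avoid.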




*> if it is simply using J^T J for the Hessian at every step of the iteration, it's not even worth{y} of considering*

That statement is probably based on a literal interpretation of a mathematical description of what is done in the MKL routines.

Typically, instead of forming the normal equations

**J**^{T}(x_{k}) **J**(x_{k}) **s**_{k} = -**J**^{T}(x_{k}) **r**(x_{k})

and solving them, as the compact mathematical notation in algorithm descriptions may suggest, the overdetermined equations

**J**(x_{k}) **s**_{k} = -**r**(x_{k})

are solved using orthogonal factorization. A similar situation: we may write the solution of (n linear equations in n unknowns) **A x = b** as **x = A**^{-1}**b**, but in software the inverse is never formed and used this way.
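The two formulations can be checked numerically. This sketch (random test data, not MKL code) solves the overdetermined system J s = -r once by QR factorization and once via the explicitly formed normal equations; the steps agree, but the QR route never forms J^T J, whose condition number is the square of that of J.

```python
import numpy as np

# Gauss-Newton step for J s = -r at one iterate, two ways.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 3))   # stand-in Jacobian (m >> n)
r = rng.standard_normal(50)        # stand-in residual vector

# Orthogonal factorization: J = Q R, then solve R s = -Q^T r
Q, R = np.linalg.qr(J)
s_qr = np.linalg.solve(R, -Q.T @ r)

# Normal equations (formed here only for comparison; avoided in practice)
s_ne = np.linalg.solve(J.T @ J, -J.T @ r)

print(np.allclose(s_qr, s_ne))     # True: same step, different route

# cond(J^T J) = cond(J)^2, which is why orthogonal factorization is preferred
print(np.linalg.cond(J.T @ J), np.linalg.cond(J) ** 2)
```

For a well-conditioned J the difference is invisible, but when J is nearly rank-deficient the squared condition number of J^T J can destroy the accuracy of the normal-equations step while the QR-based step remains usable.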
