
1. Digamma and polygamma functions. These come up when working with the log densities of probability distributions that involve beta and gamma functions. The digamma function is the first derivative of the log-gamma function, and that derivative is helpful when computing the mode of certain distributions. These functions are available in GSL, but not in MKL.

2. Symmetric rank-1 and rank-k updates for the Cholesky decomposition of a matrix. These would be analogous to the _syr and _syrk BLAS routines, but would take the Cholesky factor of the A matrix as input and return the Cholesky factor of the updated A as output. Why is this useful? The fastest way to compute the density of, or draw from, a multivariate normal distribution is to use the Cholesky factor of the precision matrix to convert to/from a standard MVN. When this matrix changes repeatedly, I have to apply the update and then recompute the Cholesky decomposition over and over, which seems inefficient. I know there are algorithms that can update the factor directly, but it would be quite nice if MKL included that functionality.

3. Trust-region algorithms suited to general nonlinear optimization, as opposed to nonlinear least squares problems. All that would be necessary here is some efficient code to solve the trust-region subproblem, given a current estimate of the gradient, the Hessian, and the trust radius. An example would be the Steihaug conjugate gradient algorithm.
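For item 1, the digamma function can be approximated in a few lines with the standard recurrence-plus-asymptotic-series approach. The sketch below is a pure-Python illustration (the function name and the shift threshold are my own choices, not any MKL or GSL API):

```python
import math

def digamma(x):
    """Approximate the digamma function psi(x) = d/dx log Gamma(x).

    Uses the recurrence psi(x) = psi(x + 1) - 1/x to shift the
    argument above 6, then a short asymptotic (Bernoulli-number)
    series, accurate to roughly 1e-8 in this range.
    """
    assert x > 0.0
    result = 0.0
    # Shift x upward until the asymptotic series is accurate.
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    # psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)
    result += math.log(x) - 0.5 / x \
        - inv2 * (1.0 / 12.0 - inv2 * (1.0 / 120.0 - inv2 / 252.0))
    return result
```

The same shift-and-series idea extends to the higher polygamma functions, with the series coefficients adjusted accordingly.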
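For item 2, the rank-1 case of the requested update is a classic O(n^2) sweep of Givens-like rotations down the factor (compare MATLAB's cholupdate or LINPACK's dchud). A minimal pure-Python sketch, assuming a lower-triangular factor stored as a list of lists:

```python
import math

def chol_update(L, x):
    """Rank-1 update of a Cholesky factor: given lower-triangular L
    with L L^T = A, overwrite L so that L L^T = A + x x^T.
    Costs O(n^2), versus O(n^3) for refactorizing from scratch.
    """
    n = len(L)
    x = list(x)  # work on a copy; the sweep overwrites it
    for k in range(n):
        r = math.hypot(L[k][k], x[k])
        c, s = r / L[k][k], x[k] / L[k][k]
        L[k][k] = r
        for i in range(k + 1, n):
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]
    return L
```

A rank-k update can be applied as k successive rank-1 sweeps; the downdate A - x x^T uses hyperbolic rotations and needs more numerical care.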
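And for item 3, the requested subproblem solver is small enough to sketch. The following is a pure-Python illustration of Steihaug's truncated conjugate-gradient method (names and tolerances are my own; an optimized version would presumably work on BLAS-backed vectors and accept a preconditioner):

```python
import math

def steihaug_cg(hessvec, g, delta, tol=1e-8, max_iter=100):
    """Approximately solve the trust-region subproblem
        min_p  g^T p + 0.5 p^T H p   subject to  ||p|| <= delta
    by Steihaug's truncated CG.  `hessvec(v)` returns the
    Hessian-vector product H v, so H is never formed explicitly.
    """
    n = len(g)
    p = [0.0] * n
    r = list(g)               # residual: H p + g
    d = [-ri for ri in r]     # search direction
    rr = sum(ri * ri for ri in r)
    if math.sqrt(rr) < tol:
        return p
    for _ in range(max_iter):
        Hd = hessvec(d)
        dHd = sum(di * hi for di, hi in zip(d, Hd))
        if dHd <= 0.0:
            # Negative curvature: follow d to the trust-region boundary.
            return _to_boundary(p, d, delta)
        alpha = rr / dHd
        p_next = [pi + alpha * di for pi, di in zip(p, d)]
        if math.sqrt(sum(pi * pi for pi in p_next)) >= delta:
            # Step would leave the region: stop on the boundary instead.
            return _to_boundary(p, d, delta)
        p = p_next
        r = [ri + alpha * hi for ri, hi in zip(r, Hd)]
        rr_next = sum(ri * ri for ri in r)
        if math.sqrt(rr_next) < tol:
            return p
        beta = rr_next / rr
        d = [-ri + beta * di for ri, di in zip(r, d)]
        rr = rr_next
    return p

def _to_boundary(p, d, delta):
    """Return p + tau*d with tau >= 0 chosen so ||p + tau*d|| = delta."""
    pp = sum(pi * pi for pi in p)
    pd = sum(pi * di for pi, di in zip(p, d))
    dd = sum(di * di for di in d)
    tau = (-pd + math.sqrt(pd * pd + dd * (delta * delta - pp))) / dd
    return [pi + tau * di for pi, di in zip(p, d)]
```

The two early exits (negative curvature, or the step crossing the boundary) are exactly what makes the method usable when the Hessian is indefinite.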

Please let me know if requests like this should go elsewhere, or if there are existing solutions to these problems that I don't know about. But it seems to me that these three things would be of general interest, and could take advantage of optimizations for Intel architectures in a way that most users would not be able to implement on their own.

Thanks,

Michael


3 Replies


Could you describe in more detail why the current version of the trust-region solver in MKL is not convenient for you?

With best regards,

Alexander Kalinkin


That's fine. This is the right place to send us your feedback and requests regarding MKL.

Regarding your requests:

Item 1: we already have a similar request on file, and we have added your notes to it.

Item 2: a new feature request has been submitted to our internal database, and we will get back to you as soon as this functionality becomes available.

Item 3: Alexander has already asked about this above. Could you please give us more details?

--Gennady

In my ideal dream world, MKL would include a nonlinear optimizer that works similarly to the ones in R or MATLAB: the user supplies a pointer to a function that evaluates the objective and its gradient, and the MKL routine does the rest (including quasi-Newton Hessian approximations). A more flexible approach might mirror how MKL handles NLS now: the user calls separate routines for initialization, iteration, convergence checking, etc.

Why does this functionality need to be in MKL? Many available nonlinear optimizers use line search methods (as in R and GSL). However, trust-region methods tend to be more robust when Hessians are nearly singular, and in my problems, line search methods always seem to crash. The catch is that solving the trust-region subproblem can be hard to implement efficiently. Also, most existing quasi-Newton codes use BFGS updates, which always produce a positive-definite Hessian approximation, even when the true Hessian isn't positive definite. SR1 updating (which is why I wanted the rank-1 update to the Cholesky factor of a matrix) does not have this restriction. Unfortunately, I have not been able to find an existing implementation of a trust-region optimizer with SR1 updates, so I coded my own: the Steihaug conjugate-gradient trust-region method with preconditioning from Ch. 7 of Nocedal and Wright's Numerical Optimization (2006).
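The SR1 update itself is only a few lines. A pure-Python sketch (hypothetical function name; the skip rule follows the standard relative-threshold recommendation):

```python
import math

def sr1_update(B, s, y, r=1e-8):
    """Symmetric rank-1 (SR1) update of a Hessian approximation B:
        B' = B + (y - B s)(y - B s)^T / ((y - B s)^T s),
    where s is the step and y the change in gradient.  Unlike BFGS,
    B' need not be positive definite, which suits trust-region
    methods.  The update is skipped when the denominator is tiny.
    """
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    v = [yi - bsi for yi, bsi in zip(y, Bs)]          # y - B s
    denom = sum(vi * si for vi, si in zip(v, s))
    v_norm = math.sqrt(sum(vi * vi for vi in v))
    s_norm = math.sqrt(sum(si * si for si in s))
    if abs(denom) < r * s_norm * v_norm:              # skip rule
        return B
    return [[B[i][j] + v[i] * v[j] / denom for j in range(n)]
            for i in range(n)]
```

Because the denominator can have either sign, B' can capture negative curvature that BFGS would smooth away, which is precisely why SR1 pairs well with a trust-region globalization.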

So what routines might MKL want to supply? For starters: solving the trust-region subproblem efficiently (using dogleg, Steihaug, Lanczos, or other methods), and computing BFGS, SR1, and L-BFGS updates for Hessians, inverse Hessians, and the Cholesky factors of both. It would also be nice to have a straightforward routine for vectorized or parallel numerical differentiation (the user supplies a pointer to a scalar function, and MKL estimates the gradient vector at a particular point).
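That last request is easy to illustrate. A minimal central-difference sketch (serial here; the point is that the per-component evaluations are independent, which is where a library version could vectorize or thread):

```python
def numerical_gradient(f, x, h=1e-6):
    """Estimate the gradient of a scalar function f at x by central
    differences.  Each component needs two function evaluations,
    and all 2n evaluations are independent of one another.
    """
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g
```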

I could go on and on about this, but I hope this adds some more clarity.
