
## JohnNichols (New Contributor II)


11-29-2018 07:22 PM

IMSL Conversions

I stumbled across an interesting program, which is, however, based on IMSL routines.

```fortran
XTR = Transpose(X)
XTX = matmul(XTR, X)
XTXinv = .i.XTX
Bhat = XTXinv .xt. X .x. Y
call make_vec(Bhat, Q, P, betaHat)
SS = (Y - (X .x. Bhat)) .tx. (Y - (X .x. Bhat))
SSinv = .i.SS
```

The .i. operator inverts the matrix XTX, but I was wondering what the best method in MKL is for these steps. In the test case, XTX is a 2-by-2 matrix with only nonzero diagonal entries. I tried ?getri but kept getting access violations.
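As an aside, LAPACK's ?getri computes the inverse from the LU factors produced by a prior call to ?getrf; a common cause of access violations is skipping that factorization step or passing an undersized work array. To pin down the algebra the snippet implements, here is a minimal NumPy sketch (the data is made up for illustration; the variable names mirror the IMSL code):

```python
import numpy as np

# Made-up test problem: X is a 4x2 design matrix, Y the observations.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
Y = np.array([2.0, 3.9, 6.1, 8.0])

XTX = X.T @ X                    # XTX = matmul(XTR, X)
XTXinv = np.linalg.inv(XTX)      # the .i. operator
betaHat = XTXinv @ X.T @ Y       # Bhat = XTXinv .xt. X .x. Y
resid = Y - X @ betaHat
SS = resid @ resid               # SS = (Y - X.x.Bhat) .tx. (Y - X.x.Bhat)
# betaHat ≈ [-0.05, 2.02], SS ≈ 0.018 for this data
```

This is ordinary least squares via the normal equations, which is exactly what the IMSL operators are spelling out.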

Thanks

John

**4 Replies**

## mecej4 (Black Belt)


11-30-2018 04:38 AM

Does your question relate to fitting a model to data and then obtaining the covariance matrix? There is a simple solution in LAPACK/MKL for the first part: **?gels()**. For the follow-up step of computing the covariance matrix with LAPACK routines in MKL, see http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg00367.html .
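For reference, ?gels solves the least-squares problem min ‖Xb − y‖₂ directly via a QR factorization, so inv(X'X) is never formed explicitly. A minimal sketch of the same computation in NumPy (whose lstsq wraps LAPACK's ?gelsd; the data here is made up for illustration):

```python
import numpy as np

# Same made-up test problem as above: fit y ≈ b0 + b1*x.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([2.0, 3.9, 6.1, 8.0])

# Solves min ||X b - y||_2 without forming X'X, so it is
# better conditioned than the normal-equations route.
beta, residual_ss, rank, sv = np.linalg.lstsq(X, y, rcond=None)
# beta ≈ [-0.05, 2.02]
```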

An alternative is to use the VSL routines in MKL, which were never part of LAPACK; see https://software.intel.com/en-us/articles/overview-of-summary-statistics-ss-in-intel-mkl-v103 .

The MKL forum ( https://software.intel.com/en-us/forums/intel-math-kernel-library ) is a better place for questions of this type.

## JohnNichols (New Contributor II)


11-30-2018 07:01 AM

Dear mecej4:

Thank you for your response. I forgot about the forum -- old age is catching up.

Yes, I am trying to work out what the algorithm in the code actually is. It is very messy code, and the IMSL routines make it an interesting challenge.

Why did you change your picture? Although this one is quite nice.

John

## jimdempseyatthecove (Black Belt)


11-30-2018 07:20 AM

>>*Why did you change your picture, although this one is quite nice.*

Perhaps to show Schrödinger's cat, when out of the box, is alive and well.

Jim Dempsey

## avinashs (New Contributor I)


11-30-2018 12:29 PM

In general, a majority of the numerical routines for linear algebra in IMSL can be replaced by the LAPACK routines in MKL. In your code, the recommended procedure would be to use the routines under orthogonal decomposition. See "LAPACK Least Squares and Eigenvalue Problem Routines" and "Orthogonal Factorizations: LAPACK Computational Routines".

The first step is to decompose X = QR where Q is orthogonal and R is upper triangular. Use DGEQRF or DGEQPF. Then X'X = (QR)'(QR) = R'Q'QR = R'R since Q'Q = I. Then inv(R'R) = inv(R)*inv(R'). Hence, the problem reduces to finding the inverse of an upper triangular matrix, which is simply a matter of backsubstitution and for which MKL routines are available. Almost all of the steps involving transpose of X, formation of (X'X) and its inverse or the hat matrix are thus eliminated. In certain cases, the inverse of R need not be formed either.
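The QR route above can be sketched as follows, using NumPy for brevity (numpy.linalg.qr wraps LAPACK's ?geqrf/?orgqr, and the triangular solve corresponds to ?trtrs; the data is made up for illustration):

```python
import numpy as np

# Made-up test problem: fit y ≈ b0 + b1*x.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
y = np.array([2.0, 3.9, 6.1, 8.0])

Q, R = np.linalg.qr(X)                  # X = QR, Q orthogonal, R upper triangular
betaHat = np.linalg.solve(R, Q.T @ y)   # back-substitution: R b = Q'y
Rinv = np.linalg.inv(R)                 # triangular inverse (cheap back-substitution)
XtX_inv = Rinv @ Rinv.T                 # inv(X'X) = inv(R) * inv(R)'
```

Note that X'X and its explicit inverse never appear; inv(X'X) falls out of the triangular factor R alone, which is the point of the reply above.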

The solution philosophy above follows the general rule of numerical analysis: the inverse of a matrix is rarely, if ever, needed in explicit form for practical applications.
