I stumbled across an interesting program, which is, however, based on IMSL routines.
XTR = Transpose(X)
XTX = matmul(XTR, X)
XTXinv = .i.XTX
Bhat = XTXinv.xt.X.x.Y
call make_vec(Bhat, Q, P, betaHat)
SS = (Y - (X.x.Bhat)).tx.(Y - (X.x.Bhat))
SSinv = .i.SS
The .i. operator inverts the matrix XTX, but I was wondering what the best method in MKL is for these steps. In the test case, XTX is a 2-by-2 matrix with non-zero entries only on the diagonal. I tried getri but kept getting access violations.
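For what it's worth, the usual cause of access violations with ?getri is a missing ?getrf call beforehand, a wrong leading dimension, or an undersized IPIV/workspace array. A minimal sketch of the correct factorize-then-invert sequence, shown here through SciPy's wrappers of the same LAPACK routines MKL provides (the 2-by-2 diagonal test matrix is hypothetical):

```python
import numpy as np
from scipy.linalg import lapack

# Hypothetical 2x2 test matrix like the XTX described: diagonal, non-zero entries.
xtx = np.array([[2.0, 0.0],
                [0.0, 4.0]], order='F')  # LAPACK expects column-major storage

# Step 1: LU factorization (MKL: dgetrf). In Fortran you must pass the
# correct leading dimension and an IPIV array of length min(m, n);
# a wrong LDA or a too-short IPIV is a classic cause of access violations.
lu, piv, info = lapack.dgetrf(xtx)
assert info == 0, "factorization failed"

# Step 2: invert from the LU factors (MKL: dgetri). In Fortran you must
# also supply a workspace at least as large as the lwork query reports.
xtx_inv, info = lapack.dgetri(lu, piv)
assert info == 0, "inversion failed"

print(xtx_inv)  # diag(0.5, 0.25)
```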
Does your question relate to fitting a model to data and then obtaining the covariance matrix? There is a simple solution in LAPACK/MKL for the first part: ?gels(). Regarding the follow-up calculation of the covariance matrix using the LAPACK routines in MKL, see http://icl.cs.utk.edu/lapack-forum/archives/lapack/msg00367.html .
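To illustrate what ?gels replaces: a single least-squares driver call does away with the whole transpose/X'X/inverse chain in the IMSL code. A sketch with NumPy's lstsq (which calls the related LAPACK driver _gelsd; MKL's DGELS solves the same problem via QR). The data here is made up for illustration:

```python
import numpy as np

# Hypothetical design matrix X and response Y.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
Y = np.array([1.1, 1.9, 3.2, 3.8])

# One call solves min ||X b - Y||_2 directly -- no explicit X'X,
# no explicit inverse.
beta_hat, residual_ss, rank, sing_vals = np.linalg.lstsq(X, Y, rcond=None)

# residual_ss is the sum of squared residuals (the SS in the IMSL code).
print(beta_hat)
```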
An alternative is to use the VSL routines in MKL, which were never part of LAPACK; see https://software.intel.com/en-us/articles/overview-of-summary-statistics-ss-in-intel-mkl-v103 .
The MKL forum ( https://software.intel.com/en-us/forums/intel-math-kernel-library ) is a better place for questions of this type.
Thank you for your response. I forgot about the forum -- old age is catching up.
Yes, I am trying to work out what the heck the algorithm in the code is; it is very messy code, and the IMSL routines make it an interesting challenge.
Why did you change your picture? This one is quite nice, though.
In general, most of the numerical linear-algebra routines in IMSL can be replaced by the LAPACK routines in MKL. For your code, the recommended procedure would be to use the routines for orthogonal decomposition. See "LAPACK Least Squares and Eigenvalue Problem Routines" and "Orthogonal Factorizations: LAPACK Computational Routines".
The first step is to decompose X = QR where Q is orthogonal and R is upper triangular. Use DGEQRF or DGEQPF. Then X'X = (QR)'(QR) = R'Q'QR = R'R since Q'Q = I. Then inv(R'R) = inv(R)*inv(R'). Hence, the problem reduces to finding the inverse of an upper triangular matrix, which is simply a matter of backsubstitution and for which MKL routines are available. Almost all of the steps involving transpose of X, formation of (X'X) and its inverse or the hat matrix are thus eliminated. In certain cases, the inverse of R need not be formed either.
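The steps above can be sketched as follows, using NumPy/SciPy wrappers of the corresponding LAPACK routines (the data is hypothetical; in Fortran you would call DGEQRF, a triangular solver such as DTRTRS, and DTRTRI directly):

```python
import numpy as np
from scipy.linalg import lapack

# Hypothetical data, just to exercise the algebra.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
Y = rng.standard_normal(6)

# X = QR (MKL: DGEQRF; numpy.linalg.qr wraps the same LAPACK factorization).
Q, R = np.linalg.qr(X)       # "economy" Q: 6x2, R: 2x2 upper triangular

# Least-squares solution: R b = Q'Y, solvable by back-substitution alone
# (a triangular solver such as MKL's DTRTRS suffices).
beta_hat = np.linalg.solve(R, Q.T @ Y)

# Only if inv(X'X) is genuinely required (e.g. for the covariance matrix):
# inv(X'X) = inv(R'R) = inv(R) @ inv(R)', with inv(R) from the triangular
# inversion routine (MKL: DTRTRI).
r_inv, info = lapack.dtrtri(R)   # upper triangular by default
assert info == 0
xtx_inv = r_inv @ r_inv.T
```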
The solution philosophy above follows the general rule of numerical analysis: the inverse of a matrix is rarely, if ever, needed in explicit form for practical applications.