Lately, I have run into an accuracy issue while using Intel MKL. Let me briefly summarize what I did. I am using C++ as the main language and MTL4 as the matrix library. Since MTL4 gives direct access to the internal data representation of CSR matrices through pointers, it is easy to interface with MKL and call routines from the MKL library.
I have some template functions for MKL library routines, such as
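(The original snippet was lost; the following is only an illustrative sketch of the flavor of wrapper I mean. The names are mine, and a plain loop stands in for the actual `cblas_dgemv` call so the example is self-contained.)

```cpp
#include <cstddef>
#include <vector>

// Hypothetical wrapper: y = alpha * A * x + beta * y for a row-major m x n
// matrix A. In the real code, the double specialization would forward to
// cblas_dgemv; here a plain loop stands in so the sketch compiles on its own.
template <typename T>
void gemv(std::size_t m, std::size_t n, T alpha, const T* A,
          const T* x, T beta, T* y) {
    for (std::size_t i = 0; i < m; ++i) {
        T acc = T(0);
        for (std::size_t j = 0; j < n; ++j)
            acc += A[i * n + j] * x[j];  // row i of A times x
        y[i] = alpha * acc + beta * y[i];
    }
}
```

The point of the template layer is that the solver code stays type-generic while the specializations dispatch to the matching MKL routine.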
I programmed a symmetric Lanczos solver in C++; however, the results of some orthogonalizations in this routine differ from the results of MATLAB, which is also known to use MKL internally. Is there a way to increase the numerical accuracy of the computations in MKL, especially for an operation like
t = t - Z Z^T M f
Basically this is a projection with the projector P = I - Z Z^T M,

t = P t,

where Z is a dense rectangular matrix, M is a sparse matrix, and f and t are dense vectors. I carried this out in four steps:
+ I first form Mf with sparse BLAS routines and keep the result in a variable called Mq.
+ I compute Z^T Mq with a BLAS level-2 routine, cblas_dgemv, and store the result as vec1.
+ I continue with Z vec1 in the same way and store that as vec2.
+ I finish the computation with cblas_daxpy, for t = t - alpha vec2.
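The chain of steps above can be sketched as follows. Plain loops stand in for the MKL calls (the sparse BLAS mat-vec, `cblas_dgemv`, and `cblas_daxpy`) so that the four steps are visible; all helper names are illustrative, not MKL's.

```cpp
#include <cstddef>
#include <vector>

using vec = std::vector<double>;

// Step 1: Mq = M * f, with M stored in CSR format (sparse BLAS in real code).
vec csr_mv(const std::vector<double>& val, const std::vector<int>& col,
           const std::vector<int>& rowptr, const vec& f) {
    vec out(rowptr.size() - 1, 0.0);
    for (std::size_t i = 0; i + 1 < rowptr.size(); ++i)
        for (int p = rowptr[i]; p < rowptr[i + 1]; ++p)
            out[i] += val[p] * f[col[p]];
    return out;
}

// Steps 2-3: dense mat-vec, y = Z * x or y = Z^T * x (cblas_dgemv in real
// code). Z is row-major, n x k.
vec dense_mv(const vec& Z, std::size_t n, std::size_t k,
             const vec& x, bool transpose) {
    vec y(transpose ? k : n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < k; ++j) {
            if (transpose) y[j] += Z[i * k + j] * x[i];  // Z^T * x
            else           y[i] += Z[i * k + j] * x[j];  // Z * x
        }
    return y;
}

// Step 4: t = t + alpha * x (cblas_daxpy in real code).
void axpy(double alpha, const vec& x, vec& t) {
    for (std::size_t i = 0; i < t.size(); ++i) t[i] += alpha * x[i];
}

// The whole projection t <- t - Z Z^T M f, chained exactly as in the text.
void project(const std::vector<double>& Mval, const std::vector<int>& Mcol,
             const std::vector<int>& Mrow, const vec& Z,
             std::size_t n, std::size_t k, const vec& f, vec& t) {
    vec Mq   = csr_mv(Mval, Mcol, Mrow, f);    // Mq   = M f
    vec vec1 = dense_mv(Z, n, k, Mq, true);    // vec1 = Z^T Mq
    vec vec2 = dense_mv(Z, n, k, vec1, false); // vec2 = Z vec1
    axpy(-1.0, vec2, t);                       // t    = t - vec2
}
```

Each intermediate result is rounded to double precision before the next step sees it, which is where the chained operations can lose accuracy relative to a fused or higher-precision evaluation.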
So basically I chain several matrix-vector operations.
I wrote this part out in detail because my MATLAB and MKL-backed C++ implementations start to diverge after this operation at some point in the iteration. And in the end, if I use the vectors from my C++ implementation, I cannot reproduce the results MATLAB gives. For instance, a similar operation to the one above results in, in MATLAB,
and in the C++ implementation
so the differences between the results are not negligible.
I was wondering whether I am doing something wrong, and whether there is a way to improve the accuracy of sparse (and dense) matrix-vector multiplication in MKL.