
utab

Beginner


01-02-2012 04:42 AM

61 Views

a numerical accuracy issue using MKL and MATLAB

Lately, I have run into an accuracy issue while using Intel MKL. Let me briefly summarize what I did. I have been using C++ as the main language and MTL4 as the matrix library. Since MTL4 gives direct access to the internal data representation of CSR matrices through pointers, it is easy to interface with MKL and use the routines from the MKL library.

I have some template functions for MKL library routines, such as

+ sparse matrix - dense vector multiplication

+ symmetric sparse matrix - dense vector multiplication

+ transposed dense matrix - dense matrix multiplication (A^T B)

+ dense matrix - dense matrix multiplication

I programmed a symmetric Lanczos solver in C++; however, the results of some orthogonalizations in this routine differ from the results of MATLAB, which is also known to use MKL internally. Is there a way to increase the numerical accuracy of the computations in MKL, especially for an operation like

t = t - Z Z^T M t

Basically this is a projection with the projector P = I - Z Z^T M:

t = P t,

where Z is a dense rectangular matrix, M is a sparse matrix, and f and t are dense vectors. I implemented this in four steps:

+ I first form Mf with sparse BLAS routines and keep the result in a variable called Mq

+ I compute Z^T Mq with CBLAS, namely cblas_dgemm, and store the result as vec1

+ I continue with Z vec1, in the same way, and store that as vec2

+ and I finish the computation with daxpy, for t = t - alpha * vec2

So basically I chain several matrix-vector operations.

I wrote this part in detail because my MATLAB and MKL-backed C++ implementations start to diverge after this operation at some point in the iteration. In the end, if I use the vectors from my C++ implementation, I cannot reproduce the results MATLAB gives. For instance, an operation similar to the above yields in MATLAB

9.949738746078907e+02

and in the C++ implementation

995.028

so the difference is not negligible.

I was wondering whether I am doing something wrong, and whether there is a way to improve the accuracy of sparse (and dense) matrix-vector multiplication in MKL.

Thanks in advance and best regards,

Umut


1 Reply

SergeyKostrov

Valued Contributor II


01-06-2012 06:10 AM


Quoting utab

I can not get the results that MATLAB gives

...

I was wondering if I am doing something wrong and could there be a way to improve the accuracy

...

What precision are you using in both cases? Single or Double?

In general, the results of some floating-point calculations could differ if single precision is used in one test case and double precision in the other.


For more complete information about compiler optimizations, see our Optimization Notice.