Hi,
as I'm trying to calculate eigenvalues of a very big matrix with ARPACK, I need efficient sparse matrix-vector multiplication routines. Unfortunately, at the same time I need more precision than plain double precision. So my question: do the MKL sparse BLAS matrix-vector multiplication routines (in particular mkl_*bsrgemv) support complex(16) matrices and vectors, or is there some kind of workaround for mkl_zbsrgemv to gain more precision?
Thanks,
Martin
2 Replies
For complex(16), compiling publicly available source code would perform about as well as detailed hand coding could.
Yes, MKL doesn't support quad-precision data types.