Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Sparse Blas with extended precision

boeseskimchi
Beginner
802 Views
Hi,

As I'm trying to compute the eigenvalues of a very large matrix with ARPACK, I need efficient sparse matrix-vector multiplication routines. Unfortunately, I also need more than double precision. My question: do the MKL sparse BLAS matrix-vector multiplication routines (in particular mkl_*bsrgemv) support complex(16) matrices and vectors, or is there some workaround for mkl_zbsrgemv to gain more precision?

Thanks,
Martin
2 Replies
TimP
Honored Contributor III
If you compile in complex(16), publicly available source code would perform about as well as detailed hand coding could.
Gennady_F_Intel
Moderator
Yes, MKL doesn't support quad-precision data types.