Hi,
as I'm trying to calculate the eigenvalues of a very large matrix with ARPACK, I need efficient sparse matrix-vector multiplication routines. Unfortunately, I also need more precision than double precision. My question: do the MKL Sparse BLAS matrix-vector multiplication routines (in particular mkl_*bsrgemv) support complex(16) matrices and vectors, or is there some kind of workaround for mkl_zbsrgemv to gain more precision?
Thanks,
Martin
2 Replies
For complex(16), compiling publicly available source code would perform about as well as detailed hand coding.
Yes, MKL doesn't support quad-precision data types.