Intel® oneAPI Math Kernel Library

Using FEAST for large matrices

Hazra__Dhiraj_Kumar

Hello,

I am presently working with FEAST to find eigenvalues and eigenvectors of a symmetric matrix. I need to solve an N x N problem with N ~ 10^6 - 10^8.

Now I have a few queries:

1. Since the matrix is so large, it is not possible to allocate this much storage on a desktop (mine has 8 GB of RAM). Is there any way to handle a matrix of this size?

2. The matrix is also expected to be sparse, so I plan to store it in a compressed format, which saves some memory. But the eigenvector matrix is also of dimension N x N, and I have to pre-allocate it before calling FEAST, so the compressed storage will not help much there. Is there any way to solve this problem?

3. Since the FEAST fpm array can pass the 64 iparm parameters of MKL PARDISO, and I have checked that iparm(60) enables out-of-core (disk-based) storage, can I use that in FEAST to solve this large problem? However, even in that case I believe I still have to pass the pre-allocated eigenvector array (N x N) to FEAST. Can I somehow use disk space for that as well? (A rough fragment of what I have in mind follows this list.)
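
To be concrete, this fragment is roughly what I have in mind. I am assuming here, from my reading of the documentation, that fpm(64) (fpm[63] in C) tells the Extended Eigensolver to take a user-defined PARDISO iparm array from fpm(65)-fpm(128); please correct me if the indexing is different:

    #include <mkl.h>
    #include <mkl_solvers_ee.h>

    /* Sketch only: try to enable PARDISO out-of-core (OOC) inside FEAST.
       Assumes fpm[63] (fpm(64) in Fortran) switches to a user-defined PARDISO
       iparm array and that fpm[64..127] correspond to iparm(1)..iparm(64). */
    void setup_feast_ooc(MKL_INT fpm[128])
    {
        feastinit(fpm);    /* fill fpm with the Extended Eigensolver defaults */
        fpm[63] = 1;       /* use the user-defined PARDISO iparm values below */
        fpm[64 + 59] = 2;  /* iparm(60) = 2: out-of-core PARDISO              */
    }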

My program works for moderately sized matrices (10000 x 10000).

   I would appreciate any help in this regard.

Thanks,

Dhiraj

Alexander_K_Intel2

Hi,

You are correct: the general approach to reducing the memory footprint of the internal MKL PARDISO solver is the out-of-core (OOC) algorithm (iparm(60)). However, memory still has to be allocated for the subspace matrix Q. The only way to reduce the size of Q is to divide the search interval into several subintervals, each containing a smaller number of eigenvalues (you need an estimate of that number for each subinterval). You then call the Extended Eigensolver (EE) functionality for each subinterval in a loop and collect the eigenvalues found in each one.
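
As a rough illustration only (a sketch, not tested code): the loop below uses the C interface dfeast_scsrev for a real symmetric matrix in 3-array CSR format. The subinterval bounds lo/hi and the per-interval estimates m0s are placeholders you would have to supply from your own problem; the point is that the eigenvector block x only needs n x m0 entries per call, not n x n.

    #include <stdio.h>
    #include <mkl.h>
    #include <mkl_solvers_ee.h>

    /* Sketch: sweep several search subintervals [lo[k], hi[k]] so that the
       eigenvector array x only needs n x m0 entries per call.
       a/ia/ja: symmetric matrix in one-based 3-array CSR format (upper triangle).
       m0s[k]:  estimated number of eigenvalues in subinterval k (user-supplied). */
    void solve_by_subintervals(MKL_INT n,
                               const double *a, const MKL_INT *ia, const MKL_INT *ja,
                               const double *lo, const double *hi,
                               const MKL_INT *m0s, int nintervals)
    {
        for (int k = 0; k < nintervals; ++k) {
            MKL_INT fpm[128];
            feastinit(fpm);                     /* default FEAST parameters */

            MKL_INT m0 = m0s[k];
            double *e   = mkl_malloc(m0 * sizeof(double), 64);             /* eigenvalues  */
            double *x   = mkl_malloc((size_t)n * m0 * sizeof(double), 64); /* eigenvectors */
            double *res = mkl_malloc(m0 * sizeof(double), 64);             /* residuals    */

            double epsout;
            MKL_INT loop, m, info;
            dfeast_scsrev("U", &n, a, ia, ja, fpm, &epsout, &loop,
                          &lo[k], &hi[k], &m0, e, x, &m, res, &info);

            if (info == 0)
                printf("interval %d: %lld eigenvalues found\n", k, (long long)m);
            else
                printf("interval %d: dfeast_scsrev info = %lld\n", k, (long long)info);

            /* ... copy out e[0..m-1] and the first m columns of x here ... */
            mkl_free(e);
            mkl_free(x);
            mkl_free(res);
        }
    }

The OOC setting you mention can be combined with this by setting the corresponding fpm entries before each call.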

Thanks,

Alex

Hazra__Dhiraj_Kumar

Hello Alex,

  Thanks a lot for your quick reply. I shall try your suggestion.

Thanks,

Dhiraj