Intel® oneAPI Math Kernel Library

## All eigenvalues and eigenvectors of a large sparse symmetric matrix

Beginner
278 Views

Hi Everyone,

I am sorry in advance for my noobish question. I am a physics PhD student
and basically I use Python for my math/physics problems.
But now I have a problem that requires more computing capacity,
so I intend to use Fortran, MKL, and a supercomputer.
All three of these things are very new to me :).

So my problem is:

• I have a system of 17576 atoms and a real symmetric hessian matrix which describes
the interatomic interactions and has 52728x52728 = 2780241984 elements.
• This matrix is very sparse: only 1.56% of the elements are nonzero.
• The minimum and maximum of the eigenvalues and the density of states are known.
• I need to diagonalize this matrix and find all eigenvalues and eigenvectors with high precision.

At the moment I am looking at FEAST and its implementation in MKL. However, the examples given in the solvers_eef directory do not use MPI, and I don't know how to make my diagonalization program scalable. I understand that internally ScaLAPACK and MKL PARDISO use MPI for their parallel solvers. It is also documented that high precision can be obtained if a single given interval contains fewer than 1000 eigensolutions. So my idea is to calculate the whole eigenspectrum by splitting the problem into multiple intervals, each containing fewer than 1000 eigenvalues, which can be done in a parallel fashion.
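To illustrate the interval-splitting idea, here is a toy Python sketch. The linear `cumulative_count` below is a hypothetical stand-in; in practice it would be built from the known density of states of the hessian:

```python
# Toy sketch: split the spectrum [emin, emax] into contiguous intervals
# that each hold fewer than ~1000 eigenvalues, using a cumulative
# eigenvalue count. The flat (linear) count here is a stand-in for the
# real density of states, which is assumed known.
N_TOTAL = 52728          # total number of eigenvalues of the hessian
MAX_PER_INTERVAL = 1000

def cumulative_count(x, emin, emax):
    # Hypothetical: number of eigenvalues below x for a flat DOS.
    return N_TOTAL * (x - emin) / (emax - emin)

def split_spectrum(emin, emax, max_per_interval=MAX_PER_INTERVAL):
    """Return contiguous (lo, hi) intervals covering [emin, emax],
    each expected to contain at most max_per_interval eigenvalues."""
    steps = 100000
    edges = [emin]
    target = max_per_interval
    for i in range(steps + 1):
        x = emin + i * (emax - emin) / steps
        if cumulative_count(x, emin, emax) >= target:
            edges.append(x)
            target += max_per_interval
    if edges[-1] < emax:
        edges.append(emax)
    return list(zip(edges[:-1], edges[1:]))

intervals = split_spectrum(0.0, 1.0)
# 52728 eigenvalues / 1000 per interval -> 53 intervals
```

Each resulting interval could then be handed to an independent FEAST run, so the intervals can be processed in parallel across nodes.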

So my questions are:

1. Is it OK to use FEAST for this kind of problem, or should I use something different?
2. If FEAST is suitable for this problem, how should I distribute resources?
For example, if I use 4 nodes with 32 cores each, can I divide the whole problem
into 64 intervals, giving each interval 2 MPI processes? Or is my intuition
wrong about how I should approach this problem?
3. How should I compile my program to make it optimal and scalable? I have already posted a question about this :) https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/707686
4. Suggestions about what I should read or learn are also very much appreciated, because I don't know where to start and I don't have anyone to consult.

Thank you for your attention,

Lukas

3 Replies
Employee

Hi Lukas,

Sorry for the delay. I saw your issue and have several questions about it.

1) About your hardware: you mentioned 17576 atoms and using 4 nodes with 32 cores each.

Do you mean that you have Intel® Atom™ processors, or other hardware?

2) Compiler, OS, and other environment information?

3) About the matrix: 52728x52728 = 2780241984 elements.

As I understand it, you hope to use an MPI version of FEAST, which is not supported in the current MKL version. So have you tried putting the whole matrix into FEAST directly? (If not, I would recommend trying it; if it works, the rest of the questions go away.)

Or you can try Cluster Pardiso (MPI is supported) or ScaLAPACK; both should be able to handle this, and there is example code under the MKL folder. (Please note: when installing MKL, use the customized install so that the cluster components are installed.)
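As a side note on input format: MKL's sparse routines (FEAST's dfeast_scsrev, Pardiso) take the matrix in 3-array CSR form, with one-based indices for the Fortran-style interface. A minimal pure-Python sketch of that conversion, on a toy matrix (the real hessian would come from the application):

```python
def to_csr_one_based(dense):
    """Convert a dense matrix (list of rows) to 3-array CSR format
    with 1-based indices, as expected by MKL's Fortran-style sparse
    routines (e.g. dfeast_scsrev, Pardiso)."""
    values, columns, row_ptr = [], [], [1]
    for row in dense:
        for j, x in enumerate(row):
            if x != 0.0:
                values.append(x)
                columns.append(j + 1)    # 1-based column index
        row_ptr.append(len(values) + 1)  # 1-based start of next row
    return values, columns, row_ptr

# Toy symmetric matrix; here the full matrix is stored (uplo = 'F').
dense = [[4.0, 1.0, 0.0],
         [1.0, 3.0, 0.0],
         [0.0, 0.0, 2.0]]
values, columns, row_ptr = to_csr_one_based(dense)
```

For a symmetric problem, the routines can alternatively take just the upper or lower triangle, selected by the `uplo` argument.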

Best Regards,

Ying

Black Belt

Ying H. (Intel) wrote:

1) About your hardware: you mentioned 17576 atoms and using 4 nodes with 32 cores each.

Do you mean that you have Intel® Atom™ processors, or other hardware?

No, he means atoms, which Democritus and Dalton speculated about and theorized as the basic building block of everything in the universe. The atoms that you and I and our computers are made of, and we would have more of them if Intel could refrain from stripping the electrons out of a large fraction of those atoms :) .

Sorry, I could not pass up the opportunity! No offense intended!

Employee

Hi,

If you want to solve the full eigenproblem and find all eigenvectors, then it's better to use dense ScaLAPACK. The Extended Eigensolver can support distributed computation through the RCI interface. However, the resulting eigenvector matrix is dense, so you will not get any benefit from sparse computations.
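For a sense of what the dense route computes, the single-node analog of ScaLAPACK's dense symmetric eigensolvers (p?syevd) is LAPACK's ?syevd, which is what `numpy.linalg.eigh` calls under the hood. A toy sketch on a small random symmetric matrix (the real problem would be the 52728x52728 hessian, distributed across nodes):

```python
import numpy as np

# Single-node analog of a distributed ScaLAPACK p?syevd call:
# full eigendecomposition of a dense real symmetric matrix.
rng = np.random.default_rng(0)
n = 200
a = rng.standard_normal((n, n))
a = (a + a.T) / 2.0              # symmetrize

# All eigenvalues (ascending) and the dense matrix of eigenvectors.
w, v = np.linalg.eigh(a)

# Check the decomposition: A v = v diag(w)
residual = np.linalg.norm(a @ v - v @ np.diag(w))
```

Note that `v` is a full dense n x n array; for n = 52728 in double precision that is about 22 GB, which is why the eigenvector output erases any sparsity advantage.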

Thanks,

Alex