
Hi Everyone,

In advance, I am sorry for my noobish question. I am a physics PhD student, and I mostly use Python for my math/physics problems. But now I have a problem that requires more computing capacity, so I intend to use Fortran, MKL, and a supercomputer. All three of these are very new to me :).

So my problem is:

- I have a system of 17576 atoms and a real **symmetric Hessian matrix** which describes the interatomic interactions and has 52728 x 52728 = 2780241984 elements.
- This matrix is very **sparse**: only 1.56% of the elements are nonzero.
- The minimum and maximum eigenvalues and the density of states are known.
- I need to diagonalise this matrix and find **all eigenvalues and eigenvectors** with high precision.

At the moment I am looking at FEAST and its implementation in MKL. However, the examples given in the solvers_eef directory do not use MPI, and I don't know how to make my diagonalization script scalable. I understand that internally ScaLAPACK and MKL PARDISO use MPI for their parallel solvers. It is also written that high precision can be obtained if a single interval contains fewer than 1000 eigensolutions. So my idea is to calculate the whole eigenspectrum by splitting the problem into multiple intervals, each containing fewer than 1000 eigenvalues, and this can be done in a parallel fashion.
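The interval-splitting idea can be sketched in Python with NumPy on a toy symmetric matrix. This is only an illustration of the partitioning logic, not FEAST itself; the matrix size, the per-interval limit, and the cut tolerance below are all made-up values for demonstration:

```python
import numpy as np

# Toy stand-in for the 52728 x 52728 Hessian: a small random symmetric matrix.
rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2.0  # symmetrize

# Reference spectrum (in the real problem, e_min and e_max are already known).
evals = np.linalg.eigvalsh(A)  # ascending order
e_min, e_max = evals[0], evals[-1]

# Split [e_min, e_max] into intervals holding at most max_per_interval
# eigenvalues each (FEAST wants fewer than ~1000 per search interval).
max_per_interval = 25
edges = [e_min]
count = 0
for ev in evals:
    count += 1
    if count == max_per_interval and ev < e_max:
        edges.append(ev + 1e-9)  # cut just above this eigenvalue
        count = 0
edges.append(e_max + 1e-9)

# Each (lo, hi) pair would become one independent FEAST call -> one MPI task.
intervals = list(zip(edges[:-1], edges[1:]))
counts = [np.count_nonzero((evals >= lo) & (evals < hi)) for lo, hi in intervals]
assert sum(counts) == n  # every eigenvalue lands in exactly one interval
```

In practice the cut points would come from the known density of states rather than from a precomputed spectrum, but the bookkeeping (half-open intervals that together cover [e_min, e_max], each below the per-interval limit) is the same.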

So my questions are:

- Is it OK to use FEAST for this kind of problem, or should I use something different?
- If FEAST is suitable for this problem, how should I distribute resources? For example, if I use 4 nodes, each with 32 cores, can I divide the whole problem into 64 intervals, giving each interval 2 MPI processes? Or is my intuition about how to approach this problem wrong?
- How should I compile my program to make it optimal and scalable? I have already posted a question about this :) https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/707686
- Suggestions about what I should read or learn are also very much appreciated, because I don't know where to start and I don't have anyone to consult.

Thank you for your attention,

Lukas

Hi Lukas,

Sorry for the delay. I saw your issue and have several questions about it.

1) About your hardware: you mentioned **17576 atoms** and **use 4 nodes each with 32 cores**. Do you mean you have Intel® Atom™ processors, or other hardware?

2) Compiler, OS, and other environment information?

3) About the matrix with 52728 x 52728 = 2780241984 elements:

As I understand it, you hope to use an MPI version of FEAST, which is not supported in the current MKL version. So, have you tried putting the whole matrix into FEAST directly? (If not, I recommend you try it; if it works, then all of the remaining questions go away.)

Alternatively, you can try Cluster PARDISO (MPI is supported) or ScaLAPACK; both should be able to handle this, and there is example code under the MKL folder. (Please note: when installing MKL, use the customized install so that the cluster components are installed.)
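Before moving to MPI, a single-node sanity check with LAPACK's dense symmetric eigensolver (which MKL supplies when NumPy is linked against it) can be sketched in Python. The size here is a toy value, not the real 52728-dimensional problem:

```python
import numpy as np

# Single-node baseline: LAPACK's dense symmetric eigensolver.
# Toy size for illustration; the real matrix is 52728 x 52728.
rng = np.random.default_rng(1)
n = 300
H = rng.standard_normal((n, n))
H = (H + H.T) / 2.0  # symmetrize, as a Hessian would be

# All eigenvalues (ascending) and the full dense eigenvector matrix.
w, v = np.linalg.eigh(H)

# Verify the decomposition: H v_i = w_i v_i for every column i.
residual = np.linalg.norm(H @ v - v * w)
assert residual < 1e-8
```

If this approach runs out of memory or time at full size, that is the signal to move to ScaLAPACK or Cluster PARDISO for the distributed versions of the same computation.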

Best Regards,

Ying

Ying H. (Intel) wrote:

1) about your hardware: you mentioned 17576 atoms and use 4 nodes each with 32 cores.

Do you mean you have Intel® Atom™ Processors? or other Hardware?

No, he means atoms, which Democritus and Dalton speculated about and theorized as the basic building block of everything in the universe. The atoms that you and I and our computers are made of, and we would have more of them if Intel could refrain from stripping the electrons out of a large fraction of those atoms :) .

Sorry, I could not pass up the opportunity! No offense intended!

Hi,

If you want to solve the full eigenproblem and find all eigenvectors, then it's better to use dense ScaLAPACK. The Extended Eigensolver can support distributed computation through its RCI interface. However, the resulting eigenvector matrix is dense, so you will not get any benefit from sparse computations.
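This point can be made concrete with quick storage arithmetic (back-of-the-envelope figures, assuming double precision, not output from MKL):

```python
# Rough storage estimates for the 52728 x 52728 problem.
n = 52728
nnz_fraction = 0.0156  # 1.56% nonzero, as stated in the original post

dense_bytes = n * n * 8                              # full dense matrix
sparse_bytes = int(n * n * nnz_fraction) * (8 + 4)   # CSR-style: value + column index
eigvec_bytes = n * n * 8                             # ALL eigenvectors form a dense n x n matrix

gib = 1024 ** 3
print(f"dense matrix : {dense_bytes / gib:.1f} GiB")
print(f"sparse (CSR) : {sparse_bytes / gib:.1f} GiB")
print(f"eigenvectors : {eigvec_bytes / gib:.1f} GiB")
```

The sparse input fits easily in memory, but the full set of eigenvectors is as large as the dense matrix itself (around 20 GiB), which is why sparsity stops helping once all eigenvectors are requested.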

Thanks,

Alex
