Beginner

FEAST solver missing eigenvalues

Jump to solution

I am solving a sparse generalized eigenvalue problem between emin and emax.  When I restrict the search to a subset of this range, I find additional eigenvalues that the wider search missed.

Attached is the test program and matrices.  The test program output is

Intel(R) Math Kernel Library Version 11.3.1 Product Build 20151021 for 32-bit applications

Searching between -10 and -0.1
Found 14 eigenvalues:
-0.834635 (1.19813)
-0.656935 (1.52222)
-0.464994 (2.15057)
-0.458775 (2.17972)
-0.436459 (2.29116)
-0.429403 (2.32881)
-0.394109 (2.53737)
-0.391432 (2.55473)
-0.356238 (2.80711)
-0.349315 (2.86275)
-0.304226 (3.28703)
-0.298108 (3.35449)
-0.266397 (3.75379)
-0.24941 (4.00946)
eps=-5.89044

Searching between -10 and -0.4
Found 12 eigenvalues:
-1.83242 (0.545725)
-1.74229 (0.573959)
-1.39579 (0.71644)
-1.07728 (0.92826)
-0.995946 (1.00407)
-0.834693 (1.19805)
-0.657065 (1.52192)
-0.526781 (1.89832)
-0.465162 (2.14979)
-0.458787 (2.17966)
-0.436485 (2.29103)
-0.429825 (2.32653)
eps=-10.8525

The number in brackets is −1/λ, as I am solving the inverse of the actual problem.  For this application I am most interested in the lowest result, so returning 1.19 rather than 0.54 is a substantial error.

Matlab and the textbook solution both agree with the second set.  Is there anything in MKL that can give me confidence I have actually found the lowest solution?  My concern is that iterating through successively smaller search intervals will cost performance, and could still miss solutions if I cannot guess the right interval size.
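For reference, the interval count can be cross-checked against a dense solve on a downsized model.  A minimal numpy sketch of that cross-check (the matrices here are random stand-ins, not the attached test matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Random stand-in matrices -- not the attached test matrices.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # B is SPD

# Reduce A x = lambda B x to standard form: with B = L L^T, the matrix
# C = inv(L) A inv(L)^T has the same eigenvalues as the generalized problem.
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A.T).T)
lam = np.linalg.eigvalsh(C)          # all eigenvalues, ascending

emin, emax = -10.0, -0.1
inside = lam[(lam > emin) & (lam < emax)]
print(len(inside), "eigenvalues in", (emin, emax))
```

The dense count gives a ground truth to compare the FEAST result against, at least for problem sizes where a dense solve is affordable.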


Moderator

Alan, thanks for the report. I see similar results with the latest version, 11.3 update 2 (released today). We will investigate the problem and keep you updated with the results.

Employee

Hi Alan,

I took a look at your issue, and the problem I see is that almost all eigenvalues are clustered near zero, and the given eigenvalue problem has more than 20 eigenvalues within the search interval. Indeed, if we increase the accuracy by adding contour points (fpm[1] = 16 or more instead of the default 8), the eigensolver returns info = 3, which corresponds to the case where the given subspace size is too small. The reason the default number of contour points did not give enough accuracy is that separating clustered eigenvalues is an algorithmically hard problem. Where did this problem come from? Perhaps there is a more suitable way to use Intel MKL to solve it.

The best solution is to avoid placing the edges of the search interval near eigenvalue clusters. If that is not possible, increase the number of contour points for better accuracy.

To ensure that you have indeed found the lowest eigenvalue, you can use PARDISO with mtype=2 and msglvl=1 to check that A − (λmin − eps)B is positive definite (eps is a small number to make the system nonsingular). PARDISO will quickly return error = -4 if the matrix is not positive definite.
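As a concrete sketch of this check (small random matrices rather than the attached ones, with numpy's Cholesky standing in for the PARDISO mtype=2 factorization):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
# Illustrative random matrices, not the attached test case.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # B is SPD

# Reference generalized spectrum via reduction to standard form (B = L L^T).
L = np.linalg.cholesky(B)
lam = np.linalg.eigvalsh(np.linalg.solve(L, np.linalg.solve(L, A.T).T))

def is_positive_definite(M):
    # Cholesky succeeds exactly when M is symmetric positive definite --
    # the same yes/no answer a positive-definite factorization gives.
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

eps = 1e-6
ok = is_positive_definite(A - (lam[0] - eps) * B)            # shift below lambda_min
bad = is_positive_definite(A - 0.5 * (lam[0] + lam[1]) * B)  # shift above lambda_min
print(ok, bad)
```

Shifting just below the smallest generalized eigenvalue leaves the matrix positive definite; shifting anywhere above it makes the factorization fail, which is what flags a missed eigenvalue.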

Best regards,

Irina

Beginner

Thanks Irina

I now also realise that I can solve the symmetric indefinite problem, where the numbers of positive and negative eigenvalues are reported in iparm[21] and iparm[22], so I will implement a check that either the lowest reported eigenvalue is indeed the lowest, or that the number of eigenvalues found is correct.
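That inertia-based check can be prototyped outside PARDISO; a numpy sketch of the idea (random stand-in matrices), relying on Sylvester's law of inertia:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# Illustrative random matrices, not the plate model.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # B is SPD

# Reference generalized spectrum via reduction to standard form (B = L L^T).
L = np.linalg.cholesky(B)
lam = np.linalg.eigvalsh(np.linalg.solve(L, np.linalg.solve(L, A.T).T))

# Sylvester's law of inertia: the number of negative eigenvalues of the
# shifted matrix A - sigma*B equals the number of generalized eigenvalues
# below sigma -- the same counts a symmetric indefinite factorization reports.
sigma = 0.0
neg = np.count_nonzero(np.linalg.eigvalsh(A - sigma * B) < 0)
below = np.count_nonzero(lam < sigma)
print(neg, below)   # the two counts agree
```

Comparing the inertia count at the interval edge against the number of eigenvalues FEAST returned is exactly the consistency test described above.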

The test problem is a linear buckling analysis of a thin metal plate under load, where the eigenvalue is the multiple of the applied load that will cause the plate to fail.  So I am looking for the lowest n eigenvalues, which is not a good fit for MKL currently.

And as B has to be positive definite, I need to solve the inverse problem, which moves all the eigenvalues close to zero.  Now I am searching for the largest eigenvalue, and the plate fails if this is greater than 1.  So I expect all the eigenvalues to be in [0,1) for most problems, and some problems will have a handful of eigenvalues >= 1.
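The clustering is easy to see numerically; a small numpy sketch with made-up load factors (not the actual plate model):

```python
import numpy as np

# Illustrative buckling load factors only -- not the actual plate model.
# The plate fails when the lowest factor is <= 1.
lam = np.array([0.55, 1.2, 3.0, 10.0, 50.0, 200.0, 1000.0])

# The inverted problem hands the solver mu = -1/lambda, so widely spread
# load factors collapse into a tight cluster just below zero.
mu = -1.0 / lam
print(mu)

# The lowest physical load factor is recovered from the most negative mu.
print(-1.0 / mu.min())
```

Load factors spanning several orders of magnitude all land within a narrow band near zero, which is why the search-interval edges so easily fall inside an eigenvalue cluster.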

Moderator

Alan, please check how it works with the latest update of MKL (v11.3 update 3).
