I have been using MKL/ARPACK with MKL versions from 8 to 10.1 to solve real eigenvalue problems with no issues. If I replace the 10.1 MKL libraries with 10.2 Update 1, the eigenvector extraction produces (very) incorrect results in some, but not all, cases (the eigenvalues are still correct). If I go back to 10.1, the results are correct again - this is just a matter of swapping DLLs. Nailing down the offending MKL routine by swapping software LAPACK for MKL LAPACK is a very complex process and will take days. Has anyone else seen this?
13 Replies
This looks like a bug in dstevr which was fixed recently and the fix will be available in the next update release.
--Vipin
Quoting - Vipin Kumar E K (Intel)
This looks like a bug in dstevr which was fixed recently and the fix will be available in the next update release.
--Vipin
Dear Vasci_Intel,
Would you mind providing us some information on how to use MKL and ARPACK together? Any reference is greatly appreciated. I am planning to use the Intel PARDISO solver with ARPACK, but I am not quite sure where to start.
Regards
Bulent
Hi Bulent,
Sorry for not replying sooner - I did not subscribe to the thread. Do you still need an example of using ARPACK with PARDISO?
Andrew
Quoting - vasci_intel
Hi Bulent,
Sorry for not replying sooner - I did not subscribe to the thread. Do you still need an example of using ARPACK with PARDISO?
Andrew
Yes, I am still interested :-)
Thx
bulent
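(No example was actually posted at this point in the thread. As a rough illustration of the coupling Bulent asks about, the sketch below drives ARPACK's DSAUPD/DSEUPD in shift-invert mode and uses MKL PARDISO to factor (A - sigma*I) once and solve inside the reverse-communication loop. The 1-D Laplacian test matrix, the sizes, and the parameter choices are illustrative assumptions, not code from this discussion; it must be linked against both ARPACK and MKL.)
! Minimal sketch of ARPACK + PARDISO in shift-invert mode (mode 3) for a
! standard symmetric problem A*x = lambda*x.  PARDISO factors (A - sigma*I)
! once; the reverse-communication loop only calls the solve phase.
program arpack_pardiso_sketch
  implicit none
  integer, parameter :: n = 100, nev = 4, ncv = 20, ldv = n
  integer, parameter :: lworkl = ncv*(ncv + 8)
  ! ARPACK work space
  integer          :: ido, info, ierr, iparam(11), ipntr(11)
  double precision :: tol, sigma
  double precision :: resid(n), v(ldv,ncv), workd(3*n), workl(lworkl), d(nev)
  logical          :: select(ncv)
  ! matrix in CSR format (upper triangle only, as PARDISO expects for symmetric types)
  integer          :: ia(n+1), ja(2*n-1)
  double precision :: a(2*n-1)
  ! PARDISO handle and controls
  integer(8)       :: pt(64)             ! INTEGER*4 on 32-bit platforms
  integer          :: iparm(64), perm(n)
  integer          :: maxfct, mnum, mtype, phase, nrhs, msglvl, error
  double precision :: ddum(1)
  integer          :: i, k

  ! build a 1-D Laplacian: 2 on the diagonal, -1 on the super-diagonal
  k = 1
  do i = 1, n
     ia(i) = k
     a(k) = 2.0d0;  ja(k) = i;  k = k + 1
     if (i < n) then
        a(k) = -1.0d0;  ja(k) = i + 1;  k = k + 1
     end if
  end do
  ia(n+1) = k

  ! factor (A - sigma*I) once; sigma = 0 here, so it is just A
  sigma  = 0.0d0
  pt = 0;  iparm = 0                     ! iparm(1) = 0 -> PARDISO defaults
  maxfct = 1;  mnum = 1;  mtype = -2     ! real symmetric indefinite
  nrhs = 1;  msglvl = 0;  perm = 0
  phase = 12                             ! analysis + numerical factorization
  call pardiso(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, &
               nrhs, iparm, msglvl, ddum, ddum, error)
  if (error /= 0) stop 'PARDISO factorization failed'

  ! ARPACK reverse-communication loop: OP = inv(A - sigma*I), eigenvalues nearest sigma
  ido = 0;  info = 0;  tol = 0.0d0
  iparam(1) = 1        ! exact shifts
  iparam(3) = 300      ! maximum number of Lanczos iterations
  iparam(7) = 3        ! shift-invert mode
  do
     call dsaupd(ido, 'I', n, 'LM', nev, tol, resid, ncv, v, ldv, &
                 iparam, ipntr, workd, workl, lworkl, info)
     if (ido /= -1 .and. ido /= 1) exit
     phase = 33        ! solve (A - sigma*I)*y = x for the vector ARPACK hands back
     call pardiso(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, &
                  nrhs, iparm, msglvl, workd(ipntr(1)), workd(ipntr(2)), error)
     if (error /= 0) stop 'PARDISO solve failed'
  end do
  if (info < 0) stop 'DSAUPD reported an error'

  ! extract eigenvalues and eigenvectors; DSEUPD undoes the spectral transformation
  call dseupd(.true., 'A', select, d, v, ldv, sigma, 'I', n, 'LM', nev, &
              tol, resid, ncv, v, ldv, iparam, ipntr, workd, workl, lworkl, ierr)
  if (ierr /= 0) stop 'DSEUPD reported an error'
  print '(a, 4f12.6)', ' eigenvalues nearest sigma: ', d

  phase = -1           ! release PARDISO memory
  call pardiso(pt, maxfct, mnum, mtype, phase, n, a, ia, ja, perm, &
               nrhs, iparm, msglvl, ddum, ddum, error)
end program arpack_pardiso_sketch
Whether this maps onto your problem (standard vs. generalized, which part of the spectrum, bmat = 'G' with a mass matrix, etc.) changes the mode and the operator applied in the loop; the ARPACK dsdrv* example drivers cover those variants.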
Hello vasci_intel,
Could you please provide a reproducer for the issue, and also describe which platform you use (win32, win32e, etc.)? It would be enough to have a simple program that calls ARPACK with your input data and prints out the incorrect part of the eigenvector.
Thanks in advance.
-- Alexander.
Quoting - Alexander Kobotov (Intel)
Hello vasci_intel,
Could you please provide a reproducer for the issue, and also describe which platform you use (win32, win32e, etc.)? It would be enough to have a simple program that calls ARPACK with your input data and prints out the incorrect part of the eigenvector.
Thanks in advance.
-- Alexander.
It is actually quite a bit of work to create a standalone sample case. Is the potential fix you mentioned in MKL 10.2 Update 1 or a later Update that is still to be released?
PS my platforms are Win32 and Win64
Andrew
Quoting - vasci_intel
Hi Alexander,
It is actually quite a bit of work to create a standalone sample case. Is the potential fix you mentioned in MKL 10.2 Update 1 or a later Update that is still to be released?
PS my platforms are Win32 and Win64
Andrew
OK, then perhaps you could provide some additional information, such as which particular routine from ARPACK was used, or at least which eigenproblem is being solved? The size of the matrix and the other input parameters would also be highly appreciated. I'd like to reproduce the failure to check whether it is really the issue already fixed in the upcoming 10.2 Update 2.
-- Alexander
Quoting - Alexander Kobotov (Intel)
OK, then perhaps you could provide some additional information, such as which particular routine from ARPACK was used, or at least which eigenproblem is being solved? The size of the matrix and the other input parameters would also be highly appreciated. I'd like to reproduce the failure to check whether it is really the issue already fixed in the upcoming 10.2 Update 2.
-- Alexander
I am calling the following ARPACK routines: DSAUPD followed by DSEUPD.
As I may have mentioned, the eigenvalues are correct, but the eigenvectors extracted by DSEUPD are (sometimes) way out. That would suggest the problem is in a LAPACK routine called by DSEUPD.
c  routines called by dseupd
c  dgeqr2  LAPACK routine that computes the QR factorization of a matrix.
c  dlacpy  LAPACK matrix copy routine.
c  dlamch  LAPACK routine that determines machine constants.
c  dorm2r  LAPACK routine that applies an orthogonal matrix in factored form.
c  dsteqr  LAPACK routine that computes eigenvalues and eigenvectors of a tridiagonal matrix.
c  dger    Level 2 BLAS rank one update to a matrix.
c  dcopy   Level 1 BLAS that copies one vector to another.
c  dnrm2   Level 1 BLAS that computes the norm of a vector.
c  dscal   Level 1 BLAS that scales a vector.
c  dswap   Level 1 BLAS that swaps the contents of two vectors.
Andrew
Quoting - vasci_intel
Hi Alexander,
I am calling the following ARPACK routines: DSAUPD followed by DSEUPD.
As I may have mentioned, the eigenvalues are correct, but the eigenvectors extracted by DSEUPD are (sometimes) way out. That would suggest the problem is in a LAPACK routine called by DSEUPD.
c  routines called by dseupd
c  dgeqr2  LAPACK routine that computes the QR factorization of a matrix.
c  dlacpy  LAPACK matrix copy routine.
c  dlamch  LAPACK routine that determines machine constants.
c  dorm2r  LAPACK routine that applies an orthogonal matrix in factored form.
c  dsteqr  LAPACK routine that computes eigenvalues and eigenvectors of a tridiagonal matrix.
c  dger    Level 2 BLAS rank one update to a matrix.
c  dcopy   Level 1 BLAS that copies one vector to another.
c  dnrm2   Level 1 BLAS that computes the norm of a vector.
c  dscal   Level 1 BLAS that scales a vector.
c  dswap   Level 1 BLAS that swaps the contents of two vectors.
Andrew
Dear Andrew,
If you don't mind, I have several questions for you. Do you solve a symmetric or a non-symmetric eigenvalue problem? Could you provide the residual norm || A*x - lambda*x || for the bad and the good cases?
What you describe is usually observed when eigenvalues are poorly separated or when the multiplicity of an eigenvalue is greater than 1. Is it possible at least to look at the values of the computed eigenvalues in your case?
In some cases, finite arithmetic may cause zero eigenvalues to be computed as very small, possibly negative, numbers even when it is known that the matrix is nonnegative. Any change in an auxiliary routine like dscal might change the sign of some of these eigenvalues.
Similar to systems of linear equations, eigenvalues and eigenvectors may be well-conditioned or ill-conditioned (there are plenty of references on the conditioning of eigenvalues and eigenvectors; see, for example, the SIAM book "Templates for the Solution of Algebraic Eigenvalue Problems"). Deviations in eigenvectors or in eigenvalues might happen due to ill-conditioning of the problem. There also exist several so-called spectral transformations that help turn poorly separated eigenvalues into well-separated ones. The residual norm and the measure of orthogonality (for symmetric eigenvalue problems) are the only criteria for whether eigenvectors and eigenvalues are correctly computed.
All the best
Sergey
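(A minimal sketch of the checks Sergey describes, added here for reference; the routine name and the dense-matrix storage are assumptions for brevity and not from the thread. Called with the d and v arrays that DSEUPD returns, plus the assembled matrix, it prints the relative residual for each computed pair and the worst orthogonality error.)
! Sketch: residual norms || A*x - lambda*x || and eigenvector orthogonality,
! the two acceptance criteria mentioned above.  A is dense here only to keep
! the example short; replace the DGEMV call with your own sparse mat-vec.
subroutine check_eigenpairs(n, k, a, lda, lambda, x, ldx)
  implicit none
  integer,          intent(in) :: n, k, lda, ldx
  double precision, intent(in) :: a(lda,n), lambda(k), x(ldx,k)
  double precision             :: r(n), work(1), anorm, resnorm, ortherr, dij
  double precision, external   :: dnrm2, ddot, dlange
  integer                      :: i, j

  anorm = dlange('F', n, n, a, lda, work)   ! Frobenius norm of A for scaling
  do j = 1, k
     call dgemv('N', n, n, 1.0d0, a, lda, x(1,j), 1, 0.0d0, r, 1)  ! r = A*x_j
     call daxpy(n, -lambda(j), x(1,j), 1, r, 1)                    ! r = r - lambda_j*x_j
     resnorm = dnrm2(n, r, 1) / anorm
     print '(a,i3,a,es12.4)', ' pair', j, '  relative residual ', resnorm
  end do

  ! worst deviation of X'*X from the identity
  ortherr = 0.0d0
  do i = 1, k
     do j = 1, k
        dij = ddot(n, x(1,i), 1, x(1,j), 1)
        if (i == j) dij = dij - 1.0d0
        ortherr = max(ortherr, abs(dij))
     end do
  end do
  print '(a,es12.4)', ' max orthogonality error ', ortherr
end subroutine check_eigenpairs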
Quoting - Sergey Kuznetsov (Intel)
Dear Andrew,
If you don't mind, I have several questions for you. Do you solve a symmetric or a non-symmetric eigenvalue problem? Could you provide the residual norm || A*x - lambda*x || for the bad and the good cases?
What you describe is usually observed when eigenvalues are poorly separated or when the multiplicity of an eigenvalue is greater than 1. Is it possible at least to look at the values of the computed eigenvalues in your case?
In some cases, finite arithmetic may cause zero eigenvalues to be computed as very small, possibly negative, numbers even when it is known that the matrix is nonnegative. Any change in an auxiliary routine like dscal might change the sign of some of these eigenvalues.
Similar to systems of linear equations, eigenvalues and eigenvectors may be well-conditioned or ill-conditioned (there are plenty of references on the conditioning of eigenvalues and eigenvectors; see, for example, the SIAM book "Templates for the Solution of Algebraic Eigenvalue Problems"). Deviations in eigenvectors or in eigenvalues might happen due to ill-conditioning of the problem. There also exist several so-called spectral transformations that help turn poorly separated eigenvalues into well-separated ones. The residual norm and the measure of orthogonality (for symmetric eigenvalue problems) are the only criteria for whether eigenvectors and eigenvalues are correctly computed.
All the best
Sergey
The bug introduced in MKL 10.2 seems to have been resolved in MKL 10.2 Update 2.
In response to your questions above: the problems that showed the errors were well conditioned.
Andrew
Unfortunately, this MKL 10.2 regression has not been entirely resolved in MKL 10.2 Update 2. I have tracked the problem down to two LAPACK routines, dgeqr2 and dsteqr. When I replace the MKL versions with the reference LAPACK source code versions, the issue disappears.
So given the same input data, the MKL 10.2 versions of these routines give different output than both MKL 10.1 (and earlier versions) and the reference LAPACK Fortran source code.
I am going to create test cases and submit a Premier Support issue.
Quoting - vasci_intel
Unfortunately, this MKL 10.2 regression has not been entirely resolved in MKL 10.2 Update 2. I have tracked the problem down to two LAPACK routines, dgeqr2 and dsteqr. When I replace the MKL versions with the reference LAPACK source code versions, the issue disappears.
So given the same input data, the MKL 10.2 versions of these routines give different output than both MKL 10.1 (and earlier versions) and the reference LAPACK Fortran source code.
I am going to create test cases and submit a Premier Support issue.
Andrew,
you don't need to do that - you can use the private thread here and we will check the problem on our side.
--Gennady
Quoting - Gennady Fedorov (Intel)
Andrew,
you don't need to do that - you can use the private thread here and we will check the problem on our side.
--Gennady
The issue is not quite as clear-cut as I thought - the differences I was seeing between MKL 10.2 and ARPACK's 'old' LAPACK source were in the values of the WORK array (which is temporary data). Increasing the NCV parameter (the maximum number of Lanczos vectors) made the issue go away as well... excuse me while I bang my head against a wall!
This may be an issue of numerical sensitivity in ARPACK that MKL exposes...
Andrew
