Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

proper way to call MKL routines

nooj
Beginner
397 Views
I have ifort version 11.1.088 on Mac. I have a code that is predominantly F77, with very few F90/95 bits:
reals are "real*8", ints are "integer*4";
I use modules, intent(), allocatable arrays, and absolutely no pointers.
What is the proper way to call the higher precision eigenvalue solver for symmetric matrices (DSYEVR)?
Right now I'm calling DSYEVR directly (because I used ifort's flag "-r8"), and it's giving me problems. I suspect there is a better way. I'm looking for something fast for 3x3 matrices, and with as little fancy memory management as possible. (For instance, I don't know the difference between real*8 and real(8).)
I see the example program SYEVR_MAIN in syevr.f90 (which never uses the variable ISUPPZ). But I don't see the documentation for the exact call to SYEVR.
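To be concrete, the kind of direct call I mean looks roughly like this sketch (the matrix values are made up, and I do the standard lwork = -1 workspace query first):

```fortran
program syevr_3x3
  implicit none
  integer, parameter :: n = 3, lda = n, ldz = n
  real*8  :: a(lda,n), w(n), z(ldz,n), vl, vu, abstol, wq(1)
  integer :: isuppz(2*n), il, iu, m, info, lwork, liwork, iwq(1)
  real*8,  allocatable :: work(:)
  integer, allocatable :: iwork(:)

  ! Example symmetric positive definite matrix (upper triangle used).
  a = reshape((/ 4d0, 1d0, 0d0, &
                 1d0, 3d0, 1d0, &
                 0d0, 1d0, 2d0 /), (/3,3/))
  vl = 0d0; vu = 0d0; il = 1; iu = n   ! unreferenced when range = 'A'
  abstol = -1d0                        ! let LAPACK pick a safe tolerance

  ! Workspace query: lwork = liwork = -1 returns the optimal sizes.
  call dsyevr('V', 'A', 'U', n, a, lda, vl, vu, il, iu, abstol, &
              m, w, z, ldz, isuppz, wq, -1, iwq, -1, info)
  lwork = int(wq(1)); liwork = iwq(1)
  allocate(work(lwork), iwork(liwork))

  ! Actual solve: eigenvalues ascending in w, eigenvectors in columns of z.
  call dsyevr('V', 'A', 'U', n, a, lda, vl, vu, il, iu, abstol, &
              m, w, z, ldz, isuppz, work, lwork, iwork, liwork, info)
  if (info /= 0) stop 'dsyevr failed'
  print *, 'eigenvalues:', w(1:m)
end program
```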
Help?
- Nooj
6 Replies
TimP
Honored Contributor III
real*8 and real(8) are the same as double precision with Intel Fortran, and probably with any other Fortran you will find for your Mac. Neither is portable. ifort -r8 changes all default reals to real(8), but does not affect variables declared with a specific kind like real(4) or real(8). I don't think your statement about "calling DSYEVR directly" makes sense; perhaps a working example would help.
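For example, on ifort (and gfortran) the first three declarations below all give the same 8-byte double; only the last is affected by -r8:

```fortran
program kinds
  implicit none
  real*8           :: a   ! non-standard extension, widely supported
  real(8)          :: b   ! kind value 8 is compiler-specific, not portable
  double precision :: c   ! standard Fortran
  real             :: d   ! default real: 4 bytes, promoted to 8 under -r8
  ! Without -r8, prints "8 8 8 4" on ifort and gfortran.
  print *, kind(a), kind(b), kind(c), kind(d)
end program
```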
MKL won't be particularly efficient for such small matrices, but it would hardly matter unless you have many of them; certainly it's not evident how "fancy memory management" could enter in.
nooj
Beginner
Tim -

Thanks for the info on variable declaration.

> MKL won't be particularly efficient for such small matrices,
> but it would hardly matter unless you have many of them
Actually, I will have many of them; tens to hundreds of millions per run, once the code is running full steam. I'm using MKL because it's the only robust eigensolver I know of. I need both eigenvalues and eigenvectors, and I can't assume the eigenvalues are all distinct. Do you have a better suggestion? Every time I look for robust solvers, I end up with solvers for large systems; mine will always be 3x3 symmetric, positive definite matrices.
> I don't think your statement about "calling DSYEVR directly" makes sense
What I meant is there are at least eight ways to call ?syevr between v10.0 and 11.1 (F77_SYEVR, DSYEVR, DSYEVR_MKL95, etc.), and I wasn't sure whether to use the F95 interface, or call the underlying F77 routine.
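For instance, the F95 interface version of the same call would look roughly like this sketch. It assumes MKL's lapack95 module and library (-lmkl_lapack95_lp64) are available; the generic SYEVR picks the precision from its arguments and handles the workspace internally:

```fortran
program f95_syevr
  use lapack95, only: syevr
  implicit none
  real(8) :: a(3,3), w(3), z(3,3)
  integer :: info

  ! Example symmetric positive definite matrix (upper triangle used).
  a = reshape((/ 4d0, 1d0, 0d0, &
                 1d0, 3d0, 1d0, &
                 0d0, 1d0, 2d0 /), (/3,3/))

  ! Eigenvalues returned ascending in w, eigenvectors in columns of z.
  call syevr(a, w, uplo='U', z=z, info=info)
  if (info /= 0) stop 'syevr failed'
  print *, 'eigenvalues:', w
end program
```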
> perhaps a working example would help.
I tried to give an example at http://software.intel.com/en-us/forums/showthread.php?t=74152, but no one had any insight.
> certainly it's not evident how "fancy memory management" could enter in.
Every time I try to pull the offending code into a small test, the test never crashes. My main code crashes in random locations, so I thought perhaps I was misusing memory somewhere. I used valgrind, and it gave me thousands of false positives (use of uninitialized variable) and an error I didn't understand right before the crash. (I can post the error in the morning.)
- Nooj
mecej4
Honored Contributor III
The issue here is a rather simple misunderstanding of how multidimensional arrays are declared and how array sections are passed to Fortran-77 subroutines.

Please see my reply to your earlier thread.

Furthermore, if the matrices whose eigenvalues you want are never going to be anything other than 3 X 3, the characteristic equation

det(A - \lambda I) = 0

is cubic and you can solve it directly using one of the many available cubic equation solvers.
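A sketch of the standard trigonometric solution of that cubic for a symmetric 3x3 matrix (the routine name eig3_sym is mine):

```fortran
! Closed-form eigenvalues of a full symmetric 3x3 matrix via the
! trigonometric solution of the characteristic cubic.
subroutine eig3_sym(a, w)
  implicit none
  real*8, intent(in)  :: a(3,3)   ! symmetric (all entries read)
  real*8, intent(out) :: w(3)     ! eigenvalues, ascending
  real*8 :: p1, q, p2, p, r, phi, b(3,3)
  real*8, parameter :: pi = 3.14159265358979324d0
  integer :: i

  p1 = a(1,2)**2 + a(1,3)**2 + a(2,3)**2
  if (p1 == 0d0) then             ! matrix is already diagonal
     w = (/ a(1,1), a(2,2), a(3,3) /)
  else
     q  = (a(1,1) + a(2,2) + a(3,3)) / 3d0
     p2 = (a(1,1)-q)**2 + (a(2,2)-q)**2 + (a(3,3)-q)**2 + 2d0*p1
     p  = sqrt(p2 / 6d0)
     b  = a
     do i = 1, 3
        b(i,i) = b(i,i) - q      ! B = (A - q*I) / p
     end do
     b = b / p
     ! r = det(B)/2, clamped to [-1,1] against roundoff
     r = 0.5d0 * ( b(1,1)*(b(2,2)*b(3,3) - b(2,3)*b(3,2)) &
                 - b(1,2)*(b(2,1)*b(3,3) - b(2,3)*b(3,1)) &
                 + b(1,3)*(b(2,1)*b(3,2) - b(2,2)*b(3,1)) )
     r = max(-1d0, min(1d0, r))
     phi  = acos(r) / 3d0
     w(3) = q + 2d0*p*cos(phi)                ! largest
     w(1) = q + 2d0*p*cos(phi + 2d0*pi/3d0)   ! smallest
     w(2) = 3d0*q - w(1) - w(3)               ! trace gives the middle one
  end if
end subroutine
```

Eigenvectors then follow from cross products of the rows of A - \lambda I, though for nearly-equal eigenvalues that step needs care.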
nooj
Beginner
> if the matrices whose eigenvalues you want
> are never going to be anything other than 3 X 3,
> the characteristic equation is cubic
> and you can solve it directly
> using one of the many available cubic equation solvers.
True, direct analytic calculation seems to be the fastest method. I also want the eigenvectors, which the intertubes agree can be computed fastest by coding up an analytic solution.
That said, the MKL is fast, even on small matrices. I ran a few tests of MKL, computing 3x3 eigenvalues millions of times, and it was pretty consistent across matrices with different condition numbers: about 100K eigenproblems per second on a single 3.02GHz processor. This is much faster than my other bottlenecks and the serial portions of my code.
-f
Gennady_F_Intel
Moderator
It would be interesting if you compiled your code with the Intel compiler with some optimization options enabled and compared the results for such small sizes. What are the results?
--Gennady
nooj
Beginner
Gennady -

For the time I quoted, I used these options:
-O3
-unroll
-ftrapuv
-traceback
-automatic
-static-intel
and these libraries:
-lmkl_solver_lp64
-lmkl_intel_lp64
-lmkl_lapack95_lp64
-lmkl_sequential
-lmkl_core
I think only -O3 mattered here. (The loop was not unrollable.) I don't know of any chip-specific flags that apply to 64-bit Mac.
- Nooj