Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Error in running VASP 5.3.5 with mkl, ifort and mpif90


I installed the VASP executable successfully; the only change I made was FC=mpif90 (OpenMPI built with the Intel compiler), following what you described in the following link.

But I got the following error while running,

mpirun -np 4 /opt/VASP/vasp.5.3/vasp

this gives the error as follows, 

WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 LDA part: xc-table for Ceperly-Alder, standard interpolation
 POSCAR, INCAR and KPOINTS ok, starting setup
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms                                                                               rms(c)

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
                   00002B3133018DE9  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D8B273  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D9FB  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D409  Unknown               Unknown  Unknown
vasp               00000000004D7BCD  Unknown               Unknown  Unknown
vasp               00000000004CA239  Unknown               Unknown  Unknown
vasp               0000000000E23D62  Unknown               Unknown  Unknown
vasp               0000000000E447AD  Unknown               Unknown  Unknown
vasp               0000000000472BC5  Unknown               Unknown  Unknown
vasp               000000000044D25C  Unknown               Unknown  Unknown
                   00002B31340C1C36  Unknown               Unknown  Unknown
vasp               000000000044D159  Unknown               Unknown  Unknown

mpirun has exited due to process rank 6 with PID 12042 on
node node01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).


Here are all the libraries linked into the vasp executable:

ldd vasp
        =>  (0x00007fffcd1d5000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b7018572000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b7018c84000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b7018ea0000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b701968b000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b70198c8000)
        => /opt/intel/mkl/lib/intel64/ (0x00002b7019f66000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701b174000)
        => /opt/intel/openmpi-icc/lib/ (0x00002b701b477000)
        => /opt/intel/openmpi-icc/lib/ (0x00002b701b67b000)
        => /opt/intel/openmpi-icc/lib/ (0x00002b701b8b8000)
        => /lib64/ (0x00002b701bd03000)
        => /lib64/ (0x00002b701bf07000)
        => /lib64/ (0x00002b701c180000)
        => /lib64/ (0x00002b701c38a000)
        => /lib64/ (0x00002b701c5a2000)
        => /lib64/ (0x00002b701c7a5000)
        => /lib64/ (0x00002b701c9c3000)
        => /lib64/ (0x00002b701cd37000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701cf4d000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701d17d000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701d4b3000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701d96f000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701dbbe000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701e48c000)
        => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/ (0x00002b701e7f1000)
        /lib64/ (0x00002b7018351000)

Please take a look at this and help me get it running.


Dear Prasanna Kumar N.,

You mentioned that you use OpenMPI. If so, you should link against libmkl_blacs_openmpi_lp64.{a,so}, not against libmkl_blacs_intelmpi_lp64.{a,so}.

E.g. instead of

BLAS= -mkl=cluster

please use something like:

BLAS = -L$(MKLROOT)/lib/intel64 -Wl,-Bstatic \
  -Wl,--start-group \
  -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_cdft_core \
  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core \
  -Wl,--end-group \
  -Wl,-Bdynamic -lm
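One quick way to confirm which BLACS variant actually ended up in the binary (a sketch; the binary path is taken from the mpirun command above and may differ on your system):

```shell
# With dynamic linking, the wrong BLACS shows up directly in the ldd output:
ldd /opt/VASP/vasp.5.3/vasp | grep -i blacs

# With the static link line above (-Wl,-Bstatic), BLACS no longer appears in
# ldd at all; look for its symbols inside the binary instead:
nm /opt/VASP/vasp.5.3/vasp | grep -i blacs_gridinit
```

If ldd still reports libmkl_blacs_intelmpi_lp64, the old link line is still in effect and the makefile change did not take; rebuild after a full make clean.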



Hi Prasanna

It looks like the main difference is OpenMPI, whereas the article assumed Intel MPI.

These BLACS libraries in your ldd output are the Intel MPI variants:

 => /opt/intel/mkl/lib/intel64/ (0x00002b7018572000)
 => /opt/intel/mkl/lib/intel64/ (0x00002b7018c84000)
 => /opt/intel/mkl/lib/intel64/ (0x00002b7018ea0000)
 => /opt/intel/mkl/lib/intel64/ (0x00002b701968b000)

For OpenMPI you should use libmkl_blacs_openmpi_lp64 instead.

Could you please try changing them according to the MKL Link Line Advisor?

For example:

-DMKL_ILP64 => this flag selects the ILP64 interface, so it must only be used together with the *_ilp64 libraries; with the *_lp64 libraries it has to be dropped.

-mkl=cluster => should be replaced by an explicit MKL link line from the Link Line Advisor, since -mkl=cluster pulls in the Intel MPI BLACS by default.

Best Regards,