
Error in running VASP 5.3.5 with mkl, ifort and mpif90

Prasanna_Kumar_N_

I installed the VASP executable successfully; the only change I made was FC=mpif90 (OpenMPI built with the Intel compiler), otherwise following the instructions in this link:

https://software.intel.com/en-us/articles/building-vasp-with-intel-mkl-and-intel-compilers?page=1#comment-1842228
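
For reference, the change amounts to roughly the following in the VASP makefile (a sketch; FC and FCL are the variable names used by the VASP 5.3 makefiles, and mpif90 is the OpenMPI wrapper built with the Intel compilers):

# OpenMPI Fortran wrapper (Intel compiler build) instead of Intel MPI's wrapper
FC  = mpif90
FCL = $(FC)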

But when I run it with

mpirun -np 4 /opt/VASP/vasp.5.3/vasp

I get the following error:

WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 LDA part: xc-table for Ceperly-Alder, standard interpolation
 POSCAR, INCAR and KPOINTS ok, starting setup
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms                                                                               rms(c)

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source                                                                           
libmpi.so.1        00002B3133018DE9  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D8B273  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D9FB  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D409  Unknown               Unknown  Unknown
vasp               00000000004D7BCD  Unknown               Unknown  Unknown
vasp               00000000004CA239  Unknown               Unknown  Unknown
vasp               0000000000E23D62  Unknown               Unknown  Unknown
vasp               0000000000E447AD  Unknown               Unknown  Unknown
vasp               0000000000472BC5  Unknown               Unknown  Unknown
vasp               000000000044D25C  Unknown               Unknown  Unknown
libc.so.6          00002B31340C1C36  Unknown               Unknown  Unknown
vasp               000000000044D159  Unknown               Unknown  Unknown
                                                                  

--------------------------------------------------------------------------
mpirun has exited due to process rank 6 with PID 12042 on
node node01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

 

Here are all the libraries the vasp executable links against:

ldd vasp

linux-vdso.so.1 =>  (0x00007fffcd1d5000)
        libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002b7018572000)
        libmkl_cdft_core.so => /opt/intel/mkl/lib/intel64/libmkl_cdft_core.so (0x00002b7018c84000)
        libmkl_scalapack_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002b7018ea0000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002b701968b000)
        libmkl_sequential.so => /opt/intel/mkl/lib/intel64/libmkl_sequential.so (0x00002b70198c8000)
        libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00002b7019f66000)
        libiomp5.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libiomp5.so (0x00002b701b174000)
        libmpi_f90.so.3 => /opt/intel/openmpi-icc/lib/libmpi_f90.so.3 (0x00002b701b477000)
        libmpi_f77.so.1 => /opt/intel/openmpi-icc/lib/libmpi_f77.so.1 (0x00002b701b67b000)
        libmpi.so.1 => /opt/intel/openmpi-icc/lib/libmpi.so.1 (0x00002b701b8b8000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00002b701bd03000)
        libm.so.6 => /lib64/libm.so.6 (0x00002b701bf07000)
        librt.so.1 => /lib64/librt.so.1 (0x00002b701c180000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00002b701c38a000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00002b701c5a2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b701c7a5000)
        libc.so.6 => /lib64/libc.so.6 (0x00002b701c9c3000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b701cd37000)
        libifport.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifport.so.5 (0x00002b701cf4d000)
        libifcore.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcore.so.5 (0x00002b701d17d000)
        libimf.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libimf.so (0x00002b701d4b3000)
        libintlc.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libintlc.so.5 (0x00002b701d96f000)
        libsvml.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libsvml.so (0x00002b701dbbe000)
        libifcoremt.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcoremt.so.5 (0x00002b701e48c000)
        libirng.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libirng.so (0x00002b701e7f1000)
        /lib64/ld-linux-x86-64.so.2 (0x00002b7018351000)

Please take a look into this and help me get it running.

Evarist_F_Intel
Employee

Dear Prasanna Kumar N.,

You mentioned you use OpenMPI. If so, you should link against libmkl_blacs_openmpi_lp64.{a,so}, not against libmkl_blacs_intelmpi_lp64.{a,so}.

E.g. instead of

BLAS= -mkl=cluster

please use something like:

BLAS = -L$(MKLROOT)/lib/intel64 -Wl,-Bstatic \
  -Wl,--start-group \
  -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 -lmkl_cdft_core \
  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core \
  -Wl,--end-group \
  -Wl,-Bdynamic -lm
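
Once relinked this way, you can confirm the correct BLACS layer by checking the binary again. With the static grouping above the MKL/BLACS code is linked statically, so a check like the following (assuming the executable path from your post) should print nothing; if you link MKL dynamically instead, it should show libmkl_blacs_openmpi_lp64.so rather than the intelmpi variant:

# verify which BLACS library, if any, the executable is dynamically linked against
ldd /opt/VASP/vasp.5.3/vasp | grep -i blacs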

 

Ying_H_Intel
Employee

Hi Prasanna

It looks like the main difference is that you are using OpenMPI, whereas the article assumes Intel MPI.

The libmkl_blacs_intelmpi_lp64.so library is for Intel MPI:

libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002b7018572000)
        libmkl_cdft_core.so => /opt/intel/mkl/lib/intel64/libmkl_cdft_core.so (0x00002b7018c84000)
        libmkl_scalapack_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002b7018ea0000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002b701968b000)

With OpenMPI you should use

libmkl_blacs_openmpi_lp64.a 

or 
libmkl_blacs_openmpi_ilp64.a

Could you please try changing these according to the MKL Link Line Advisor?

For example:

-DMKL_ILP64 => implies ILP64 mode, so the ILP64 MKL libraries must be linked.

-mkl=cluster => should be replaced with an explicit MKL link line from the Link Line Advisor, as sketched below.
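
A rough sketch of the two consistent choices (the MKL library names are real; the surrounding makefile line is illustrative and should be adapted to the Link Line Advisor output for your setup):

# LP64 (default 32-bit integers) -- remove -DMKL_ILP64 from CPP:
BLAS = -L$(MKLROOT)/lib/intel64 \
  -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64 \
  -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm

# ILP64 (64-bit integers) -- keep -DMKL_ILP64 and switch every *_lp64 to *_ilp64:
# BLAS = -L$(MKLROOT)/lib/intel64 \
#   -lmkl_scalapack_ilp64 -lmkl_blacs_openmpi_ilp64 \
#   -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core -lpthread -lm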

Best Regards,

Ying 

 
