Intel® Fortran Compiler

Error while running VASP with Intel ifort, MKL, and mpif90 (OpenMPI)

Prasanna_Kumar_N_

I installed the VASP executable successfully, following everything mentioned in the link below; the only change I made was setting FC=mpif90 (OpenMPI compiled with the Intel compiler):

https://software.intel.com/en-us/articles/building-vasp-with-intel-mkl-and-intel-compilers?page=1#comment-1842228
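
For reference, the compiler line in the VASP makefile now reads roughly as follows (only FC was changed from the article; the comment is just illustrative):

FC = mpif90    # OpenMPI wrapper built with the Intel Fortran compiler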

But when I run it with

mpirun -np 4 /opt/VASP/vasp.5.3/vasp

it produces the following output and then crashes:

WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 LDA part: xc-table for Ceperly-Alder, standard interpolation
 POSCAR, INCAR and KPOINTS ok, starting setup
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms                                                                               rms(c)

 

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source                                                                           
libmpi.so.1        00002B3133018DE9  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D8B273  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D9FB  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D409  Unknown               Unknown  Unknown
vasp               00000000004D7BCD  Unknown               Unknown  Unknown
vasp               00000000004CA239  Unknown               Unknown  Unknown
vasp               0000000000E23D62  Unknown               Unknown  Unknown
vasp               0000000000E447AD  Unknown               Unknown  Unknown
vasp               0000000000472BC5  Unknown               Unknown  Unknown
vasp               000000000044D25C  Unknown               Unknown  Unknown
libc.so.6          00002B31340C1C36  Unknown               Unknown  Unknown
vasp               000000000044D159  Unknown               Unknown  Unknown
                                              

--------------------------------------------------------------------------
mpirun has exited due to process rank 6 with PID 12042 on
node node01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

 

Here are all the libraries linked against the vasp executable:

ldd vasp

linux-vdso.so.1 =>  (0x00007fffcd1d5000)
        libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002b7018572000)
        libmkl_cdft_core.so => /opt/intel/mkl/lib/intel64/libmkl_cdft_core.so (0x00002b7018c84000)
        libmkl_scalapack_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002b7018ea0000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002b701968b000)
        libmkl_sequential.so => /opt/intel/mkl/lib/intel64/libmkl_sequential.so (0x00002b70198c8000)
        libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00002b7019f66000)
        libiomp5.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libiomp5.so (0x00002b701b174000)
        libmpi_f90.so.3 => /opt/intel/openmpi-icc/lib/libmpi_f90.so.3 (0x00002b701b477000)
        libmpi_f77.so.1 => /opt/intel/openmpi-icc/lib/libmpi_f77.so.1 (0x00002b701b67b000)
        libmpi.so.1 => /opt/intel/openmpi-icc/lib/libmpi.so.1 (0x00002b701b8b8000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00002b701bd03000)
        libm.so.6 => /lib64/libm.so.6 (0x00002b701bf07000)
        librt.so.1 => /lib64/librt.so.1 (0x00002b701c180000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00002b701c38a000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00002b701c5a2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b701c7a5000)
        libc.so.6 => /lib64/libc.so.6 (0x00002b701c9c3000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b701cd37000)
        libifport.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifport.so.5 (0x00002b701cf4d000)
        libifcore.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcore.so.5 (0x00002b701d17d000)
        libimf.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libimf.so (0x00002b701d4b3000)
        libintlc.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libintlc.so.5 (0x00002b701d96f000)
        libsvml.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libsvml.so (0x00002b701dbbe000)
        libifcoremt.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcoremt.so.5 (0x00002b701e48c000)
        libirng.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libirng.so (0x00002b701e7f1000)
        /lib64/ld-linux-x86-64.so.2 (0x00002b7018351000)

Please take a look and help me get this running.

TimP
Honored Contributor III

You would need to build your MPI and application with -traceback or -debug inline-debug-info to get a more useful traceback. The advice about segfaults in https://software.intel.com/en-us/articles/determining-root-cause-of-sigsegv-or-sigbus-errors is useful, but may not be enough if you have to deal with the combination of gdb-ia and MPI. Any hints on the OpenMPI site about using gdb ought to work with gdb-ia as well.
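
As a rough sketch, in whichever VASP makefile you edited, appending the option to the existing Fortran flags would look something like this (the angle-bracket text is a placeholder for your current flags, not a literal value):

FFLAGS = <your existing flags> -traceback
# or, to cover inlined routines as well:
FFLAGS = <your existing flags> -debug inline-debug-info

Rebuild the application after changing the flags so the symbol information is actually present in the executable.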

If you haven't ruled out problems with your stack limit setting (and, if you are using OpenMP, with OMP_STACKSIZE), that would be a good place to start.
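
For example, something along these lines in the shell before launching (the stack size shown is a placeholder, not a recommendation):

ulimit -s unlimited            # lift the shell stack limit for this session
export OMP_STACKSIZE=512M      # per-thread stack size; only relevant if OpenMP is enabled
mpirun -np 4 /opt/VASP/vasp.5.3/vasp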
