Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Running executable compiled with MPI without using mpirun

Dierker__Steve
Beginner

Hello,

I'm having trouble running the abinit (8.10.2) executable (an electronic structure program; www.abinit.org) after compiling it with the Intel 19 Update 3 compilers with MPI enabled (64-bit Intel Linux).

If I compile with either the GNU tools or the Intel tools (icc, ifort) with MPI disabled, I can run the abinit executable directly with no errors.

If I compile with the GNU tools and MPI enabled (using Open MPI), I can still run the abinit executable directly (without using mpirun) without errors.

If I compile with the Intel tools (mpiicc, mpiifort) and MPI enabled (using Intel MPI) and then try to run the abinit executable directly (without mpirun), it fails with the following error while reading the input file (t01.input):

abinit < t01.input > OUT-traceback
forrtl: severe (24): end-of-file during read, unit 5, file /proc/26824/fd/0
Image              PC                Routine            Line     Source
libifcoremt.so.5   00007F0847FAC7B6  for__io_return     Unknown  Unknown
libifcoremt.so.5   00007F0847FEAC00  for_read_seq_fmt   Unknown  Unknown
abinit             000000000187BC1F  m_dtfil_mp_iofn1_  1363     m_dtfil.F90
abinit             0000000000407C49  MAIN__             251      abinit.F90
abinit             0000000000407942  Unknown            Unknown  Unknown
libc-2.27.so       00007F08459E4B97  __libc_start_main  Unknown  Unknown
abinit             000000000040782A  Unknown            Unknown  Unknown


If I compile with the Intel tools and MPI enabled and run the abinit executable with "mpirun -np 1 abinit < t01.input > OUT-traceback" then reading the input file succeeds.

To summarize: running the MPI-enabled executable without mpirun succeeds when it is compiled with the GNU tools, but not when it is compiled with the Intel tools.

A colleague of mine compiled abinit with MPI enabled using the Intel 17 compiler and IS able to run the abinit executable without mpirun.

I am using Intel Parallel Studio XE Cluster Edition Update 3, and I source psxevars.sh to set the environment before compiling/running with the Intel tools. The output of mpiifort -V is:

mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.3.199 Build 20190206
Copyright (C) 1985-2019 Intel Corporation.  All rights reserved.

Any ideas on what is causing this forrtl crash?

Thanks for any suggestions.

7 Replies
Jakub_Benda
Beginner

I came across this change in behaviour in Intel MPI 2019 too, though in a different context. It seems that if you want to access the standard input after the call to MPI_Init, you need to launch the application with mpiexec/mpirun.

For example, "echo 1 | ./test_out" will work with this code:

program test_out

    use mpi
    implicit none
    integer :: n, ierr

    ! stdin is read before MPI_Init, so no launcher is needed
    read (5, *) n
    write (6, *) n

    call MPI_Init(ierr)
    call MPI_Finalize(ierr)

end program test_out

but "echo 1 | ./test_in" will not work with this code:

program test_in

    use mpi
    implicit none
    integer :: n, ierr

    call MPI_Init(ierr)

    ! stdin is read after MPI_Init, which fails without mpiexec/mpirun
    read (5, *) n
    write (6, *) n

    call MPI_Finalize(ierr)

end program test_in

unless you execute it as "echo 1 | mpiexec -n 1 ./test_in".
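Based on the behaviour above, a possible workaround sketch (my own suggestion, not an Intel-recommended fix) is to consume the standard input before calling MPI_Init, while unit 5 still sees the shell's pipe:

```fortran
! Workaround sketch: move all stdin reads ahead of MPI_Init.
! Assumes the input can be fully read before MPI starts; for
! multi-rank runs the value would still need to be broadcast.
program test_workaround

    use mpi
    implicit none
    integer :: n, ierr

    ! Read from stdin before initializing MPI, so the run does not
    ! depend on the launcher forwarding standard input.
    read (5, *) n

    call MPI_Init(ierr)
    write (6, *) n
    call MPI_Finalize(ierr)

end program test_workaround
```

With this ordering, "echo 1 | ./test_workaround" should behave like the test_out case above and work without mpiexec. Of course, this is only practical when the reads can be hoisted out of the MPI region, which may not be easy in a large code like abinit.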

Anatoliy_R_Intel
Employee

Hello,

Thank you for finding this issue. It will be fixed in one of Intel MPI 2020 updates.

--

Best regards, Anatoliy.

Anatoliy_R_Intel
Employee

Hi Steve, Jakub,

Could you check 2019 Update 5? The issue should be fixed there.

--

Best regards, Anatoliy.

ARaoo
Beginner

I also ran into a similar issue, and I do not think it is resolved in Update 5 (2019.5.281). I had to fall back to an earlier version for this purpose (not performance measurement)!

Regards,

Amir

Dierker__Steve
Beginner

Hi Anatoliy,

Thanks for following up on this.

Unfortunately, 2019 Update 5 does not fix this problem. The behavior is the same as before, both for the abinit application and for the simple test code that Jakub posted, which succinctly illustrates the issue.

To repeat, with test_in.f given by:

    program test_in
    call MPI_Init(ierr)
    read (5, *) n
    write (6, *) n
    call MPI_Finalize(ierr)
    end program test_in

Running "echo 1 | ./test_in" results in:

forrtl: severe (24): end-of-file during read, unit 5, file /proc/15219/fd/0
Image              PC                Routine            Line        Source             
test_in            000000000042A8EB  Unknown               Unknown  Unknown
test_in            00000000004095E8  Unknown               Unknown  Unknown
test_in            0000000000402CD6  MAIN__                      3  test_in.f
test_in            0000000000402C42  Unknown               Unknown  Unknown
libc-2.27.so       00001469FDC92B97  __libc_start_main     Unknown  Unknown
test_in            0000000000402B2A  Unknown               Unknown  Unknown

I've tried this code with 2018 Update 4 and it worked fine. It stopped working with 2019 and is still not working with all updates since then, including Update 5.

steve

Anatoliy_R_Intel
Employee

Hi,

I'm sorry, that was my mistake. The fix for this issue was not included in 2019 Update 5. It will be available in 2019 Update 6.

--

Best regards, Anatoliy.

Dierker__Steve
Beginner

Hi,

I see that 2019 Update 6 (for the MPI library, not the entire PSXE suite) was released on Nov 6. I tested it, and it does indeed fix the problem I reported above.

Thank you to Intel for being so responsive in addressing this issue.

Best Regards,

steve
