Intel® Fortran Compiler

Error in an OMP program when linked with libifcore_pic.a and/or PETSC


Hi all,

I want to use PETSc and, in other regions of the same code, OpenMP, and I have run into a problem that I think is not caused by PETSc itself.

1. I have compiled PETSc with these options (so that it uses the same OpenMP runtime as the Intel compiler):
./configure --with-cc=mpiicc --with-cxx=mpiicpc --with-fc=mpiifort --with-blas-lapack-dir=/opt/intel/mkl/lib/intel64/ --with-debugging=1 --with-scalar-type=complex --with-threadcomm --with-pthreadclasses --with-openmp --with-openmp-include=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin --with-openmp-lib=/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin/libiomp5.a PETSC_ARCH=linux-intel-dbg PETSC-AVOID-MPIF-H=1
2. The program to be executed is composed of two files, one is hellocount.F90:
MODULE hello_count
  use omp_lib
  implicit none
CONTAINS
  subroutine hello_print ()
     integer :: nthreads,mythread
   !pragma hello-who-omp-f
   !$omp parallel private(nthreads,mythread)
     nthreads = omp_get_num_threads()
     mythread = omp_get_thread_num()
     write(*,'("Hello from",i3," out of",i3)') mythread,nthreads
   !$omp end parallel
   !pragma end
   end subroutine hello_print
END MODULE hello_count
and the other one is hellocount_main.F90:
Program Hello
   USE hello_count
   call hello_print
end Program Hello
3. To compile these two files I use:
rm -rf _obj
mkdir _obj
ifort -E -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount.F90 >_obj/hellocount.f90
ifort -E -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -c hellocount_main.F90 >_obj/hellocount_main.f90
mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include   -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/ -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -o _obj/hellocount.o -c _obj/hellocount.f90
mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module _obj -I./_obj -I/home/aamor/MUMPS_5.1.2/include   -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include -I/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/include/intel64/lp64/ -I/home/aamor/petsc/include -I/home/aamor/petsc/linux-intel-dbg/include -o _obj/hellocount_main.o -c _obj/hellocount_main.f90
mpiifort -CB -g -warn all -O0 -shared-intel -check:none -qopenmp -module _obj -I./_obj -o exec/HELLO _obj/hellocount.o _obj/hellocount_main.o /home/aamor/lib_tmp/libarpack_LinuxIntel15.a /home/aamor/MUMPS_5.1.2/lib/libzmumps.a /home/aamor/MUMPS_5.1.2/lib/libmumps_common.a /home/aamor/MUMPS_5.1.2/lib/libpord.a /home/aamor/parmetis-4.0.3/lib/libparmetis.a /home/aamor/parmetis-4.0.3/lib/libmetis.a -L/opt/intel/compilers_and_libraries_2016.1.150/linux/mkl/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64 -lpetsc -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_lapack95_lp64 -liomp5 -lpthread -lm -L/home/aamor/lib_tmp -lgidpost -lz /home/aamor/lua-5.3.3/src/liblua.a /home/aamor/ESEAS-master/libeseas.a -Wl,-rpath,/home/aamor/petsc/linux-intel-dbg/lib -L/home/aamor/petsc/linux-intel-dbg/lib -Wl,-rpath,/opt/intel/mkl/lib/intel64 -L/opt/intel/mkl/lib/intel64 -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lX11 -lssl -lcrypto -lifport -lifcore_pic -lmpicxx -ldl -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -lmpifort -lmpi -lmpigi -lrt -lpthread -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 
-Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -Wl,-rpath,/opt/intel/impi/ -Wl,-rpath,/opt/intel/impi/ -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/mpi-rt/5.1/intel64/lib -limf -lsvml -lirng -lm -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/impi/ -L/opt/intel/impi/ -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -Wl,-rpath,/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2016.1.150/linux/compiler/lib/intel64_lin -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -L/opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64 -ldl
4. Then I have seen that:
4.1. If I set OMP_NUM_THREADS=2 and I remove -lpetsc and -lifcore_pic from the last step, I get:
Hello from  0 out of  2
Hello from  1 out of  2
4.2. But if I add -lifcore_pic and/or -lpetsc (because I want to use PETSc), I get this error:
Hello from  0 out of  2
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
Image              PC                Routine            Line        Source
HELLO              000000000041665C  Unknown               Unknown  Unknown
HELLO              00000000004083C8  Unknown               Unknown  Unknown
Unknown            00007F9C603566A3  Unknown               Unknown  Unknown
Unknown            00007F9C60325007  Unknown               Unknown  Unknown
Unknown            00007F9C603246F5  Unknown               Unknown  Unknown
Unknown            00007F9C603569C3  Unknown               Unknown  Unknown
Unknown            0000003CE76079D1  Unknown               Unknown  Unknown
Unknown            0000003CE6AE88FD  Unknown               Unknown  Unknown
If I set OMP_NUM_THREADS to 8, I get:
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
forrtl: severe (40): recursive I/O operation, unit -1, file unknown
I am sorry if this is a trivial problem, because I guess that lots of people use PETSc with OpenMP in Fortran, but I have really done my best to figure out where the error is. Can you help me?
Thanks a lot!
3 Replies
Black Belt Retired Employee

You have a WRITE in a parallel region, which means that it can be entered by more than one thread at a time. Fortran does not allow what it calls "recursive I/O" (except for internal files): once you start an I/O operation on a unit, no other I/O operation on that unit may begin until the first completes.

Now, I do see that you are using a two-year-old version of Intel Fortran and I know that in the 2017 version a change was made so that the Fortran run-time library was always "reentrant". I am not sure if that would help you here, but it might.

Either remove the WRITE from the parallel region or protect it with a critical section.
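A minimal sketch of the second option, based on the hellocount.F90 module from the question: the !$omp critical construct serializes the WRITE so the run-time library never sees two simultaneous I/O operations on the same unit.

```fortran
MODULE hello_count
  use omp_lib
  implicit none
CONTAINS
  subroutine hello_print ()
    integer :: nthreads, mythread
    !$omp parallel private(nthreads, mythread)
      nthreads = omp_get_num_threads()
      mythread = omp_get_thread_num()
      !$omp critical
      ! Only one thread at a time executes this WRITE, so no
      ! "recursive I/O" on the default output unit can occur.
      write(*,'("Hello from",i3," out of",i3)') mythread, nthreads
      !$omp end critical
    !$omp end parallel
  end subroutine hello_print
END MODULE hello_count
```

The order of the "Hello from ..." lines is still unpredictable (threads reach the critical section in any order), but each line is now printed atomically and the forrtl severe (40) error cannot be triggered by this WRITE.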


Thanks, Steve, it's a great honor to get an answer from you, since your answers have helped me a lot over the last years. It's true that with a CRITICAL section I don't get the error anymore, but I was confused because I only got the error when linking with PETSc, and not when I left it out.

Thank you for your dedication!


Black Belt Retired Employee

In a multithreaded program, the sequence of events is unpredictable.
