Intel® Fortran Compiler

Problem with RANDOM_NUMBER() and the OpenMP flag in MPI programs

Customer__Intel4
Beginner

I noticed that using the -qopenmp or -fopenmp flag causes segmentation faults when an MPI program contains calls to the random_number intrinsic. This happens only with the Intel compiler. Here is a simple example:

program hello_world
  use mpi_f08
  implicit none

  integer :: ierr, num_procs, my_id
  real(8) :: rn

  ! Standard MPI setup: initialize and query rank/size
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

  ! This call segfaults at run time when compiled with -qopenmp
  call random_number(rn)
  print *, rn

  call MPI_FINALIZE(ierr)

end program hello_world

When I compile the program with mpif90 hello.f90 -qopenmp and then run it with mpirun -np 1 a.out, I get:

mpirun noticed that process rank 0 with PID XXX on node XXX exited on signal 11 (Segmentation fault).
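Spelled out, the two cases (the only difference is the OpenMP flag; the flag-free build runs fine):

# With -qopenmp: rank 0 dies with SIGSEGV
mpif90 hello.f90 -qopenmp && mpirun -np 1 ./a.out

# Without the flag: the same program runs and prints a random number
mpif90 hello.f90 && mpirun -np 1 ./a.out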

 

In other programs the problem appears even at compile/link time!

Martyn_C_Intel
Employee

Hi,

    I tried to reproduce this, but the program ran successfully. I was using the Intel compiler version 17.0.1 and Intel MPI 5.1.3. Could you please provide your full environment details: compiler version, Intel MPI version, OS version, and processor type? If you are not using Intel MPI, which MPI do you use and how was it built?

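For reference, on Linux something like the following collects most of these details (assuming ifort and the MPI launcher are on your PATH):

ifort --version                       # compiler version
mpirun --version                      # MPI version (Intel MPI and Open MPI both report one)
uname -r; cat /etc/*-release          # kernel and OS version
grep -m1 'model name' /proc/cpuinfo   # processor type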
Customer__Intel4
Beginner

I looked up the loaded Intel modules and they are:

intel/16.0.2.181

intel/15.0.2.164

openmpi/intel/1.8.5

After I loaded openmpi/intel17/2.0.1 and intel/17.0.1.132, the problem was solved. The system is Linux 2.6.32.
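With the module names above, the fix amounts to something like this (exact module syntax varies by site; this is a sketch):

module unload openmpi/intel/1.8.5 intel/16.0.2.181 intel/15.0.2.164
module load intel/17.0.1.132 openmpi/intel17/2.0.1
mpif90 hello.f90 -qopenmp
mpirun -np 1 ./a.out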

Martyn_C_Intel
Employee

Glad that worked. 

I'm aware of an issue that has shown up before, both with Open MPI and elsewhere, that might be behind this. There is a symbol conflict between libintlc.so in the Intel 16.0 compiler and libc.so.6 in the newer GCC/glibc versions that have become the default in recent OS releases, due to changes in libc that could not be foreseen when the 16.0 compiler was released. It is resolved in version 17 of the Intel compiler. If you were doing dynamic linking, you could go back to the version built with the older ifort and Open MPI, put libintlc.so from Intel 17.0 into the library path instead of the one from 16.0 (or preload it), and see whether that works around the problem.
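A minimal sketch of that workaround, assuming dynamic linking and a typical Intel 17.0 install location (the path below is hypothetical; adjust it to your system):

# Put the 17.0 runtime ahead of the 16.0 one in the library search path
export LD_LIBRARY_PATH=/opt/intel/compilers_and_libraries_2017/linux/compiler/lib/intel64:$LD_LIBRARY_PATH
mpirun -np 1 ./a.out

# Alternatively, preload just libintlc (libintlc.so.5 is its usual soname)
LD_PRELOAD=/opt/intel/compilers_and_libraries_2017/linux/compiler/lib/intel64/libintlc.so.5 mpirun -np 1 ./a.out

# Check which copy the binary actually resolves to
ldd ./a.out | grep libintlc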
