Intel® Fortran Compiler

Coarray runtime bug

NCarlson
New Contributor I

I'm hitting a segfault with some coarray code when running with 1 image.  I've reduced it to this test case:

integer :: x(10), y(10)
call foo(x, y)
contains
  subroutine foo(src, dest)
    integer, intent(in), target :: src(:)
    integer, intent(inout) :: dest(:)
    type box
      integer, pointer :: data(:)
    end type
    type(box), allocatable :: buffer[:]
    allocate(buffer[*])
    if (this_image() == 1) buffer%data => src
    sync all
    dest = buffer[1]%data
  end subroutine
end

I'm using ifort 2021.5.0 with these compiler options: -coarray -coarray-num-images=1

I'm also setting

export I_MPI_DEVICE=shm
export I_MPI_FABRICS=shm

per an Intel article, in order to keep coarray executables from pounding on the loopback network device and getting really poor performance (I'm running on a single shared-memory node).

With this setup the executable segfaults; if I don't set those I_MPI variables, the test case executes without error.
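
For reference, the full sequence that shows the problem looks like this (repro.f90 is just a placeholder name for the test case above):

ifort -coarray -coarray-num-images=1 repro.f90
export I_MPI_DEVICE=shm
export I_MPI_FABRICS=shm
./a.out    # segfaults
unset I_MPI_DEVICE I_MPI_FABRICS
./a.out    # executes without error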

2 Replies
Barbara_P_Intel
Employee

I duplicated your issue. I'll keep looking at it.

There are two workarounds that I see. 

  1. Compile with -coarray=shared.
  2. Create a config file; there is an Intel article with some details. It's written for distributed-memory environments, but the concepts are the same for shared memory. A rough sketch follows below.
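
As a rough sketch of the config-file approach (the names cafconfig.txt and repro.f90 are placeholders; see the -coarray-config-file documentation for the exact usage in your environment):

ifort -coarray -coarray-config-file=cafconfig.txt repro.f90

where cafconfig.txt contains a single line of mpiexec-style options, for example:

-genvall -n 1 ./a.out
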
Barbara_P_Intel
Employee
Accepted Solution

I learned that there is a known issue with Intel MPI that applies to your case, along with a workaround.

  • A single-process run with I_MPI_FABRICS=shm may lead to a crash. Please use the out-of-box settings or I_MPI_FABRICS=shm:ofi instead.

It worked for me with your reproducer.
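
Applied to the settings from the original post, that means something like:

unset I_MPI_DEVICE I_MPI_FABRICS    # out-of-box settings
./a.out

or, to keep shared memory with the OFI fallback:

export I_MPI_FABRICS=shm:ofi
./a.out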

This will be documented in the MPI Library Release Notes in the next release.


