Intel® Fortran Compiler

Using custom MPI_REDUCE functions (MPI_OP_CREATE) fails.

Reuter__Leonard
Beginner

I am using IFORT 18.0.3 on CentOS 7.4.
The code example below (taken from https://github.com/open-mpi/ompi/issues/3409#issuecomment-296904409) compiles and runs correctly with gfortran 7/Open MPI and PGI/Open MPI, independent of the Open MPI version.

With ifort/Open MPI, however, it crashes with a segmentation fault when run on two or more cores (a single-core run works). With ifort/Intel MPI the code does not even compile:

mpi_example_5_21.f90(31): error #7061: The characteristics of dummy argument 1 of the associated actual procedure differ from the characteristics of dummy argument 1 of the dummy procedure.   [MY_USER_FUNCTION]
   call MPI_Op_create(user_fn=my_user_function, commute=.true., op=myOp)
------------------------------^
mpi_example_5_21.f90(31): error #7062: The characteristics of dummy argument 2 of the associated actual procedure differ from the characteristics of dummy argument 2 of the dummy procedure.   [MY_USER_FUNCTION]
   call MPI_Op_create(user_fn=my_user_function, commute=.true., op=myOp)
------------------------------^

Thanks a lot for any help!

Code:

module foo
contains
subroutine my_user_function( invec, inoutvec, len, type )
  use, intrinsic :: iso_c_binding, only : c_ptr, c_f_pointer
  use mpi_f08
  type(c_ptr), value :: invec, inoutvec
  integer :: len 
  type(MPI_Datatype) :: type
  real, pointer :: invec_r(:), inoutvec_r(:)
  if (type%MPI_VAL == MPI_REAL%MPI_VAL) then
     call c_f_pointer(invec, invec_r, (/ len /) )
     call c_f_pointer(inoutvec, inoutvec_r, (/ len /) )
     inoutvec_r = invec_r + inoutvec_r
  end if
end subroutine
end module

program mpi_example_5_21
   use mpi_f08
   use foo, only: my_user_function
   implicit none

   type(MPI_Op) :: myOp
   integer :: rank, nproc
   real :: R(100), S(100) = 1.0 

   call MPI_Init
   call MPI_Comm_rank(comm=MPI_COMM_WORLD, rank=rank)
   call MPI_Comm_size(comm=MPI_COMM_WORLD, size=nproc)

   call MPI_Op_create(user_fn=my_user_function, commute=.true., op=myOp)
   call MPI_Reduce(sendbuf=S, recvbuf=R, count=size(S), datatype=MPI_REAL, op=myOp, root=0, comm=MPI_COMM_WORLD)

   call MPI_Finalize

   if (rank == 0) write (*,*) merge('PASS', 'FAIL', all(R == S(1)*nproc))
end program mpi_example_5_21

Reuter__Leonard
Beginner

In case this is the wrong place to ask the above question, could you point me to a better one?

Thank you very much!

Steve_Lionel
Honored Contributor III

My install of the latest Intel MPI doesn't include an mpi_f08 module (puzzling), and it doesn't include sources for mpi.mod either. You would need to compare the declaration of the procedure arguments of MPI_Op_create in your version of that module (assuming you have the source) with your actual routine.

Looking at the mpi-forum.org documentation for MPI_Op_create, I see that the INVEC and INOUTVEC arguments are described as arrays of some type, whereas in your code they are TYPE(C_PTR). Maybe mpi_f08 defines them that way, I don't know.

I do note that you give those arguments the VALUE attribute but don't give the procedure the BIND(C) specification. In this case, VALUE doesn't do what you evidently think it does: it says that an anonymous copy of the argument is passed by reference. I could well believe that this error leads to issues with the other MPI implementations.
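To make the BIND(C) point concrete: in standard Fortran, VALUE without BIND(C) passes an anonymous copy by reference, whereas with BIND(C) a TYPE(C_PTR), VALUE dummy is passed like a C void* by value, which is what the C-side MPI_User_function callback expects. A hypothetical variant of the original routine along those lines is sketched below; whether it satisfies Intel MPI's interface check depends on how its mpi_f08 module actually declares the user_fn dummy procedure, which I don't have the source for.

```fortran
module foo_bindc
contains
  ! Hypothetical variant: BIND(C) makes the TYPE(C_PTR), VALUE
  ! arguments genuine pass-by-value C pointers, matching the C
  ! callback ABI; integer(c_int) keeps len interoperable.
  subroutine my_user_function(invec, inoutvec, len, type) bind(c)
    use, intrinsic :: iso_c_binding, only : c_ptr, c_int, c_f_pointer
    use mpi_f08
    type(c_ptr), value :: invec, inoutvec
    integer(c_int) :: len
    type(MPI_Datatype) :: type
    real, pointer :: invec_r(:), inoutvec_r(:)
    if (type%MPI_VAL == MPI_REAL%MPI_VAL) then
       call c_f_pointer(invec, invec_r, [len])
       call c_f_pointer(inoutvec, inoutvec_r, [len])
       inoutvec_r = invec_r + inoutvec_r
    end if
  end subroutine
end module
```

If Intel MPI's module instead declares the callback with explicit-shape array arguments (as the older mpi module does), matching that declaration rather than adding BIND(C) would be the way to silence error #7061.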
