Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Issue with MPI_Iallreduce and MPI_IN_PLACE

aidan_c_
Beginner

Hi, 

I'm having some issues using MPI_Iallreduce with MPI_IN_PLACE from Fortran (I haven't tested with C at this point), and I'm unclear whether I'm doing something wrong with respect to the standard. I've written a small program that reproduces the issue:

Program Test
  Use mpi
  Implicit None
  Integer, Dimension(0:19) :: test1, test2
  Integer :: i, request, ierr, rank
  Logical :: complete
  Integer :: status(MPI_STATUS_SIZE)

  Call MPI_Init(ierr)

  ! Fill the buffer with 0..19 on every rank
  do i = 0, 19
    test1(i) = i
  end do

  ! Non-blocking all-reduce, intended to sum test1 in place across ranks
  Call MPI_Iallreduce(test1, MPI_IN_PLACE, 20, MPI_INT, MPI_SUM, MPI_COMM_WORLD, request, ierr)
  if (ierr /= MPI_SUCCESS) print *, "failed"

  Call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (ierr /= MPI_SUCCESS) print *, "failed"

  Call MPI_Wait(request, status, ierr)
  if (ierr /= MPI_SUCCESS) print *, "failed"

  ! Print the result, one rank at a time
  do i = 0, 1
    if (rank == i) print *, rank, test1
    Call MPI_Barrier(MPI_COMM_WORLD, ierr)
  end do

  Call MPI_Finalize(ierr)
End Program Test

I've run it with 2 ranks using this MPI version:

bash-4.1$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2017 Update 2 Build 20170125 (id: 16752)
Copyright (C) 2003-2017, Intel Corporation. All rights reserved.

and the output is not what I expected (the rank number followed by 0, 2, 4, ...), but instead:

           0           0           1           2           3           4
           5           6           7           8           9          10
          11          12          13          14          15          16
          17          18          19
           1           0           1           2           3           4
           5           6           7           8           9          10
          11          12          13          14          15          16
          17          18          19

i.e. the reduction sum never happens. If I reduce into test2 instead of using MPI_IN_PLACE, the code works correctly (see the call below).
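
The variant that behaves as expected simply passes test2 as the receive buffer, leaving everything else in the program unchanged:

Call MPI_Iallreduce(test1, test2, 20, MPI_INT, MPI_SUM, MPI_COMM_WORLD, request, ierr)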

Am I violating the standard in some way, or is there a workaround?

Thanks

Aidan Chalk

1 Reply
aidan_c_
Beginner

This can be ignored; I had used MPI_IN_PLACE incorrectly. The standard requires MPI_IN_PLACE to be passed as the send buffer (the first argument) of MPI_Iallreduce, in which case the receive buffer supplies the input data and is overwritten with the result. In my code above the two arguments are swapped, so the call is erroneous rather than an Intel MPI bug.
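
For anyone finding this later, a minimal corrected sketch of the same test (it also uses the Fortran datatype MPI_INTEGER rather than the C datatype MPI_INT):

Program TestInPlace
  Use mpi
  Implicit None
  Integer, Dimension(0:19) :: test1
  Integer :: i, request, ierr, rank

  Call MPI_Init(ierr)
  Call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! Every rank fills the buffer with 0..19
  do i = 0, 19
    test1(i) = i
  end do

  ! In-place non-blocking all-reduce: MPI_IN_PLACE goes in the send-buffer
  ! slot, and test1 (the receive buffer) supplies the input and receives
  ! the element-wise sum.
  Call MPI_Iallreduce(MPI_IN_PLACE, test1, 20, MPI_INTEGER, MPI_SUM, &
                      MPI_COMM_WORLD, request, ierr)
  Call MPI_Wait(request, MPI_STATUS_IGNORE, ierr)

  ! With 2 ranks, every element is doubled: 0, 2, 4, ..., 38
  print *, rank, test1

  Call MPI_Finalize(ierr)
End Program TestInPlace

With 2 ranks this prints 0, 2, 4, ..., 38 on each rank, which is the output I originally expected.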
