Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel MPI failure from MPI_SEND MPI_RECEIVE

dingjun_chencmgl_ca

Hi James,

I am testing Intel MPI on my computer, and I do not know why the following MPI sample code fails to run. After I made a minor revision to stop process 0 from sending a message to itself, everything works well. Why is process 0 not allowed to send a message to itself in Intel MPI and then receive it itself? I look forward to hearing from you. Thanks.

Dingjun


Please see the original sample code:

      program vector
      include 'mpif.h'

      integer SIZE
      parameter(SIZE=4)
      integer numtasks, rank, source, dest, tag, i, ierr
      real*4 a(0:SIZE-1,0:SIZE-1), b(0:SIZE-1)
      integer stat(MPI_STATUS_SIZE), rowtype

C  Fortran stores this array in column major order
      data a  /1.0, 2.0, 3.0, 4.0,
     &         5.0, 6.0, 7.0, 8.0,
     &         9.0, 10.0, 11.0, 12.0,
     &         13.0, 14.0, 15.0, 16.0 /

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

      call MPI_TYPE_VECTOR(SIZE, 1, SIZE, MPI_REAL, rowtype, ierr)
      call MPI_TYPE_COMMIT(rowtype, ierr)

      tag = 1
      if (numtasks .eq. SIZE) then
         if (rank .eq. 0) then
            do 10 i=0, numtasks-1
               call MPI_SEND(a(i,0), 1, rowtype, i, tag,
     &                       MPI_COMM_WORLD, ierr)
 10         continue
         endif

         source = 0
         call MPI_RECV(b, SIZE, MPI_REAL, source, tag,
     &                 MPI_COMM_WORLD, stat, ierr)
         print *, 'rank= ', rank, ' b= ', b

      else
         print *, 'Must specify', SIZE, ' processors.  Terminating.'
      endif

      call MPI_TYPE_FREE(rowtype, ierr)
      call MPI_FINALIZE(ierr)

      end


The minor revisions made to the sample code above are shown below:
      program vector
      include 'mpif.h'

      integer SIZE
      parameter(SIZE=4)
      integer numtasks, rank, source, dest, tag, i, j, ierr
      real*4 a(0:SIZE-1,0:SIZE-1), b(0:SIZE-1)
      integer stat(MPI_STATUS_SIZE), rowtype, avalue

C  Fortran stores this array in column major order
!     data a  /1.0, 2.0, 3.0, 4.0,
!    &         5.0, 6.0, 7.0, 8.0,
!    &         9.0, 10.0, 11.0, 12.0,
!    &         13.0, 14.0, 15.0, 16.0 /

C  Fill the array with a loop instead of the data statement
      avalue = 0
      do i=0,3
         do j=0,3
            avalue = avalue + 1
            a(i,j) = avalue
         end do
      end do

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

      call MPI_TYPE_VECTOR(SIZE, 1, SIZE, MPI_REAL, rowtype, ierr)
      call MPI_TYPE_COMMIT(rowtype, ierr)

      tag = 1
      if (numtasks .eq. SIZE) then
         if (rank .eq. 0) then
C           Rank 0 copies its own row locally instead of sending
C           a message to itself (array stored in column order)
            do i=0,3
               b(i) = a(0,i)
            end do
            print *, 'rank= ', rank, ' ', b

            do i=1, numtasks-1
               call MPI_SEND(a(i,0), 1, rowtype, i, tag,
     &                       MPI_COMM_WORLD, ierr)
            end do
         endif

         source = 0
         if (rank .gt. 0) then
            call MPI_RECV(b, SIZE, MPI_REAL, source, tag,
     &                    MPI_COMM_WORLD, stat, ierr)
            print *, 'rank= ', rank, ' b= ', b
         endif

      else
         print *, 'Must specify', SIZE, ' processors.  Terminating.'
      endif

      call MPI_TYPE_FREE(rowtype, ierr)
      call MPI_FINALIZE(ierr)

      end
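
Note that the program expects exactly four ranks. With Intel MPI it can be built and run along these lines (assuming the Intel Fortran compiler wrapper and the default launcher of a standard installation):

   mpiifort vector.f -o vector
   mpirun -n 4 ./vector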



James_T_Intel
Moderator
Hi Dingjun,

MPI_Send is a blocking send. That means the call does not return until the send buffer is safe to reuse, which in general requires that a matching receive has been posted (small messages may be buffered and return immediately, but you cannot rely on that). As such, if a process uses a blocking send to itself before posting the receive, the send can block forever and the receive call is never reached. If your program requires a send/receive within a process, use a non-blocking send or receive as the first call. Consider MPI_Isend, MPI_Ibsend, MPI_Issend, MPI_Irsend, and MPI_Irecv.
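
Here is a minimal sketch of that pattern (a stand-alone illustration, not code from this thread; the program and variable names are made up): each rank posts MPI_IRECV for its own message first, so the blocking MPI_SEND to itself can complete, and MPI_WAIT then finishes the receive.

      program selfsend
      include 'mpif.h'

      integer rank, ierr, req
      integer stat(MPI_STATUS_SIZE)
      real*4 sbuf, rbuf

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

C  Post the receive first (non-blocking).  Because a matching
C  receive is already pending, the blocking send to self below
C  cannot deadlock, whatever the message size or eager threshold.
      sbuf = real(rank)
      call MPI_IRECV(rbuf, 1, MPI_REAL, rank, 1,
     &               MPI_COMM_WORLD, req, ierr)
      call MPI_SEND(sbuf, 1, MPI_REAL, rank, 1,
     &              MPI_COMM_WORLD, ierr)
      call MPI_WAIT(req, stat, ierr)

      print *, 'rank= ', rank, ' received ', rbuf

      call MPI_FINALIZE(ierr)
      end

The same idea applies to your original example: if rank 0 posts an MPI_IRECV for row 0 before its send loop, the loop can start at i=0 again and include rank 0 itself.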

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools