Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

MPI_SEND hangs

Lei1
Beginner

I wrote the following Fortran code on Windows:

DO NBLK_L = 1,NUM_BLK_OF_CPU(mymesh)
   LOC  = BLK_IN_CPU_LOC(mymesh)+NBLK_L-1
   NBLK = BLK_IN_CPU(LOC)
   JB   = INT((NBLK-1)/IBMAX)+1
   IB   = NBLK-1-(JB-1)*IBMAX+1
   IMS  = IMAX(IB,JB)
   JMS  = JMAX(IB,JB)
   !*****************************
   !..Transfer solutions for IS:
   !*****************************
   IF(IB.GT.1) THEN
      IB1 = IB-1
      CPU1= CPU_OF_BLK(IB1,JB)
      IF(CPU1.NE.mymesh) THEN
         IM1= IMAX(IB1,JB)
         JM1= JMAX(IB1,JB)
         !
         !..Send IS of mymesh block to CPU1 block:
         !
         NPTS = 2*JMS
         CALL MPI_SEND(NPTS,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK_L+10*mymesh+1,MPI_COMM_WORLD,ierr)
         !
         !..Receive IE of CPU1 block:
         !
         NBLK1= LID_OF_BLK(IB1,JB)
         CALL MPI_RECV(NPTS1,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK1+10*CPU1+2,MPI_COMM_WORLD,status,ierr)
      ENDIF
   ENDIF
   !*****************************
   !..Transfer solutions for IE:
   !*****************************
   IF(IB.LT.IBMAX) THEN
      IB1 = IB+1
      CPU1= CPU_OF_BLK(IB1,JB)
      IF(CPU1.NE.mymesh) THEN
         IM1= IMAX(IB1,JB)
         JM1= JMAX(IB1,JB)
         !
         !..Send IE of mymesh block to CPU1 block:
         !
         NPTS = 2*JMS
         CALL MPI_SEND(NPTS,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK_L+10*mymesh+2,MPI_COMM_WORLD,ierr)
         !
         !..Receive IS of CPU1 block:
         !
         NBLK1= LID_OF_BLK(IB1,JB)
         CALL MPI_RECV(NPTS1,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK1+10*CPU1+1,MPI_COMM_WORLD,status,ierr)
      ENDIF
   ENDIF
   !*****************************
   !..Transfer solutions for JS:
   !*****************************
   IF(JB.GT.1) THEN
      JB1 = JB-1
      CPU1= CPU_OF_BLK(IB,JB1)
      IF(CPU1.NE.mymesh) THEN
         IM1= IMAX(IB,JB1)
         JM1= JMAX(IB,JB1)
         !
         !..Send JS of mymesh block to CPU1 block:
         !
         NPTS = 2*IMS
         CALL MPI_SSEND(NPTS,1,MPI_INTEGER,CPU1-1, &
                        100*NBLK_L+10*mymesh+3,MPI_COMM_WORLD)
         !
         !..Receive JE of CPU1 block:
         !
         NBLK1= LID_OF_BLK(IB,JB1)
         CALL MPI_RECV(NPTS1,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK1+10*CPU1+4,MPI_COMM_WORLD,status)
      ENDIF
   ENDIF
   !*****************************
   !..Transfer solutions for JE:
   !*****************************
   IF(JB.LT.JBMAX) THEN
      JB1 = JB+1
      CPU1= CPU_OF_BLK(IB,JB1)
      IF(CPU1.NE.mymesh) THEN
         IM1= IMAX(IB,JB1)
         JM1= JMAX(IB,JB1)
         !
         !..Send JE of mymesh block to CPU1 block:
         !
         NPTS = 2*IMS
         CALL MPI_SSEND(NPTS,1,MPI_INTEGER,CPU1-1, &
                        100*NBLK_L+10*mymesh+4,MPI_COMM_WORLD)
         !
         !..Receive JS of CPU1 block
         !
         NBLK1= LID_OF_BLK(IB,JB1)
         CALL MPI_RECV(NPTS1,1,MPI_INTEGER,CPU1-1, &
                       100*NBLK1+10*CPU1+3,MPI_COMM_WORLD,status)
      ENDIF
   ENDIF
ENDDO

The code worked before I added the JS and JE portions. After I added the MPI communication for the J-direction, however, the code hangs forever. I am using Intel Cluster Studio XE 2013 SP1 Update 1 for Windows.
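
Stripped of the block bookkeeping, the exchange pattern boils down to the sketch below (a simplified, self-contained illustration, not the production code): two neighbouring ranks each call a blocking send before posting the matching receive.

! Minimal sketch of the exchange pattern above (illustration only):
! run on exactly 2 ranks.  Both ranks call MPI_SSEND first, and since
! a synchronous send cannot complete until the matching receive is
! posted on the other rank, both ranks block forever.
PROGRAM EXCHANGE_SKETCH
   USE MPI
   IMPLICIT NONE
   INTEGER :: myrank, other, npts, npts1, ierr
   INTEGER :: status(MPI_STATUS_SIZE)

   CALL MPI_INIT(ierr)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
   other = 1 - myrank              ! partner rank (2 ranks assumed)
   npts  = 100 + myrank            ! stand-in for the real payload

   ! Send first, receive second on BOTH ranks -> deadlock.
   CALL MPI_SSEND(npts, 1, MPI_INTEGER, other, 0, MPI_COMM_WORLD, ierr)
   CALL MPI_RECV(npts1, 1, MPI_INTEGER, other, 0, MPI_COMM_WORLD, status, ierr)

   CALL MPI_FINALIZE(ierr)
END PROGRAM EXCHANGE_SKETCH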

Any suggestion is highly appreciated.
PrasanthD_intel
Moderator

Hi Lei,


Thanks for reaching out to us.

The hang may be due to the usage of MPI_SSEND. However, we were not able to reproduce it with the versions we tested.
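
MPI_SSEND is a synchronous send: unlike MPI_SEND, it never completes until the matching receive has been posted on the destination rank, so when two neighbouring ranks both call MPI_SSEND before either posts its receive, both block forever. Also note that the MPI_SSEND and J-direction MPI_RECV calls in your code omit the trailing ierr argument expected by the Fortran bindings. One common restructuring is to fold each send/receive pair into a single MPI_SENDRECV, which the library can order safely. A rough sketch for the JS exchange, reusing the names from your code (untested):

! Sketch only (untested): replace the MPI_SSEND/MPI_RECV pair of the
! JS exchange with one MPI_SENDRECV, which cannot deadlock against
! the matching call on the neighbouring rank.  NPTS, NPTS1, CPU1,
! NBLK_L, NBLK1, mymesh, status, and ierr are the variables from the
! code above.
CALL MPI_SENDRECV(NPTS,  1, MPI_INTEGER, CPU1-1, 100*NBLK_L+10*mymesh+3, &
                  NPTS1, 1, MPI_INTEGER, CPU1-1, 100*NBLK1+10*CPU1+4,    &
                  MPI_COMM_WORLD, status, ierr)

The JE, IS, and IE exchanges could be folded the same way with their respective tags.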

Intel Cluster Studio XE 2013 SP1 is old and no longer supported. For a list of supported versions, refer to this article: Intel® Parallel Studio XE & Intel® oneAPI Toolkits...

If you can, please upgrade your Parallel Studio/MPI version.


Regards

Prasanth


PrasanthD_intel
Moderator

Hi Lei,


We haven't heard back from you.

Let us know if you still face the issue after upgrading to the latest version.


Regards

Prasanth


PrasanthD_intel
Moderator

Hi Lei,


We are closing this thread assuming your issue has been resolved. We will no longer respond to this thread. If you require additional assistance from Intel, please start a new thread. Any further interaction in this thread will be considered community only.


Regards

Prasanth

