Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

mpi FORTRAN program hanging at mpi_recv

galli_m
New Contributor I

I am just starting out with MPI.
Here is part of a Fortran program I have.

CCCCCCCCCCCCCCCCCCCCCCCCCCCC

c Laplace calculation - tasks divided on all processors

    1 nt=1

      if (my_rank.eq.0) then
         np=mp
      else
         np=my_rank
      endif

      nts=ntasks(np,nt)
      nte=ntaske(np,nt)
      do 55 m=nts,nte
         nod14=nod14t(m,nt)
         i=nim(nod14)
         j=njm(nod14)
         nbcv=nbc(i,j)
         if (nbcv.eq.0) then
            te=t(i+1,j)
            tw=t(i-1,j)
            tn=t(i,j+1)
            ts=t(i,j-1)
            t(i,j)=0.25d0*(te+tw+tn+ts)
            go to 50
         endif
         if (nbcv.eq.2) then
            t(i,j)=t(i-1,j)
         endif
   50    if (my_rank.eq.0) go to 55
         call mpi_send(t(i,j),1,MPI_DOUBLE_PRECISION,0,1,
     *                 MPI_COMM_WORLD,ierr)              ! send to master
   55 continue

      write(*,*) 'my_rank=',my_rank

      IF (my_rank.eq.0) THEN
         do 62 nproc=1,nsize-1
            nts=ntasks(nproc,nt)
            nte=ntaske(nproc,nt)
            do 60 m=nts,nte
               nod14=nod14t(m,nt)
               i=nim(nod14)
               j=njm(nod14)

               write(*,*) nproc,m,i,j

               call mpi_recv(t(i,j),1,MPI_DOUBLE_PRECISION,nproc,1,
     *                     MPI_COMM_WORLD,status,ierr)   ! master receive

               write(*,*) nproc,m,i,j

   60       continue

            write(*,*) 'nproc=',nproc

   62    continue
      endif

CCCCCCCCCCCCCCCCCCCCCCCCCCCC

The output is as follows...

OOOOOOOOOOOOOOOOOOOOOOOOOOOO

>mpiexec -np 12 laplace
my_rank= 3
my_rank= 2
my_rank= 7
my_rank= 1
my_rank= 10
my_rank= 11
my_rank= 5
my_rank= 9
my_rank= 0
1 1 1 1
1 1 1 1
1 2 3 1
my_rank= 4
my_rank= 8
my_rank= 6

OOOOOOOOOOOOOOOOOOOOOOOOOOOO

The program goes through all the calculations and mpi_send
calls, but it hangs after the second call to mpi_recv. I have
been trying to figure this out through a long search,
but I am still missing something fundamental here.
Your help would be appreciated.
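
For reference, here is a minimal, self-contained sketch of the blocking
send/receive pattern my loops rely on. The per-rank count nvals and the
values sent are placeholders (not taken from the attached code); the
point is only that rank 0 posts one mpi_recv, with a matching source
and tag, for every mpi_send it expects from each worker.

c     Sketch only: placeholder count nvals, placeholder values.
      program sketch
      implicit none
      include 'mpif.h'
      integer ierr, my_rank, nsize, nproc, m, nvals
      integer status(MPI_STATUS_SIZE)
      double precision val
      parameter (nvals=3)

      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, my_rank, ierr)
      call mpi_comm_size(MPI_COMM_WORLD, nsize, ierr)

      if (my_rank.ne.0) then
c        worker: one blocking send per value, destination 0, tag 1
         do 10 m=1,nvals
            val=dble(my_rank*100+m)
            call mpi_send(val,1,MPI_DOUBLE_PRECISION,0,1,
     *                    MPI_COMM_WORLD,ierr)
   10    continue
      else
c        master: post the same number of receives per worker, with a
c        matching source and tag
         do 30 nproc=1,nsize-1
            do 20 m=1,nvals
               call mpi_recv(val,1,MPI_DOUBLE_PRECISION,nproc,1,
     *                       MPI_COMM_WORLD,status,ierr)
               write(*,*) 'from rank',nproc,' value',val
   20       continue
   30    continue
      endif

      call mpi_finalize(ierr)
      end

If rank 0 posts a receive for which no matching send ever arrives, it
blocks in mpi_recv, which is the same symptom I am seeing.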

 

6 Replies
ShivaniK_Intel
Moderator

Hi,


Thanks for reaching out to us.


If possible, could you please provide the complete reproducer code so that we can investigate your issue further?


Also, please share the following details:

OS version

MPI version


Thanks & Regards

Shivani


galli_m
New Contributor I

The full code is attached. This simple code is a
development step toward eventually converting a 3D CFD
code from serial to parallel.


I'm running this on Windows 10 Pro on an Intel Core
i5-10600K (6-core, 12-thread) desktop. I am using
Visual Studio with the Intel oneAPI Base and HPC
toolkits...
Thank you for your help.

 

For the problem at hand: I know that if I were transferring
text between processors, I would first write it into a
character buffer,
write(greeting,*) ...
and then send and receive 'greeting' back and forth:
call mpi_recv(greeting,...
call mpi_send(greeting,...
This makes the master wait to receive messages
from the other processors. How do I do the same thing in the
current code?
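
For reference, a stripped-down version of the 'greeting' exchange I am
describing looks roughly like this (the 80-character buffer and the tag
value 2 are just placeholders):

c     Sketch only: placeholder buffer length and tag.
      program greet
      implicit none
      include 'mpif.h'
      integer ierr, my_rank, nsize, nproc
      integer status(MPI_STATUS_SIZE)
      character*80 greeting

      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, my_rank, ierr)
      call mpi_comm_size(MPI_COMM_WORLD, nsize, ierr)

      if (my_rank.ne.0) then
c        worker: build the text, then send the whole buffer with tag 2
         write(greeting,*) 'greetings from rank ', my_rank
         call mpi_send(greeting,80,MPI_CHARACTER,0,2,
     *                 MPI_COMM_WORLD,ierr)
      else
c        master: one receive per worker; mpi_recv blocks until the
c        matching send arrives, which is what makes the master wait
         do 10 nproc=1,nsize-1
            call mpi_recv(greeting,80,MPI_CHARACTER,nproc,2,
     *                    MPI_COMM_WORLD,status,ierr)
            write(*,*) greeting
   10    continue
      endif

      call mpi_finalize(ierr)
      end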

 

galli_m
New Contributor I

I believe I resolved this issue...

I broadcast all required user inputs prior to the calculation section.

Now I sweep through all 'tasks' (nt=1-4).
After the program completes the first iteration (and executes "go to 1"),
it idles again.

I cannot figure out why...
The latest code is attached.
Thanks again.
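
For reference, the broadcast step looks roughly like this (nx, ny, and
tol are placeholder input variables, not the names used in the attached
code; every rank has to make the same mpi_bcast calls):

c     Sketch only: placeholder input variables nx, ny, tol.
      program inputs
      implicit none
      include 'mpif.h'
      integer ierr, my_rank, nx, ny
      double precision tol

      call mpi_init(ierr)
      call mpi_comm_rank(MPI_COMM_WORLD, my_rank, ierr)

c     only rank 0 has the user inputs
      if (my_rank.eq.0) then
         nx=100
         ny=100
         tol=1.0d-6
      endif

c     every rank calls mpi_bcast; afterwards all ranks hold the values
      call mpi_bcast(nx,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
      call mpi_bcast(ny,1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)
      call mpi_bcast(tol,1,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)

      write(*,*) 'rank',my_rank,' has nx,ny,tol =',nx,ny,tol

      call mpi_finalize(ierr)
      end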

ShivaniK_Intel
Moderator

Hi,


We are glad that your issue has been resolved.


Could you please let us know if there is anything else that we can help you with? If not, could you please confirm that we can close this thread?


Thanks & Regards

Shivani


galli_m
New Contributor I

Please close this thread.

Thank you.

ShivaniK_Intel
Moderator

Hi,


Thanks for the confirmation!


As this issue has been resolved, we will no longer respond to this thread.

If you require any additional assistance from Intel, please start a new thread.

Any further interaction in this thread will be considered community only.

Have a good day.


Thanks & Regards

Shivani

