Hi,
I've recently started programming with the Intel MPI library and the Intel Visual Fortran Compiler for Windows. I've run some test examples to check the link between the MPI library and the ifort compiler. Things were going very smoothly until I used the MPI_IPROBE subroutine to verify the work done by an MPI_BCAST subroutine. Here is the code of the test example:
program main
   use mpi
   implicit none
   integer, dimension(MPI_STATUS_SIZE) :: statut
   integer, parameter :: etiquette = 100
   integer :: rang, value, code, nbre_processus, IERROR
   logical :: flag

   call MPI_INIT(code)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nbre_processus, code)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rang, code)

   ! Only rank 1 initializes the value to broadcast
   if (rang == 1) then
      value = 1000 + rang
   end if

   ! Broadcast value from root rank 1 to every rank
   call MPI_BCAST(value, 1, MPI_INTEGER, 1, MPI_COMM_WORLD, IERROR)

   ! Non-blocking check for a pending message
   call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, flag, statut, IERROR)
   if (flag) then
      print *, "I, process ", rang, ", have received ", value, " from process 1."
   else
      print *, "I, process ", rang, ", have received nothing"
   end if

   call MPI_FINALIZE(code)
end program main
What I find bizarre is that MPI_IPROBE returns FALSE while the processes actually do receive the message (value). I confirmed that the processes receive the message by running the following two versions of the code:
VERSION 1: I simply reversed the condition on flag so that the processes print the value they received.
if (rang == 1) then
   value = 1000 + rang
end if

call MPI_BCAST(value, 1, MPI_INTEGER, 1, MPI_COMM_WORLD, IERROR)
call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, flag, statut, IERROR)

! Condition reversed: print the value when flag is .false.
if (.not. flag) then
   print *, "I, process ", rang, ", have received ", value, " from process 1."
else
   print *, "I, process ", rang, ", have received nothing"
end if
RESULT 1:
I, process 1 , have received 1001 from process 1.
I, process 0 , have received 1001 from process 1.
I, process 6 , have received 1001 from process 1.
I, process 7 , have received 1001 from process 1.
I, process 4 , have received 1001 from process 1.
I, process 5 , have received 1001 from process 1.
I, process 3 , have received 1001 from process 1.
I, process 2 , have received 1001 from process 1.
VERSION 2: Here, I commented out the MPI_BCAST call to see the value printed by the other processes.
if (rang == 1) then
   value = 1000 + rang
end if

! MPI_BCAST disabled: only rank 1 has an initialized value
! call MPI_BCAST(value, 1, MPI_INTEGER, 1, MPI_COMM_WORLD, IERROR)
call MPI_IPROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, flag, statut, IERROR)

if (.not. flag) then
   print *, "I, process ", rang, ", have received ", value, " from process 1."
else
   print *, "I, process ", rang, ", have received nothing"
end if
RESULT 2:
I, process 3 , have received -858993460 from process 1.
I, process 5 , have received -858993460 from process 1.
I, process 6 , have received -858993460 from process 1.
I, process 1 , have received 1001 from process 1.
I, process 4 , have received -858993460 from process 1.
I, process 2 , have received -858993460 from process 1.
I, process 0 , have received -858993460 from process 1.
I, process 7 , have received -858993460 from process 1.
So what I don't really understand is why MPI_IPROBE returns FALSE when MPI_BCAST is clearly doing its work. Can you please help me understand this contradiction?
Thanks,
Questions about MPI are probably better posed on a forum specific to the MPI implementation. For Intel MPI, this would be the companion HPC and cluster forum.
Note that web searches on MPI_Probe will turn up information on how it is not expected to work with collectives.
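To illustrate the distinction, here is a minimal sketch (not from the original post; the rank and tag values are illustrative) of the pattern MPI_IPROBE is designed for: it only matches pending point-to-point messages, such as those from MPI_SEND, and because it is non-blocking it must be polled until the message has actually arrived.

program probe_p2p
   use mpi
   implicit none
   integer :: rang, value, code, statut(MPI_STATUS_SIZE)
   logical :: flag

   call MPI_INIT(code)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rang, code)

   if (rang == 1) then
      ! Point-to-point send: this is the kind of message a probe can see
      value = 1001
      call MPI_SEND(value, 1, MPI_INTEGER, 0, 100, MPI_COMM_WORLD, code)
   else if (rang == 0) then
      ! MPI_IPROBE is non-blocking: poll until the message has arrived
      flag = .false.
      do while (.not. flag)
         call MPI_IPROBE(1, 100, MPI_COMM_WORLD, flag, statut, code)
      end do
      call MPI_RECV(value, 1, MPI_INTEGER, 1, 100, MPI_COMM_WORLD, statut, code)
      print *, "I, process ", rang, ", have received ", value, " from process 1."
   end if

   call MPI_FINALIZE(code)
end program probe_p2p

The internal traffic generated by MPI_BCAST is invisible to probes, which is why flag stays FALSE in the original example even though the broadcast itself succeeds.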
Thanks for your comment. Yes, in fact, I think the problem is due to using MPI_IPROBE with a collective communication. I didn't pay attention to that. Thanks a lot for your help.
Unfortunately, I've discovered that I have the same problem with point-to-point communication (with the MPI_SEND and MPI_RECV subroutines)!
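One plausible explanation for the point-to-point case, sketched below under the assumption that the probe runs shortly after the send: MPI_IPROBE is non-blocking and returns immediately, so a single call can legitimately report FALSE when the matching MPI_SEND simply has not arrived yet. Its blocking counterpart MPI_PROBE waits until a matching message is pending, so there is no such race:

program probe_blocking
   use mpi
   implicit none
   integer :: rang, value, code, statut(MPI_STATUS_SIZE)

   call MPI_INIT(code)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rang, code)

   if (rang == 1) then
      value = 1001
      call MPI_SEND(value, 1, MPI_INTEGER, 0, 100, MPI_COMM_WORLD, code)
   else if (rang == 0) then
      ! MPI_PROBE blocks until a matching message is pending
      call MPI_PROBE(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, statut, code)
      ! Receive from whichever source/tag the probe reported
      call MPI_RECV(value, 1, MPI_INTEGER, statut(MPI_SOURCE), statut(MPI_TAG), &
                    MPI_COMM_WORLD, statut, code)
      print *, "I, process ", rang, ", have received ", value
   end if

   call MPI_FINALIZE(code)
end program probe_blocking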