Intel® MPI Library

The matched MPI Send/Recv pairs are mixed up

seongyun_k_
Beginner

Hi,

Each MPI process has a dedicated thread for its send/receive operations (I use MPI_Send/MPI_Recv calls). Because the data is too big to send at once, each thread iterates over a for loop, sending a fixed-size piece of the data (a chunk) at a time.
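Roughly, the two threads do something like this (a minimal sketch; the chunk size, chunk count, tag, and function names are placeholders, not my actual code):

#include <mpi.h>
#include <stddef.h>

#define CHUNK_SIZE (1 << 20)   /* elements per chunk (placeholder value) */
#define NUM_CHUNKS 16          /* number of chunks (placeholder value)   */
#define TAG        0           /* every chunk uses the same tag          */

/* Run by the sender thread on rank 0 (assumes MPI was initialized with
 * thread support, e.g. MPI_THREAD_MULTIPLE). */
static void send_loop(const double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Send(buf + (size_t)i * CHUNK_SIZE, CHUNK_SIZE, MPI_DOUBLE,
                 1 /* dest rank */, TAG, MPI_COMM_WORLD);
}

/* Run by the receiver thread on rank 1. */
static void recv_loop(double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Recv(buf + (size_t)i * CHUNK_SIZE, CHUNK_SIZE, MPI_DOUBLE,
                 0 /* source rank */, TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}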

My Expectation:

 SendThread (rank 0)      ReceiveThread (rank 1)
         chunk[i]    ->        chunk[i]
         chunk[i+1]  ->        chunk[i+1]
                     ...

What actually happened is this:

 SendThread (rank 0)      ReceiveThread (rank 1)
         chunk[i]    ->        chunk[i+1]
         chunk[i+1]  ->        chunk[i]
                     ...

The data of chunk[i] on the sender side ended up in the buffer for chunk[i+1] on the receiver side, and vice versa.

How can this happen? AFAIK, an MPI_Send/MPI_Recv call is a blocking call that cannot complete until there is a matching call (Send <---> Recv).

Does it mean that sending chunk[i] was matched with receiving chunk[i], but when the data was actually delivered it ended up in the wrong place?

Is this expected behavior? Thanks,

Judith_W_Intel
Employee

This is the C++ compiler forum.

I think this question would be more appropriate here:

https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology

James_T_Intel
Moderator

You're a little off on how MPI_Send and MPI_Recv work.  By default, MPI_Send only needs to copy the data (and a few other things) somewhere safe before returning, and different implementations can handle this in different ways.  One variant of MPI_Send is MPI_Ssend.  This is a synchronous send, and it will not return until the matching MPI_Recv has been posted.  That synchronous behavior is not forced for MPI_Send in the Intel® MPI Library: Intel MPI uses a buffered send, which copies the data to an internal buffer for communication later and returns once the data is safe for a later MPI_Recv call.  If you need to force synchronous behavior, use MPI_Ssend instead of MPI_Send.
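For illustration, switching to a synchronous send is only a change of the call name; a minimal sketch (the chunk size, count, and tag are placeholder values):

#include <mpi.h>
#include <stddef.h>

#define CHUNK_SIZE (1 << 20)   /* placeholder */
#define NUM_CHUNKS 16          /* placeholder */
#define TAG        0

/* MPI_Ssend takes the same arguments as MPI_Send, but it does not return
 * until the matching receive has been posted on the other rank. */
static void ssend_loop(const double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Ssend(buf + (size_t)i * CHUNK_SIZE, CHUNK_SIZE, MPI_DOUBLE,
                  1 /* dest rank */, TAG, MPI_COMM_WORLD);
}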

However, for performance reasons, I recommend sticking with MPI_Send and using tags to differentiate each message.  If you assign a unique tag for each send/receive pair, this will force the pairs to match.
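A minimal sketch of the tagged approach (the constants and function names are placeholders): using the loop index as the tag gives every send/receive pair a unique match.

#include <mpi.h>
#include <stddef.h>

#define CHUNK_SIZE (1 << 20)   /* placeholder */
#define NUM_CHUNKS 16          /* placeholder */

/* Sender thread on rank 0: tag each chunk with its index. */
static void send_loop_tagged(const double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Send(buf + (size_t)i * CHUNK_SIZE, CHUNK_SIZE, MPI_DOUBLE,
                 1 /* dest rank */, i /* unique tag per chunk */, MPI_COMM_WORLD);
}

/* Receiver thread on rank 1: the tag must match the sender's, so chunk i
 * can only land in the buffer for chunk i. */
static void recv_loop_tagged(double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Recv(buf + (size_t)i * CHUNK_SIZE, CHUNK_SIZE, MPI_DOUBLE,
                 0 /* source rank */, i /* must match sender's tag */,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}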
