Hi,
Each MPI process has a dedicated thread for each send/receive operation (I use MPI_Send/MPI_Recv calls). Because the data is too large to send at once, each thread iterates over a for loop, sending a fixed-size piece of the data (a chunk) at a time.
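Roughly, each thread's loop looks like the sketch below (buffer names, the chunk size, and the single tag value are assumptions for illustration, not my exact code):

```c
#include <mpi.h>

#define CHUNK_COUNT 1048576   /* elements per chunk (assumed)     */
#define NUM_CHUNKS  64        /* total number of chunks (assumed) */

/* Runs in a dedicated thread on rank 0. */
void send_chunks(const double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Send(buf + (size_t)i * CHUNK_COUNT, CHUNK_COUNT, MPI_DOUBLE,
                 1 /* dest */, 0 /* same tag every iteration */,
                 MPI_COMM_WORLD);
}

/* Runs in a dedicated thread on rank 1. */
void recv_chunks(double *buf)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        MPI_Recv(buf + (size_t)i * CHUNK_COUNT, CHUNK_COUNT, MPI_DOUBLE,
                 0 /* src */, 0 /* same tag every iteration */,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}
```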
My expectation:

    SendThread (rank 0)          ReceiveThread (rank 1)
    chunk[i]        ------->     chunk[i]
    chunk[i+1]      ------->     chunk[i+1]
    ...

What actually happened:

    SendThread (rank 0)          ReceiveThread (rank 1)
    chunk[i]        ------->     chunk[i+1]
    chunk[i+1]      ------->     chunk[i]
    ...
In other words, the data of chunk[i] on the sender side ended up in the buffer for chunk[i+1] on the receiver side, and vice versa.
How can this happen? AFAIK, MPI_Send/MPI_Recv are blocking calls that cannot complete unless there is a matching call on the other side (Send <---> Recv).
Does this mean that the send for chunk[i] was matched with the receive for chunk[i], but the data got mixed up while it was actually being delivered?
Is this expected behavior? Thanks,
This is the C++ compiler forum.
I think this question would be more appropriate here:
https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology
You're a little off on how MPI_Send and MPI_Recv work. By default, MPI_Send only needs to copy the data (and a few other things) somewhere safe before returning, and different implementations can handle this in different ways. One variant of MPI_Send is MPI_Ssend. This is a synchronous send, and it will not return until the matching MPI_Recv has been posted. That synchronous behavior is not forced in the Intel® MPI Library: Intel MPI uses a buffered send, which copies the data into a buffer for communication later and returns once the data is ready for a later MPI_Recv call. If you need to force synchronous behavior, use MPI_Ssend instead of MPI_Send.
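Forcing the synchronous behavior is a one-line change per send. A minimal sketch follows; the buffer, count, destination rank, and tag are placeholders, not values from the original post:

```c
/* Sketch: a synchronous send. MPI_Ssend takes the same arguments as
 * MPI_Send, but it returns only after the matching MPI_Recv has been
 * posted on the other side. Names and values here are placeholders. */
MPI_Ssend(chunk_buf,      /* data for this chunk  */
          chunk_count,    /* number of elements   */
          MPI_DOUBLE,     /* element datatype     */
          1,              /* destination rank     */
          0,              /* message tag          */
          MPI_COMM_WORLD);
```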
However, for performance reasons, I recommend sticking with MPI_Send and using tags to differentiate each message. If you assign a unique tag for each send/receive pair, this will force the pairs to match.
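As a sketch of that approach (buffer layout, counts, and ranks are assumptions, not your original code), using the chunk index as the tag ties each send to exactly one receive, even when several messages are in flight at once:

```c
/* Sketch: one tag per chunk, so chunk i on the sender can only match
 * the receive posted with tag i on the receiver. Layout is assumed. */

/* sender side (rank 0) */
for (int i = 0; i < num_chunks; ++i)
    MPI_Send(buf + (size_t)i * chunk_count, chunk_count, MPI_DOUBLE,
             1, /* tag = */ i, MPI_COMM_WORLD);

/* receiver side (rank 1) */
for (int i = 0; i < num_chunks; ++i)
    MPI_Recv(buf + (size_t)i * chunk_count, chunk_count, MPI_DOUBLE,
             0, /* tag = */ i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
```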