I'm noticing some strange behaviour with a very simple piece of MPI code:
#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[]) {
    // Initialize the MPI environment
    MPI_Init(NULL, NULL);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // We are assuming exactly 2 processes for this task
    if (world_size != 2) {
        std::cout << "World size must be equal to 2" << std::endl;
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int numberCounter = 10000;
    double number[numberCounter];

    if (world_rank == 0) {
        std::cout << world_rank << std::endl;
        MPI_Send(number, numberCounter, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        std::cout << world_rank << std::endl;
        MPI_Recv(number, numberCounter, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
}
The above works fine provided that numberCounter is small (~1000). When the value is larger (>10000), however, the code hangs and never reaches the end. I checked with MPI_Iprobe, and it does show that rank 1 is receiving a message, but MPI_Recv always hangs.
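For reference, the MPI_Iprobe check on rank 1 looked roughly like this (a minimal sketch rather than the exact code I ran; the polling loop and output message are illustrative):

int flag = 0;
MPI_Status status;
// Poll until the message from rank 0 (tag 0) becomes visible to rank 1
while (!flag) {
    MPI_Iprobe(0, 0, MPI_COMM_WORLD, &flag, &status);
}
int count = 0;
MPI_Get_count(&status, MPI_DOUBLE, &count);
std::cout << "Message visible, " << count << " doubles pending" << std::endl;
// The matching MPI_Recv after this point is where the hang occurs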
What could be causing this? Can anyone else reproduce this behaviour?
[EDIT] I'm not sure how I missed this, but this seems to be a duplicate of https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/607259 [/EDIT]