Dear all, I have started using MPI for a simple data decomposition of a 2-D domain. Assume I am using 2 compute nodes, each with 8 processors. I want messages to pass only between the two nodes, while inside each node all processors access their shared memory directly.
After calling MPI_Comm_rank and receiving ranks 0-15, how can I tell which compute node a given rank belongs to? Do ranks 0 to 7 belong to compute node 1 and ranks 8 to 15 to compute node 2?
By the way, the machine runs Windows Compute Cluster Pack with the MS-MPI library. Is there a way to find the rank of each processor on each node when multiple nodes are used?
Thanks and regards
skiff
1 Reply
Some of your questions are specific to MS-MPI, which you should look up in its documentation, and some are general MPI questions, for which you should start with the MPI references. For example, you might be interested in MPI_Comm_rank(). This forum is better suited to specific questions about Intel cluster software tools.
In general, you shouldn't rely on specific rank numbers to override how your MPI implementation handles communications, and the mapping of ranks to nodes is decided by the job scheduler and mpiexec, not guaranteed to be contiguous. Even MS-MPI, in versions about to be released, optimizes intra-node versus inter-node communication, as most other implementations have done for several years. If you wish to go further than that, you may be interested in hybrid OpenMP/MPI. These topics may be overkill if you don't intend to go beyond 2 nodes.
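As a concrete illustration of the portable route (not specific to MS-MPI): every MPI library provides MPI_Get_processor_name(), which returns the host name a rank is running on, so ranks can be grouped by node by comparing names. MPI-3 libraries additionally offer MPI_Comm_split_type() to build one sub-communicator per shared-memory node directly; the Compute Cluster Pack era MS-MPI predates MPI-3, so that part is guarded. This is a minimal sketch, not a definitive recipe:

```c
/* Sketch: two portable ways to learn which node an MPI rank runs on.
 * Compile with an MPI wrapper (e.g. mpicc) and launch with mpiexec. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, world_size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Available in every MPI version: the host name of the processor
     * this rank runs on.  Ranks with equal names share a node. */
    MPI_Get_processor_name(name, &name_len);
    printf("rank %d of %d runs on node %s\n", world_rank, world_size, name);

#if MPI_VERSION >= 3
    /* MPI-3 only (recent MS-MPI releases support this; the old
     * Compute Cluster Pack library does not): split MPI_COMM_WORLD
     * into one communicator per shared-memory node and read the
     * node-local rank from it. */
    MPI_Comm node_comm;
    int node_rank;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);
    printf("rank %d is local rank %d on its node\n", world_rank, node_rank);
    MPI_Comm_free(&node_comm);
#endif

    MPI_Finalize();
    return 0;
}
```

With 16 ranks over 2 nodes this prints one line per rank; whether ranks 0-7 end up on the first node depends entirely on the process placement chosen at launch, which is exactly why the name or split-type query, not the rank number, should drive any node-local logic.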