Hi,
Running Intel MPI 4.1.3
The user guide states the following about changing the default round-robin mapping:
To change this default behavior, set the number of processes per host by using the -perhost option, and set the total number of processes by using the -n option. See Local Options for details. The first <# of processes> indicated by the -perhost option is executed on the first host; the next <# of processes> is executed on the next host, and so on.
Contrary to this, when I try to run on 2 nodes with I_MPI_DEBUG=4, I see:
[cchang@n0290]$ mpirun -n 4 -perhost 2 ./hello_MPIMP_multinode
[0] MPI startup(): Rank Pid Node name Pin cpu
[0] MPI startup(): 0 54622 n0290 {0,1,2,3,4,5,6,7,8,9,10,11}
[0] MPI startup(): 1 53310 n0289 {0,1,2,3,4,5,6,7,8,9,10,11}
[0] MPI startup(): 2 54623 n0290 {12,13,14,15,16,17,18,19,20,21,22,23}
[0] MPI startup(): 3 53311 n0289 {12,13,14,15,16,17,18,19,20,21,22,23}
Hello world: rank 0 of 4, thread 0 of 1 on n0290
Hello world: rank 2 of 4, thread 0 of 1 on n0290
Hello world: rank 1 of 4, thread 0 of 1 on n0289
Hello world: rank 3 of 4, thread 0 of 1 on n0289
Both the I_MPI_DEBUG output and the test program's own output confirm that round-robin placement is being used.
I can't seem to change this with any combination of -perhost # or -grr #. I can create a machinefile that specifies the number of ranks per node, and that maps as desired (example below), but I'm guessing I should be able to do this directly from the mpirun command line. How is this done?
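For reference, the machinefile workaround looks roughly like this (the hostname:count format is per the reference manual; the file name ./machines is just for illustration):
[cchang@n0290]$ cat ./machines
n0290:2
n0289:2
[cchang@n0290]$ mpirun -machinefile ./machines -n 4 ./hello_MPIMP_multinode
This puts ranks 0-1 on n0290 and ranks 2-3 on n0289, which is what I want.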
Thanks; Chris
Back then there were some flip-flops from version to version in how -perhost worked, so I'm not entirely surprised at your finding. What does surprise me is your desire to rely on a version which was retired before me.
Hi Chris,
As mentioned in the Intel® MPI Library for Linux* OS Reference Manual:
The -machinefile, -ppn, -rr, and -perhost options are intended for process distribution. If used simultaneously, -machinefile takes precedence.
You can try the '-f' or '-hostfile' mpirun options (instead of '-machinefile'), which can work in conjunction with the '-ppn'/'-perhost' options.
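For example, something along these lines should place the first two ranks on the first listed node and the next two on the second (the hostfile lists one node name per line; the file name ./hosts is just an example):
[cchang@n0290]$ cat ./hosts
n0290
n0289
[cchang@n0290]$ mpirun -f ./hosts -perhost 2 -n 4 ./hello_MPIMP_multinode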
Tim--
Not sure I understand. Intel MPI 4.1 update 3 build 049 was released in March 2014. We've put off upgrading to 5.0 because we're leery of putting .0 versions of anything into production.
Artem--
Thanks. I was hoping there was a short and sweet way (not that generating a machinefile is especially onerous) of packing consecutive ranks onto consecutive nodes. The machinefile works fine; I just wondered whether I was misinterpreting the documentation somehow, or whether there was an environment variable we'd missed setting to get the documented behavior.
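For example, I had hoped that something along these lines would do it (assuming I_MPI_PERHOST is the relevant variable; I haven't verified this on our setup):
export I_MPI_PERHOST=2
mpirun -n 4 ./hello_MPIMP_multinode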
Cheers,
Chris
Hi, what does "host" mean in MPI?