<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi Alin, in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923790#M2415</link>
    <description>&lt;P&gt;Hi Alin,&lt;/P&gt;

&lt;P&gt;Using -ppn will not limit the total number of ranks on a host, simply the number of consecutive ranks on each host.&amp;nbsp; If you have too many ranks, the placement will cycle back to the first host and begin again.&amp;nbsp; So if I have a hostfile with two hosts (node0 and node1), here's what I should see:&lt;/P&gt;

&lt;P&gt;[plain]$mpirun -n 4 -ppn 2 ./hello&lt;/P&gt;

&lt;P&gt;Hello world: rank 0 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;1 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;2 of 4 running on node1&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;3 of 4 running on node1&lt;/P&gt;

&lt;P&gt;$mpirun -n 4 -ppn 1 ./hello&lt;/P&gt;

&lt;P&gt;Hello world: rank 0 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;1 of 4 running on node1&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;2 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;3 of 4 running on node1[/plain]&lt;/P&gt;

&lt;P&gt;In your command line, you didn't specify the number of ranks to run.&amp;nbsp; If you don't specify that number, it will be determined from your job (or if that can't be found, then the number of cores available on the host).&amp;nbsp; In this case, your job says to use 40 ranks, so 40 ranks were launched.&lt;/P&gt;

&lt;P&gt;Sincerely,&lt;BR /&gt;
	James Tullos&lt;BR /&gt;
	Technical Consulting Engineer&lt;BR /&gt;
	Intel® Cluster Tools&lt;/P&gt;</description>
    <pubDate>Tue, 03 Dec 2013 21:46:52 GMT</pubDate>
    <dc:creator>James_T_Intel</dc:creator>
    <dc:date>2013-12-03T21:46:52Z</dc:date>
    <item>
      <title>mpiexec.hydra -ppn 1 and intel-mpi 4.1.2.040</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923788#M2413</link>
      <description>&lt;P&gt;I have just installed intel-mpi&amp;nbsp;4.1.2.040 on a cluster...&lt;/P&gt;

&lt;P&gt;If I use mpiexec.hydra to start jobs one per node, it still spawns processes on all available resources...&lt;/P&gt;

&lt;P&gt;mpiexec.hydra -ppn 1 hostname&lt;/P&gt;

&lt;P&gt;on two nodes will show me 40 lines, as opposed to the two expected.&lt;/P&gt;

&lt;P&gt;I have attached a file with the debug info from running:&lt;/P&gt;

&lt;P&gt;I_MPI_HYDRA_DEBUG=1 mpiexec.hydra -ppn 1 hostname 2&amp;gt;&amp;amp;1 | tee debug.txt&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;regards,&lt;/P&gt;

&lt;P&gt;Alin&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2013 13:30:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923788#M2413</guid>
      <dc:creator>Alin_M_Elena</dc:creator>
      <dc:date>2013-11-29T13:30:58Z</dc:date>
    </item>
    <item>
      <title>Forgot to say! Any help in</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923789#M2414</link>
      <description>&lt;P&gt;Forgot to say! Any help in solving the issue or better understanding it much appreciated.&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;regards,&lt;/P&gt;

&lt;P&gt;Alin&lt;/P&gt;</description>
      <pubDate>Fri, 29 Nov 2013 13:42:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923789#M2414</guid>
      <dc:creator>Alin_M_Elena</dc:creator>
      <dc:date>2013-11-29T13:42:00Z</dc:date>
    </item>
    <item>
      <title>Hi Alin,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923790#M2415</link>
      <description>&lt;P&gt;Hi Alin,&lt;/P&gt;

&lt;P&gt;Using -ppn will not limit the total number of ranks on a host, simply the number of consecutive ranks on each host.&amp;nbsp; If you have too many ranks, the placement will cycle back to the first host and begin again.&amp;nbsp; So if I have a hostfile with two hosts (node0 and node1), here's what I should see:&lt;/P&gt;

&lt;P&gt;[plain]$mpirun -n 4 -ppn 2 ./hello&lt;/P&gt;

&lt;P&gt;Hello world: rank 0 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;1 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;2 of 4 running on node1&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;3 of 4 running on node1&lt;/P&gt;

&lt;P&gt;$mpirun -n 4 -ppn 1 ./hello&lt;/P&gt;

&lt;P&gt;Hello world: rank 0 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;1 of 4 running on node1&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;2 of 4 running on node0&lt;/P&gt;

&lt;P&gt;Hello world: rank&amp;nbsp;3 of 4 running on node1[/plain]&lt;/P&gt;

&lt;P&gt;In your command line, you didn't specify the number of ranks to run.&amp;nbsp; If you don't specify that number, it will be determined from your job (or if that can't be found, then the number of cores available on the host).&amp;nbsp; In this case, your job says to use 40 ranks, so 40 ranks were launched.&lt;/P&gt;
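The fill-then-cycle placement described above can be sketched in Python (a hypothetical illustration only, not Intel MPI code; place_ranks is an invented helper):

```python
# Hypothetical sketch (not Intel MPI code) of the -ppn placement rule:
# each host gets ppn consecutive ranks, and placement cycles back to the
# first host once every host has received its block of ppn ranks.
def place_ranks(hosts, n, ppn):
    placement = []
    for rank in range(n):
        host = hosts[(rank // ppn) % len(hosts)]
        placement.append((rank, host))
    return placement

# Matches the two examples above with hosts node0 and node1:
print(place_ranks(["node0", "node1"], 4, 2))
# [(0, 'node0'), (1, 'node0'), (2, 'node1'), (3, 'node1')]
print(place_ranks(["node0", "node1"], 4, 1))
# [(0, 'node0'), (1, 'node1'), (2, 'node0'), (3, 'node1')]
```

Note the integer division by ppn: with -ppn 1 the ranks simply alternate between hosts, exactly as in the second example.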

&lt;P&gt;Sincerely,&lt;BR /&gt;
	James Tullos&lt;BR /&gt;
	Technical Consulting Engineer&lt;BR /&gt;
	Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Tue, 03 Dec 2013 21:46:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923790#M2415</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-12-03T21:46:52Z</dc:date>
    </item>
    <item>
      <title>Hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923791#M2416</link>
      <description>Hi James,

Thank you for your answer. I still cannot reproduce your results with the above approach...
mpiexec.hydra is the one provided by the current version of the intel mpi library,
while mpiexec.hydra.good is from 4.0; as you can see, one gives the correct output and the other does not.
Also, -n was not mandatory in the past, but maybe I missed something in the manual. I have also attached the nodes file.
The same happens when using mpirun.

[alin@service56:~]: mpiexec.hydra -n 4 -ppn 1 ./hello.X 
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 1 out of 4 running on service56 with MPI version 2.2
I am process 3 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service56 with MPI version 2.2
[alin@service56:~]: mpiexec.hydra -n 4 -ppn 2 ./hello.X 
I am process 1 out of 4 running on service56 with MPI version 2.2
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 3 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service56 with MPI version 2.2
[alin@service56:~]: mpirun -n 4 -ppn 1 ./hello.X
I am process 1 out of 4 running on service56 with MPI version 2.2
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service56 with MPI version 2.2
I am process 3 out of 4 running on service56 with MPI version 2.2
[alin@service56:~]: mpirun -n 4 -ppn 2 ./hello.X
I am process 1 out of 4 running on service56 with MPI version 2.2
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 3 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service56 with MPI version 2.2

[alin@service56:~]: mpiexec.hydra.good -n 4 -ppn 2 ./hello.X 
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 1 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service54 with MPI version 2.2
I am process 3 out of 4 running on service54 with MPI version 2.2
[alin@service56:~]: mpiexec.hydra.good -n 4 -ppn 1 ./hello.X 
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service56 with MPI version 2.2
I am process 3 out of 4 running on service54 with MPI version 2.2
I am process 1 out of 4 running on service54 with MPI version 2.2
[alin@service56:~]: mpiexec.hydra.good  -ppn 1 ./hello.X 
I am process 0 out of 2 running on service56 with MPI version 2.2
I am process 1 out of 2 running on service54 with MPI version 2.2
[alin@service56:~]: mpiexec.hydra.good -ppn 2 ./hello.X 
I am process 0 out of 4 running on service56 with MPI version 2.2
I am process 2 out of 4 running on service54 with MPI version 2.2
I am process 3 out of 4 running on service54 with MPI version 2.2
I am process 1 out of 4 running on service56 with MPI version 2.2

[alin@service56:~]: cat $PBS_NODEFILE &amp;gt; nodes.txt

I looked further into it:
I_MPI_HYDRA_DEBUG=1 mpiexec.hydra.good -n 4 -ppn 2 ./hello.X &amp;gt; good
I_MPI_HYDRA_DEBUG=1 mpiexec.hydra -n 4 -ppn 2 ./hello.X &amp;gt; bad

I attached them both.

Looking into them, I found these differences that may help in understanding the issue:
[alin@abaddon:~]: grep -A 3 "Proxy information" bad
    Proxy information:
    *********************
      [1] proxy: service56 (20 cores)
      Exec list: ./hello.X (4 processes); 
[alin@abaddon:~]: grep -A 6 "Proxy information" good
    Proxy information:
    *********************
      [1] proxy: service56 (2 cores)
      Exec list: ./hello.X (2 processes); 

      [2] proxy: service54 (2 cores)
      Exec list: ./hello.X (2 processes); 

Moreover, the arguments passed to the proxy are different:

[alin@abaddon:~]: grep -A 2 "Arguments being" good
Arguments being passed to proxy 0:
--version 1.4.1p1 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname service56 --global-core-map 0,2,2 --filler-process-map 0,2,2 --global-process-count 4 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_38696_0 --pmi-process-mapping (vector,(0,2,2)) --topolib ipl --ckpointlib blcr --ckpoint-prefix /tmp --ckpoint-preserve -1 --ckpoint off --ckpoint-num -1 --global-inherited-env 117 'I_MPI_PERHOST=allcores' 'I_MPI_ROOT=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040' 'COLORTERM=1' 'PBS_O_PATH=/ichec/home/staff/alin/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/mam/bin:/opt/moab/bin:/opt/moab/sbin:.:/opt/sgi/sbin:/opt/sgi/bin' 'module=() {  eval `/usr/bin/modulecmd bash $*`
}' '_=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040/intel64/bin/mpiexec.hydra.good' --global-user-env 0 --global-system-env 2 'MPICH_ENABLE_CKPOINT=1' 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 2 --exec --exec-appnum 0 --exec-proc-count 2 --exec-local-env 0 --exec-wdir /ichec/home/staff/alin --exec-args 1 ./hello.X 
--
Arguments being passed to proxy 1:
--version 1.4.1p1 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname service54 --global-core-map 2,2,0 --filler-process-map 2,2,0 --global-process-count 4 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_38696_0 --pmi-process-mapping (vector,(0,2,2)) --topolib ipl --ckpointlib blcr --ckpoint-prefix /tmp --ckpoint-preserve -1 --ckpoint off --ckpoint-num -1 --global-inherited-env 117 'I_MPI_PERHOST=allcores' 'I_MPI_ROOT=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040' 'COLORTERM=1' 'PBS_O_PATH=/ichec/home/staff/alin/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/mam/bin:/opt/moab/bin:/opt/moab/sbin:.:/opt/sgi/sbin:/opt/sgi/bin' 'module=() {  eval `/usr/bin/modulecmd bash $*`
}' '_=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040/intel64/bin/mpiexec.hydra.good' --global-user-env 0 --global-system-env 2 'MPICH_ENABLE_CKPOINT=1' 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 2 --exec --exec-appnum 0 --exec-proc-count 2 --exec-local-env 0 --exec-wdir /ichec/home/staff/alin --exec-args 1 ./hello.X 
[alin@abaddon:~]: 
[alin@abaddon:~]: 
[alin@abaddon:~]: grep -A 2 "Arguments being" bad
Arguments being passed to proxy 0:
--version 1.4.1p1 --iface-ip-env-name MPICH_INTERFACE_HOSTNAME --hostname service56 --global-core-map 0,20,0 --filler-process-map 0,20,0 --global-process-count 4 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_38714_0 --pmi-process-mapping (vector,(0,2,20)) --topolib ipl --ckpointlib blcr --ckpoint-prefix /tmp --ckpoint-preserve 1 --ckpoint off --ckpoint-num -1 --global-inherited-env 117 'I_MPI_PERHOST=allcores' 'I_MPI_ROOT=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040' 'COLORTERM=1' 'PBS_O_PATH=/ichec/home/staff/alin/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/c3/bin:/usr/lib/mit/bin:/usr/lib/mit/sbin:/opt/mam/bin:/opt/moab/bin:/opt/moab/sbin:.:/opt/sgi/sbin:/opt/sgi/bin' 'module=() {  eval `/usr/bin/modulecmd bash $*`
}' '_=/ichec/home/packages/intel-cluster-studio/2013-sp1-u1/impi/4.1.2.040/intel64/bin/mpiexec.hydra' --global-user-env 0 --global-system-env 2 'MPICH_ENABLE_CKPOINT=1' 'GFORTRAN_UNBUFFERED_PRECONNECTED=y' --proxy-core-count 20 --exec --exec-appnum 0 --exec-proc-count 4 --exec-local-env 0 --exec-wdir /ichec/home/staff/alin --exec-args 1 ./hello.X 

If I collapse my hostfile into unique hosts and use -f, I get the correct behaviour with or without -n.
Did the behaviour change between versions of intel-mpi, or is this a bug?

regards,
Alin</description>
      <pubDate>Tue, 03 Dec 2013 22:52:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923791#M2416</guid>
      <dc:creator>Alin_M_Elena</dc:creator>
      <dc:date>2013-12-03T22:52:52Z</dc:date>
    </item>
    <item>
      <title>Hi Alin,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923792#M2417</link>
      <description>&lt;P&gt;Hi Alin,&lt;/P&gt;

&lt;P&gt;What is the full version number for the working one?&lt;/P&gt;

&lt;P&gt;James.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Dec 2013 15:48:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923792#M2417</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-12-04T15:48:10Z</dc:date>
    </item>
    <item>
      <title>Hi James,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923793#M2418</link>
      <description>&lt;P&gt;Hi James,&lt;/P&gt;

&lt;P&gt;Alin not being available right now, I'll answer the question.&lt;/P&gt;

&lt;P&gt;The Intel MPI version the working mpiexec.hydra comes from is 4.1.0.024.&lt;BR /&gt;
	More precisely, it says: Intel(R) MPI Library for Linux* OS, Version 4.1.0 Build 20120831&lt;/P&gt;

&lt;P&gt;Cheers.&lt;/P&gt;

&lt;P&gt;Gilles&lt;/P&gt;</description>
      <pubDate>Wed, 04 Dec 2013 16:13:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923793#M2418</guid>
      <dc:creator>Gilles_C_</dc:creator>
      <dc:date>2013-12-04T16:13:12Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923794#M2419</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;Any chance of an update on this issue? From the outside it looks like a straightforward regression in mpiexec.hydra, yet it is quite annoying from a user's point of view... Am I missing some critical element here?&lt;/P&gt;

&lt;P&gt;Although using an old version allows us to run, it might have unexpected side effects we don't see. Moreover, since we plan to use symmetric MPI mode on Xeon Phi intensively, a clean and up-to-date Intel MPI environment would be highly desirable.&lt;/P&gt;

&lt;P&gt;Cheers.&lt;/P&gt;

&lt;P&gt;Gilles&lt;/P&gt;</description>
      <pubDate>Tue, 17 Dec 2013 08:23:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923794#M2419</guid>
      <dc:creator>Gilles_C_</dc:creator>
      <dc:date>2013-12-17T08:23:17Z</dc:date>
    </item>
    <item>
      <title>Hi Gilles,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923795#M2420</link>
      <description>&lt;P&gt;Hi Gilles,&lt;/P&gt;

&lt;P&gt;I currently do not have any additional information about this issue.&amp;nbsp; Several other customers are reporting it.&amp;nbsp; I can suggest using a machinefile as a workaround, or specifying a different hostfile, rather than allowing Hydra to automatically get the hosts from your job scheduler.&lt;/P&gt;
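A sketch of the machinefile workaround (unique_hosts is a hypothetical helper; the host names are illustrative): a PBS node file lists one line per core, so it can be collapsed to unique hosts before being passed with -f or -machinefile.

```python
# Hypothetical sketch of the workaround: collapse a per-core node file
# to unique hosts, preserving first-seen order.
def unique_hosts(nodefile_lines):
    hosts = []
    for line in nodefile_lines:
        host = line.strip()
        if host and host not in hosts:
            hosts.append(host)
    return hosts

# e.g. a 2-node allocation with 2 cores per node:
lines = ["service56", "service56", "service54", "service54"]
print(unique_hosts(lines))
# ['service56', 'service54']
```

The resulting hosts would be written one per line to a machinefile and passed via mpirun -f, which, as reported earlier in the thread, restores the expected placement.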

&lt;P&gt;Sincerely,&lt;BR /&gt;
	James Tullos&lt;BR /&gt;
	Technical Consulting Engineer&lt;BR /&gt;
	Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Tue, 31 Dec 2013 19:29:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923795#M2420</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-12-31T19:29:43Z</dc:date>
    </item>
    <item>
      <title>I'm working in a cluster and</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923796#M2421</link>
      <description>&lt;P&gt;I'm working on a cluster and learning how to run different processes. Today I tried to use a script with the command to execute the program. Now, when I use the top command, this appears:&lt;/P&gt;

&lt;P&gt;28210 jazmin &amp;nbsp; &amp;nbsp;25 &amp;nbsp; 0 13088 &amp;nbsp;928 &amp;nbsp;712 R 100.2 &amp;nbsp;0.0 383:56.27 mpiexec.hydra&amp;nbsp;&lt;/P&gt;

&lt;P&gt;and I cannot kill this process. How can I do it? Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jan 2015 19:47:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpiexec-hydra-ppn-1-and-intel-mpi-4-1-2-040/m-p/923796#M2421</guid>
      <dc:creator>Jazmín_Yanel_J_</dc:creator>
      <dc:date>2015-01-05T19:47:36Z</dc:date>
    </item>
  </channel>
</rss>

