<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hi Tim, in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837158#M1387</link>
    <description>&lt;P&gt;Hi Tim,&lt;/P&gt;

&lt;P&gt;As mentioned in my question, I have 10 MPI ranks that I would like to run on a 20-core node with 5 MPI ranks on each socket, so the numbers of ranks and cores are not equal. I don't want all 10 MPI ranks to run on a single socket; I would like each group of 5 MPI ranks to have its own NUMA region.&lt;/P&gt;

&lt;P&gt;Regards,&lt;/P&gt;</description>
    <pubDate>Thu, 01 Jan 2015 23:08:29 GMT</pubDate>
    <dc:creator>Miah__Wadud</dc:creator>
    <dc:date>2015-01-01T23:08:29Z</dc:date>
    <item>
      <title>How to bind MPI process to core from mpirun argument</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837152#M1381</link>
      <description>Dear Intel,&lt;BR /&gt;&lt;BR /&gt;I use "sched_setaffinity" in my code to pin an MPI process to a core, but I can only do so if I have access to the source code. Of course, I can pin a process after it is already running, but sometimes that is not a good solution, since the pinning needs to be done after the process has been created but before it starts to execute the compute kernel.&lt;BR /&gt;So, a very simple question: is there an option to mpirun (or mpiexec) that lets me pin MPI processes to cores? For example, something like this:&lt;BR /&gt;&lt;BR /&gt;mpirun -nc 2 -pincore 0 6 -np 10 .....&lt;BR /&gt;&lt;BR /&gt;where&lt;BR /&gt;-np 10 =&amp;gt; 10 MPI processes.&lt;BR /&gt;-nc 2 =&amp;gt; use 2 cores per node, i.e. run 2 MPI processes per node&lt;BR /&gt;-pincore 0 6 =&amp;gt; pin the MPI processes to core ID 0 and core ID 6&lt;BR /&gt;&lt;BR /&gt;Obviously, I am thinking of a node with two Westmere sockets, and I want each MPI process to run on a different socket, so I pin to core 0 and core 6 (of course, it could be made more general, like 0:5, which would pin the processes to cores 0, 1, 2, ..., 5).&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;-- Terrence&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Dec 2010 21:28:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837152#M1381</guid>
      <dc:creator>Terrence_Liao</dc:creator>
      <dc:date>2010-12-20T21:28:50Z</dc:date>
    </item>
    <item>
      <title>How to bind MPI process to core from mpirun argument</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837153#M1382</link>
      <description>&lt;P&gt;Dear Terrence,&lt;/P&gt;&lt;P&gt;The Intel MPI Library does a process pinning automatically. It also provides a set of options to control process pinning behavior. See the description of the I_MPI_PIN_* environment variables in the Reference Manual for details. &lt;/P&gt;&lt;P&gt;To control number of processes placed per node use the mpirun perhost option or I_MPI_PERHOST environment variable. &lt;/P&gt;&lt;P&gt;For instance, use the following syntax for your example using Intel MPI &lt;/P&gt;&lt;P&gt;$ mpirun perhost 2 env I_MPI_PIN_PROCESSOR_LIST 0,6 n 10 &lt;/P&gt;&lt;P&gt;Set I_MPI_DEBUG to5 if you want to see process pining table.&lt;/P&gt;&lt;P&gt;Does it answer your question?&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Andrey&lt;/P&gt;</description>
      <pubDate>Tue, 21 Dec 2010 09:40:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837153#M1382</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2010-12-21T09:40:27Z</dc:date>
    </item>
    <item>
      <title>How to bind MPI process to core from mpirun argument</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837154#M1383</link>
      <description>Thanks, Andrey. This is what I need. I checked the reference manual; I_MPI_PIN_PROCESSOR_LIST and I_MPI_PIN_DOMAIN cover 11 pages out of the manual's 115. Indeed, these can become complicated.&lt;BR /&gt; -- Terrence</description>
      <pubDate>Tue, 21 Dec 2010 12:39:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837154#M1383</guid>
      <dc:creator>Terrence_Liao</dc:creator>
      <dc:date>2010-12-21T12:39:34Z</dc:date>
    </item>
    <item>
      <title>How to bind MPI process to core from mpirun argument</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837155#M1384</link>
      <description>&lt;P&gt;Terrence,&lt;/P&gt;&lt;P&gt;Usually the default pinning scheme work well for most customers. Let us know if you have special requirements. We will be able to disscuss possible solutions then.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Andrey&lt;/P&gt;</description>
      <pubDate>Tue, 21 Dec 2010 13:14:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837155#M1384</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2010-12-21T13:14:37Z</dc:date>
    </item>
    <item>
      <title>Hello,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837156#M1385</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;I would like to pin MPI processes across all CPU sockets. For example, I would like to run 10 MPI processes on a two-socket machine with 5 MPI processes on each socket. Could you please send me instructions for doing this?&lt;/P&gt;

&lt;P&gt;Many thanks,&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jan 2015 22:44:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837156#M1385</guid>
      <dc:creator>Miah__Wadud</dc:creator>
      <dc:date>2015-01-01T22:44:31Z</dc:date>
    </item>
    <item>
      <title>As Andrey said, the default</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837157#M1386</link>
      <description>&lt;P&gt;As andrey said, the default pinning should be good if your numbers of ranks and cores match. &amp;nbsp;If &amp;nbsp;not, pinning may be inadvisable.&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jan 2015 23:04:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837157#M1386</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2015-01-01T23:04:24Z</dc:date>
    </item>
    <item>
      <title>Hi Tim,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837158#M1387</link>
      <description>&lt;P&gt;Hi Tim,&lt;/P&gt;

&lt;P&gt;As mentioned in my question, I have 10 MPI ranks that I would like to run on a 20-core node with 5 MPI ranks on each socket, so the numbers of ranks and cores are not equal. I don't want all 10 MPI ranks to run on a single socket; I would like each group of 5 MPI ranks to have its own NUMA region.&lt;/P&gt;

&lt;P&gt;Regards,&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jan 2015 23:08:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/How-to-bind-MPI-process-to-core-from-mpirun-argument/m-p/837158#M1387</guid>
      <dc:creator>Miah__Wadud</dc:creator>
      <dc:date>2015-01-01T23:08:29Z</dc:date>
    </item>
  </channel>
</rss>