<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Assign specific MPI tasks to specific IB interfaces in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765838#M76</link>
    <description>Hi David,&lt;BR /&gt;&lt;BR /&gt;I need to make some clarifications.&lt;BR /&gt;If you use I_MPI_FABRICS=shm:ofa, it means that 'shm' will be used for INTRA-node communication and 'ofa' will be used for INTER-node communication.&lt;BR /&gt;Since you are going to use OFA for intra-node communication, you need to set I_MPI_FABRICS to 'ofa' or 'ofa:ofa'.&lt;BR /&gt;And set all the other parameters as James mentioned.&lt;BR /&gt;Please give it a try and compare the results with the default settings. It would be nice if you could share the results of the different runs with us.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt; Dmitry&lt;BR /&gt;</description>
    <pubDate>Sat, 28 Apr 2012 09:15:22 GMT</pubDate>
    <dc:creator>Dmitry_K_Intel2</dc:creator>
    <dc:date>2012-04-28T09:15:22Z</dc:date>
    <item>
      <title>Assign specific mpi tasks to specific IB interfaces</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765836#M74</link>
      <description>I have a system with 16 cores and 2 IB interfaces. I need to use both rails, so I have been using&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;	export I_MPI_OFA_NUM_ADAPTERS=2&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;	export I_MPI_OFA_NUM_PORTS=1&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;	export I_MPI_OFA_RAIL_SCHEDULER=ROUND_ROBIN&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;	export I_MPI_FABRICS="shm:ofa"&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;But this has the undesirable effect of sending some of the data from the second 8 cores to the IB interface on the first 8 cores, and vice versa.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;I tried&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV id="_mcePaste"&gt;export I_MPI_OFA_NUM_ADAPTERS=2&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;export I_MPI_OFA_NUM_PORTS=1&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;export I_MPI_OFA_RAIL_SCHEDULER=PROCESS_BIND&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;export I_MPI_FABRICS="shm:ofa"&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;But this assigns process 0 to interface 1, process 1 to interface 2, process 2 to interface 1, and so on. This sends a lot of data from one set of cores to the opposing interface. Furthermore, the code is written to assume that process 0 and process 1 are on the same CPU, so it tries to do a cyclic data movement.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;I need to assign processes 0 - 7 to interface 1 and processes 8 - 15 to interface 2.&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Is this possible?&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Thanks&lt;/DIV&gt;&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;David&lt;/DIV&gt;</description>
      <pubDate>Fri, 27 Apr 2012 15:23:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765836#M74</guid>
      <dc:creator>David_Race</dc:creator>
      <dc:date>2012-04-27T15:23:21Z</dc:date>
    </item>
    <item>
      <title>Assign specific mpi tasks to specific IB interfaces</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765837#M75</link>
      <description>Hi David,&lt;BR /&gt;&lt;BR /&gt;Try using something like the following:&lt;BR /&gt;&lt;BR /&gt;[bash]export I_MPI_FABRICS=shm:ofa
mpirun -n 8 -env I_MPI_OFA_ADAPTER_NAME adap1 &lt;APPLICATION&gt; : -n 8 -env I_MPI_OFA_ADAPTER_NAME adap2 &lt;APPLICATION&gt;[/bash]&lt;BR /&gt;This should set the first 8 processes to use the first adapter, and the next 8 to use the second adapter.&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Fri, 27 Apr 2012 18:29:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765837#M75</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-04-27T18:29:54Z</dc:date>
    </item>
    <item>
      <title>Assign specific mpi tasks to specific IB interfaces</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765838#M76</link>
      <description>Hi David,&lt;BR /&gt;&lt;BR /&gt;I need to make some clarifications.&lt;BR /&gt;If you use I_MPI_FABRICS=shm:ofa, it means that 'shm' will be used for INTRA-node communication and 'ofa' will be used for INTER-node communication.&lt;BR /&gt;Since you are going to use OFA for intra-node communication, you need to set I_MPI_FABRICS to 'ofa' or 'ofa:ofa'.&lt;BR /&gt;And set all the other parameters as James mentioned.&lt;BR /&gt;Please give it a try and compare the results with the default settings. It would be nice if you could share the results of the different runs with us.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt; Dmitry&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2012 09:15:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Assign-specific-mpi-tasks-to-specific-IB-interfaces/m-p/765838#M76</guid>
      <dc:creator>Dmitry_K_Intel2</dc:creator>
      <dc:date>2012-04-28T09:15:22Z</dc:date>
    </item>
  </channel>
</rss>

