<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Intel MPI on Mellanox Infiniband in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926920#M2482</link>
    <description>Hi everybody.&lt;BR /&gt;&lt;BR /&gt;I have spent several months trying to run Intel MPI on our Itanium cluster with a Mellanox InfiniBand interconnect running IBGold (it works perfectly over Ethernet).&lt;BR /&gt;&lt;BR /&gt;Apparently, MPI can't find the DAPL provider. My /etc/dat.conf says:&lt;BR /&gt;ib0 u1.2 nonthreadsafe default /opt/ibgd/lib/libdapl.so ri.1.1 "InfiniHost0 1" ""&lt;BR /&gt;ib1 u1.2 nonthreadsafe default /opt/ibgd/lib/libdapl.so ri.1.1 "InfiniHost0 2" ""&lt;BR /&gt;&lt;BR /&gt;But when I run an MPI code, I get:&lt;BR /&gt;mpiexec -genv I_MPI_DEVICE rdma -env I_MPI_DEBUG 4 -n 2 ./a.out&lt;BR /&gt;I_MPI: [0] my_dlopen(): dlopen failed: libmpi.def.so&lt;BR /&gt;I_MPI: [0] set_up_devices(): will use static-default device&lt;BR /&gt;couldn't open /dev/ts_ua_cm0: No such file or directory&lt;BR /&gt;&lt;BR /&gt;With a higher debug value, I get something strange:&lt;BR /&gt;I_MPI: [0] try_one_device(): trying device: libmpi.rdma.so&lt;BR /&gt;I_MPI: [0] my_dlsym(): dlsym for dats_get_ia_handle failed: /usr/lib/libdat.so: undefined symbol: dats_get_ia_handle&lt;BR /&gt;I_MPI: [0] can_use_dapl_provider(): returning; DAPL provider not ok to use: ib0&lt;BR /&gt;I_MPI: [0] can_use_dapl_provider(): returning; DAPL provider not ok to use: ib1&lt;BR /&gt;&lt;BR /&gt;Does anybody have a hint?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
    <pubDate>Thu, 18 May 2006 04:53:43 GMT</pubDate>
    <dc:creator>emoreno</dc:creator>
    <dc:date>2006-05-18T04:53:43Z</dc:date>
    <item>
      <title>Intel MPI on Mellanox Infiniband</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926920#M2482</link>
      <description>Hi everybody.&lt;BR /&gt;&lt;BR /&gt;I have spent several months trying to run Intel MPI on our Itanium cluster with a Mellanox InfiniBand interconnect running IBGold (it works perfectly over Ethernet).&lt;BR /&gt;&lt;BR /&gt;Apparently, MPI can't find the DAPL provider. My /etc/dat.conf says:&lt;BR /&gt;ib0 u1.2 nonthreadsafe default /opt/ibgd/lib/libdapl.so ri.1.1 "InfiniHost0 1" ""&lt;BR /&gt;ib1 u1.2 nonthreadsafe default /opt/ibgd/lib/libdapl.so ri.1.1 "InfiniHost0 2" ""&lt;BR /&gt;&lt;BR /&gt;But when I run an MPI code, I get:&lt;BR /&gt;mpiexec -genv I_MPI_DEVICE rdma -env I_MPI_DEBUG 4 -n 2 ./a.out&lt;BR /&gt;I_MPI: [0] my_dlopen(): dlopen failed: libmpi.def.so&lt;BR /&gt;I_MPI: [0] set_up_devices(): will use static-default device&lt;BR /&gt;couldn't open /dev/ts_ua_cm0: No such file or directory&lt;BR /&gt;&lt;BR /&gt;With a higher debug value, I get something strange:&lt;BR /&gt;I_MPI: [0] try_one_device(): trying device: libmpi.rdma.so&lt;BR /&gt;I_MPI: [0] my_dlsym(): dlsym for dats_get_ia_handle failed: /usr/lib/libdat.so: undefined symbol: dats_get_ia_handle&lt;BR /&gt;I_MPI: [0] can_use_dapl_provider(): returning; DAPL provider not ok to use: ib0&lt;BR /&gt;I_MPI: [0] can_use_dapl_provider(): returning; DAPL provider not ok to use: ib1
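&lt;BR /&gt;&lt;BR /&gt;Is this the right way to check which libdat.so the loader is actually picking up? A rough sketch (it assumes IBGold installs its own libdat.so under /opt/ibgd/lib next to the provider library; the path may differ on your install):&lt;BR /&gt;# does the system DAT dispatcher export the symbol Intel MPI looks up?&lt;BR /&gt;nm -D /usr/lib/libdat.so | grep dats_get_ia_handle&lt;BR /&gt;# does the IBGold copy export it?&lt;BR /&gt;nm -D /opt/ibgd/lib/libdat.so | grep dats_get_ia_handle&lt;BR /&gt;# if only the IBGold copy does, point the loader at it and retry:&lt;BR /&gt;export LD_LIBRARY_PATH=/opt/ibgd/lib:$LD_LIBRARY_PATH&lt;BR /&gt;mpiexec -genv I_MPI_DEVICE rdma -env I_MPI_DEBUG 4 -n 2 ./a.out&lt;BR /&gt;&lt;BR /&gt;Does anybody have a hint?&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>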
      <pubDate>Thu, 18 May 2006 04:53:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926920#M2482</guid>
      <dc:creator>emoreno</dc:creator>
      <dc:date>2006-05-18T04:53:43Z</dc:date>
    </item>
    <item>
      <title>Re: Intel MPI on Mellanox Infiniband</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926921#M2483</link>
      <description>Unfortunately, this is a frequent problem with those DAPL drivers. Some have avoided it by switching to the OpenIB gen2 stack, which ships its own DAPL provider.
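&lt;BR /&gt;&lt;BR /&gt;For reference, after switching to gen2 the /etc/dat.conf entry points at the gen2 provider rather than the IBGD one; it looks roughly like the line below (library name, path, and version string are from memory, so treat this as a sketch and check the dat.conf your gen2 package installs):&lt;BR /&gt;OpenIB-cma u1.2 nonthreadsafe default /usr/lib/libdaplcma.so ri.1.1 "ib0 0" ""</description>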
      <pubDate>Thu, 18 May 2006 06:14:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926921#M2483</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2006-05-18T06:14:59Z</dc:date>
    </item>
    <item>
      <title>Re: Intel MPI on Mellanox Infiniband</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926922#M2484</link>
      <description>Which version of the Mellanox IBGD package are you using?&lt;BR /&gt;&lt;BR /&gt;If it is 1.8.0 or later, you may have to enable DAPL before you can use Intel MPI.&lt;BR /&gt;&lt;BR /&gt;Install the Mellanox package with everything selected; this makes sure the DAPL software is installed.&lt;BR /&gt;&lt;BR /&gt;The DAPL driver is not enabled by default on these versions. To enable it, you need to make a minor change to one file:&lt;BR /&gt;/etc/infiniband/openib.conf&lt;BR /&gt;&lt;BR /&gt;Change the answer for loading UDAPL to YES in the copy on the master node, then make the same change on all of the other nodes in the cluster (see the sketch below). Once you have finished, I recommend shutting down all of the compute nodes and then rebooting the master node. This runs the openib init correctly, and you should see the fabric come up as each node is turned on.
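&lt;BR /&gt;&lt;BR /&gt;For example, something along these lines from the master node (a sketch only: UDAPL_LOAD is a guessed key name, so check your openib.conf for the exact line, and node01/node02 are placeholder host names):&lt;BR /&gt;# flip the UDAPL answer to YES in the local copy&lt;BR /&gt;sed -i 's/^UDAPL_LOAD=no/UDAPL_LOAD=yes/' /etc/infiniband/openib.conf&lt;BR /&gt;# push the edited file to every other node in the cluster&lt;BR /&gt;for n in node01 node02; do scp /etc/infiniband/openib.conf $n:/etc/infiniband/openib.conf; done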
&lt;BR /&gt;&lt;BR /&gt;Once all of the nodes are up, you should be able to run Intel MPI with the proper switch to use the RDMA driver, for example:
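&lt;BR /&gt;mpiexec -genv I_MPI_DEVICE rdma -n 2 ./a.out&lt;BR /&gt;&lt;BR /&gt;If you need to pin a specific provider from /etc/dat.conf, the colon form (e.g. I_MPI_DEVICE set to rdma:ib0) may also work; check the reference manual for your Intel MPI version.</description>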
      <pubDate>Thu, 18 May 2006 06:56:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-on-Mellanox-Infiniband/m-p/926922#M2484</guid>
      <dc:creator>Intel_C_Intel</dc:creator>
      <dc:date>2006-05-18T06:56:30Z</dc:date>
    </item>
  </channel>
</rss>

