<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Intel MPI 3.0 over IB (uDAPL) in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894265#M2079</link>
    <description>Thanks for your prompt reply.&lt;BR /&gt;&lt;BR /&gt;I'm not using the old Cisco MPI (actually, Cisco grabbed it from Topspin, and it derives from MPICH, as I remember). Cisco now uses OFED. And I'm trying to run Intel MPI on the newest OFED version.&lt;BR /&gt;</description>
    <pubDate>Fri, 09 Nov 2007 15:06:31 GMT</pubDate>
    <dc:creator>abrindeyev</dc:creator>
    <dc:date>2007-11-09T15:06:31Z</dc:date>
    <item>
      <title>Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894263#M2077</link>
      <description>Could anybody help me run Intel MPI over IB?&lt;BR /&gt;&lt;BR /&gt;My steps were:&lt;BR /&gt;1. Got the Intel MPI 3.0 evaluation for 30 days&lt;BR /&gt;2. Installed it on a shared directory&lt;BR /&gt;3. Configured password-less SSH between nodes&lt;BR /&gt;4. Configured IPoIB (for other purposes) - confirmed working&lt;BR /&gt;5. Compiled the test MPI application that comes with Intel MPI&lt;BR /&gt;&lt;BR /&gt;Now it works over Ethernet, but I can't run it over IB:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;$ mpirun -n 4 -r ssh /gpfs/loadl/HPL/prefix/intel/mpi/3.0/test/test&lt;BR /&gt;Hello world: rank 0 of 4 running on n1&lt;BR /&gt;Hello world: rank 1 of 4 running on n3&lt;BR /&gt;Hello world: rank 2 of 4 running on n4&lt;BR /&gt;Hello world: rank 3 of 4 running on n2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;$ mpirun -n 4 -r ssh -env I_MPI_DEVICE rdssm:OpenIB-cma -env I_MPI_FALLBACK_DEVICE 0 -env I_MPI_DEBUG 5 /gpfs/loadl/HPL/prefix/intel/mpi/3.0/test/test&lt;BR /&gt;[0] DAPL provider is not found and fallback device is not enabled&lt;BR /&gt;[cli_0]: aborting job:&lt;BR /&gt;Fatal error in MPI_Init: Other MPI error, error stack:&lt;BR /&gt;MPIR_Init_thread(925): Initialization failed&lt;BR /&gt;MPIDD_Init(95).......: channel initialization failed&lt;BR /&gt;MPIDI_CH3_Init(144)..: generic failure with errno = -1&lt;BR /&gt;(unknown)(): &lt;NULL&gt;&lt;BR /&gt;rank 3 in job 1 n1_36568 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 3: return code 13&lt;BR /&gt;[output from other nodes skipped]&lt;BR /&gt;&lt;BR /&gt;My IB configuration: OFED 1.2.5 from Cisco:&lt;BR /&gt;OFED-1.2.5&lt;BR /&gt;&lt;BR /&gt;ofa_kernel-1.2.5:&lt;BR /&gt;Git:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/linux-2.6.git ofed_1_2_c&lt;BR /&gt;commit 21ec9ff84cba24ea6e53a268da21a72e6ab190d0&lt;BR /&gt;&lt;BR /&gt;ofa_user-1.2.5:&lt;BR /&gt;libibverbs:&lt;BR /&gt;git://git.kernel.org/pub/scm/libs/infiniband/libibverbs.git master&lt;BR /&gt;commit d5052fa0bf8180be9edf1c4c1c014dde01f8a4dd&lt;BR /&gt;libmthca:&lt;BR /&gt;git://git.kernel.org/pub/scm/libs/infiniband/libmthca.git master&lt;BR /&gt;commit f29c1d8a198a8d7f322c3924205a62770a9862a3&lt;BR /&gt;libmlx4:&lt;BR /&gt;git://git.kernel.org/pub/scm/libs/infiniband/libmlx4.git master&lt;BR /&gt;commit fc9edce51069fd38e33c9e627d9a89bc1e329b67&lt;BR /&gt;libehca:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/libehca.git ofed_1_2&lt;BR /&gt;commit 00b26973092c949b11b8372eb027059fda7a8061&lt;BR /&gt;libipathverbs:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/libipathverbs.git ofed_1_2&lt;BR /&gt;commit 15f62c3f045295dd2a941ae8d4e0e36035aad5cf&lt;BR /&gt;tvflash:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/tvflash.git ofed_1_2&lt;BR /&gt;commit e0a0903b2a998a397ada053554fd678ed7914cc6&lt;BR /&gt;libibcm:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/libibcm.git ofed_1_2&lt;BR /&gt;commit 8154d4d57f69789be6d26fdc8f10b552c83a87ec&lt;BR /&gt;libsdp:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/libsdp.git ofed_1_2&lt;BR /&gt;commit 9e1c2cce1cbe030bf8fc9c03db4e80a703946af1&lt;BR /&gt;mstflint:&lt;BR /&gt;git://git.openfabrics.org/~mst/mstflint.git master&lt;BR /&gt;commit a9579dfbd259133cb50bf6b12ff247d5a04a9473&lt;BR /&gt;perftest:&lt;BR /&gt;git://git.openfabrics.org/~mst/perftest.git master&lt;BR /&gt;commit 20ea8b29537dda3f0a217b95ac50a0aaa7b24477&lt;BR /&gt;srptools:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/srptools.git ofed_1_2&lt;BR /&gt;commit 883a08f0db168f4eb20293552f6416529da982f1&lt;BR /&gt;ipoibtools:&lt;BR
/&gt;git://git.openfabrics.org/ofed_1_2/ipoibtools.git ofed_1_2&lt;BR /&gt;commit e29da6049cb725b175423fddc80181980ebfa0b4&lt;BR /&gt;librdmacm:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/librdmacm.git ofed_1_2&lt;BR /&gt;commit 87b2be8cf17cca4f2212c32ecfd06c35d7ac7719&lt;BR /&gt;dapl:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/dapl.git ofed_1_2&lt;BR /&gt;commit 3654c6ef425f94b9f27a593b0b8c1f3d7cc39029&lt;BR /&gt;management:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/management.git ofed_1_2&lt;BR /&gt;commit 46bdba974ee2e1c8a64101effdb7358fd9060c8b&lt;BR /&gt;libcxgb3:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/libcxgb3.git ofed_1_2&lt;BR /&gt;commit f97dcedc6d5af5c222542d69755ad4193f2114fc&lt;BR /&gt;
qlvnictools:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/qlvnictools.git ofed_1_2&lt;BR /&gt;commit bcfd11d4b5369398f2f816d0e1d89b6e98b25961&lt;BR /&gt;sdpnetstat:&lt;BR /&gt;git://git.openfabrics.org/ofed_1_2/sdpnetstat.git ofed_1_2&lt;BR /&gt;commit d726c17c3b54739ad71e2234c521aa3ee81a5905&lt;BR /&gt;ofascripts:&lt;BR /&gt;git://git.openfabrics.org/~vlad/ofascripts.git ofed_1_2_c&lt;BR /&gt;commit 598684991ff6127dd803540c757f56b289872bef&lt;BR /&gt;&lt;BR /&gt;# MPI&lt;BR /&gt;mvapich-0.9.9-1458.src.rpm&lt;BR /&gt;mvapich2-0.9.8-15.src.rpm&lt;BR /&gt;openmpi-1.2.2-1.src.rpm&lt;BR /&gt;mpitests-2.0-705.src.rpm&lt;BR /&gt;&lt;BR /&gt;$ ibv_devinfo&lt;BR /&gt;hca_id: mthca0&lt;BR /&gt; fw_ver: 4.8.917&lt;BR /&gt; node_guid: 0005:ad00:000b:b224&lt;BR /&gt; sys_image_guid: 0005:ad00:0100:d050&lt;BR /&gt; vendor_id: 0x05ad&lt;BR /&gt; vendor_part_id: 25208&lt;BR /&gt; hw_ver: 0xA0&lt;BR /&gt; board_id: HCA.HSDC.A0.Boot&lt;BR /&gt; phys_port_cnt: 2&lt;BR /&gt; port: 1&lt;BR /&gt; state: PORT_ACTIVE (4)&lt;BR /&gt; max_mtu: 2048 (4)&lt;BR /&gt; active_mtu: 2048 (4)&lt;BR /&gt; sm_lid: 2&lt;BR /&gt; port_lid: 6&lt;BR /&gt;
port_lmc: 0x00&lt;BR /&gt;&lt;BR /&gt; port: 2&lt;BR /&gt; state: PORT_DOWN (1)&lt;BR /&gt; max_mtu: 2048 (4)&lt;BR /&gt; active_mtu: 512 (2)&lt;BR /&gt; sm_lid: 0&lt;BR /&gt; port_lid: 0&lt;BR /&gt; port_lmc: 0x00&lt;BR /&gt;&lt;BR /&gt;$ cat /etc/dat.conf&lt;BR /&gt;#&lt;BR /&gt;# DAT 1.2 configuration file&lt;BR /&gt;#&lt;BR /&gt;# Each entry should have the following fields:&lt;BR /&gt;#&lt;BR /&gt;# &lt;IA_NAME&gt; &lt;API_VERSION&gt; &lt;THREADSAFETY&gt; &lt;DEFAULT&gt; &lt;LIB_PATH&gt; &lt;BR /&gt;# &lt;PROVIDER_VERSION&gt; &lt;IA_PARAMS&gt; &lt;PLATFORM_PARAMS&gt;&lt;BR /&gt;#&lt;BR /&gt;# For the uDAPL cma provder, specify &lt;IA_PARAMS&gt; as one of the following:&lt;BR /&gt;# network address, network hostname, or netdev name and 0 for port&lt;BR /&gt;#&lt;BR /&gt;# Simple (OpenIB-cma) default with netdev name provided first on list&lt;BR /&gt;# to enable use of same dat.conf version on all nodes&lt;BR /&gt;#&lt;BR /&gt;# Add examples for multiple interfaces and IPoIB HA fail over, and bonding&lt;BR /&gt;#&lt;BR /&gt;OpenIB-cma u1.2 nonthreadsafe default /usr/lib64/libdaplcma.so dapl.1.2 "ib0 0" ""&lt;BR /&gt;OpenIB-cma-1 u1.2 nonthreadsafe default /usr/lib64/libdaplcma.so dapl.1.2 "ib1 0" ""&lt;BR /&gt;OpenIB-cma-2 u1.2 nonthreadsafe default /usr/lib64/libdaplcma.so dapl.1.2 "ib2 0" ""&lt;BR /&gt;OpenIB-cma-3 u1.2 nonthreadsafe default /usr/lib64/libdaplcma.so dapl.1.2 "ib3 0" ""&lt;BR /&gt;OpenIB-bond u1.2 nonthreadsafe default /usr/lib64/libdaplcma.so dapl.1.2 "bond0 0" ""&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Nov 2007 10:18:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894263#M2077</guid>
      <dc:creator>abrindeyev</dc:creator>
      <dc:date>2007-11-09T10:18:36Z</dc:date>
    </item>
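    <!-- Sketch, not part of the original thread: the error above, "DAPL provider is not
         found and fallback device is not enabled", usually means the provider name given
         after "rdssm:" in I_MPI_DEVICE could not be resolved through /etc/dat.conf on one
         of the nodes. A few checks one might run on each node before retrying, assuming
         the OFED 1.2.x paths and the OpenIB-cma entry quoted in the post:

           # Confirm the provider named in I_MPI_DEVICE exists in dat.conf
           grep OpenIB-cma /etc/dat.conf

           # Confirm the uDAPL library that the entry points at is actually installed
           ls -l /usr/lib64/libdaplcma.so

           # Confirm the IPoIB interface referenced by "ib0 0" is up and has an address
           /sbin/ifconfig ib0

         If any of these checks fail on a node, an abort like the one shown above would
         be expected.
    -->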
    <item>
      <title>Re: Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894264#M2078</link>
      <description>My customers don't get much information from Cisco, so we're not sufficiently in the loop. However, I received the following comment this week:&lt;BR /&gt;&lt;BR /&gt;The current Topspin release 3.2.0-118 has fixes for uDAPL and Intel MPI; the release notes state:&lt;BR /&gt;&lt;BR /&gt;uDAPL&lt;BR /&gt;&lt;BR /&gt;Fixed uDAPL startup scalability problem when using Intel MPI. (PR CSCse88951)&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Nov 2007 14:58:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894264#M2078</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2007-11-09T14:58:02Z</dc:date>
    </item>
    <item>
      <title>Re: Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894265#M2079</link>
      <description>Thanks for your prompt reply.&lt;BR /&gt;&lt;BR /&gt;I'm not using the old Cisco MPI (actually, Cisco grabbed it from Topspin, and it derives from MPICH, as I remember). Cisco now uses OFED. And I'm trying to run Intel MPI on the newest OFED version.&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Nov 2007 15:06:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894265#M2079</guid>
      <dc:creator>abrindeyev</dc:creator>
      <dc:date>2007-11-09T15:06:31Z</dc:date>
    </item>
    <item>
      <title>Re: Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894266#M2080</link>
      <description>Were you able to run Intel MPI on the newest OFED version? Output with a higher I_MPI_DEBUG value could be useful if you still have problems with your runs.</description>
      <pubDate>Mon, 12 Nov 2007 11:54:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894266#M2080</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2007-11-12T11:54:17Z</dc:date>
    </item>
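    <!-- Sketch, not part of the original thread: the suggestion above applied to the
         command line from the first post. The debug level 10 is an arbitrary illustration;
         values higher than the original 5 generally print more initialization detail:

           mpirun -n 4 -r ssh -env I_MPI_DEVICE rdssm:OpenIB-cma \
                  -env I_MPI_FALLBACK_DEVICE 0 -env I_MPI_DEBUG 10 \
                  /gpfs/loadl/HPL/prefix/intel/mpi/3.0/test/test
    -->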
    <item>
      <title>Re: Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894267#M2081</link>
      <description>After a number of unsuccessful attempts, it now works (don't ask me why - I don't know).&lt;BR /&gt;&lt;BR /&gt;The next question is: how do I compile 64-bit MPI applications with Intel MPI on the x86_64 arch?&lt;BR /&gt;&lt;BR /&gt;$ mpicc -o osu_acc_latency-intel-mpi osu_acc_latency.c&lt;BR /&gt;$ file osu_acc_latency-intel-mpi&lt;BR /&gt;osu_acc_latency-intel-mpi: ELF &lt;B&gt;32-bit&lt;/B&gt; LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 14 Nov 2007 06:29:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894267#M2081</guid>
      <dc:creator>abrindeyev</dc:creator>
      <dc:date>2007-11-14T06:29:05Z</dc:date>
    </item>
    <item>
      <title>Re: Intel MPI 3.0 over IB (uDAPL)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894268#M2082</link>
      <description>&lt;P&gt;Please make sure that you have set up the 64-bit MPI environment. Source the mpivars.sh file from the $install_dir/bin64 directory to be able to build 64-bit MPI applications. You should also have a 64-bit version of gcc as your default gcc compiler while using the mpicc compiler driver.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;/P&gt;
&lt;P&gt;Andrey&lt;/P&gt;</description>
      <pubDate>Wed, 14 Nov 2007 09:52:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Intel-MPI-3-0-over-IB-uDAPL/m-p/894268#M2082</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2007-11-14T09:52:09Z</dc:date>
    </item>
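    <!-- Sketch, not part of the original thread: a minimal example of the advice above,
         assuming the install prefix matches the test path quoted earlier in the thread
         (/gpfs/loadl/HPL/prefix/intel/mpi/3.0) and that a 64-bit gcc is the default:

           # Load the 64-bit Intel MPI environment (bin64, not bin)
           . /gpfs/loadl/HPL/prefix/intel/mpi/3.0/bin64/mpivars.sh

           which mpicc          # should now resolve to the bin64 copy of mpicc
           gcc -dumpmachine     # should report an x86_64 target

           # Rebuild and check the result
           mpicc -o osu_acc_latency-intel-mpi osu_acc_latency.c
           file osu_acc_latency-intel-mpi   # should now report a 64-bit ELF executable
    -->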
  </channel>
</rss>

