<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: intel mpi and infiniband udapl in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862796#M1647</link>
    <description>mpicc -static should have the same effect as gcc -static in choosing static versions of libraries known to gcc. As you figured out, -static_mpi controls the choice of Intel mpi libraries. According to your stated requirement, you would want to use both options.&lt;BR /&gt;</description>
    <pubDate>Thu, 24 Jan 2008 21:46:54 GMT</pubDate>
    <dc:creator>TimP</dc:creator>
    <dc:date>2008-01-24T21:46:54Z</dc:date>
    <item>
      <title>intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862791#M1642</link>
      <description>hi,&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;I am trying to use the Intel compilers and mpi libraries to run over&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;infiniband.&lt;/FONT&gt;
&lt;FONT size="2"&gt;From the documentation and also from all the searches I did on the Intel&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;forums I could not figure out what the problem might be. We are running&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;a small test with 8 nodes connected via infiniband. I can ping all the&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;nodes and startup mpd on all of then via IP over IB:&lt;/FONT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;FONT size="2"&gt;hpcp5551(salmr0)192:mpdtrace&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.1&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.5&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.4&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.3&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.2&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.8&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.7&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;192.168.0.6&lt;/FONT&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;I can run fine using the "sock" network fabric or IP over IB:&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;hpcp5551(salmr0)193:mpiexec -genv I_MPI_DEVICE sock -n 8 ./cpi&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 0 on 192.168.0.1&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 2 on 192.168.0.4&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 1 on 192.168.0.5&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 3 on 192.168.0.3&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 4 on 192.168.0.2&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 5 on 192.168.0.8&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 6 on 192.168.0.7&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Process 7 on 192.168.0.6&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;pi is approximately 3.1416009869231245, Error is 0.0000083333333314&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;wall clock time = 0.007859&lt;/FONT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;FONT size="2"&gt;The problem is when I try to run over the native IB fabric using the&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;"rdma" network fabric:&lt;/FONT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;FONT size="2"&gt;hpcp5551(salmr0)194:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -n 8 -env&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;I_MPI_DEBUG 2 ./cpi&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;rank 4 in job 9 192.168.0.1_35933 caused collective abort of all&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;ranks&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; exit status of rank 4: killed by signal 11&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;rank 1 in job 9 192.168.0.1_35933 caused collective abort of all&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;ranks&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; exit status of rank 1: killed by signal 11&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;rank 0 in job 9 192.168.0.1_35933 caused collective abort of all&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;ranks&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; exit status of rank 0: killed by signal 11&lt;/FONT&gt;
&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;I have the correct entries in /etc/dat.conf:&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;hpcp5551:~ # tail /etc/dat.conf&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;# Simple (OpenIB-cma) default with netdev name provided first on list&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;# to enable use of same dat.conf version on all nodes&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;#&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;# Add examples for multiple interfaces and IPoIB HA fail over, and&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;bonding&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;#&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;OpenIB-cma u1.2 nonthreadsafe&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;default /usr/local/ofed/lib64/libdaplcma.so dapl.1.2 "ib0 0" ""&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;OpenIB-cma-1 u1.2 nonthreadsafe&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;default /usr/local/ofed/lib64/libdaplcma.so dapl.1.2 "ib1 0" ""&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;OpenIB-cma-2 u1.2 nonthreadsafe&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;default /usr
/local/ofed/lib64/libdaplcma.so dapl.1.2 "ib2 0" ""&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;OpenIB-cma-3 u1.2 nonthreadsafe&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;default /usr/local/ofed/lib64/libdaplcma.so dapl.1.2 "ib3 0" ""&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;OpenIB-bond u1.2 nonthreadsafe&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;default /usr/local/ofed/lib64/libdaplcma.so dapl.1.2 "bond0 0" ""&lt;/FONT&gt;
&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;hpcp5551:~ # ls -l /usr/local/ofed/lib64/libdaplcma.so&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;lrwxrwxrwx 1 root root 19 Jan 18&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;17:20 /usr/local/ofed/lib64/libdaplcma.so -&amp;gt; libdaplcma.so.1.0.2&lt;/FONT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
&lt;FONT size="2"&gt;hpcp5551:~ # ifconfig ib0&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;ib0 Link encap:UNSPEC HWaddr&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;80-00-04-04-FE-80-00-00-00-00-00-00-00-00-00-00&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; inet6 addr: fe80::208:f104:398:2999/64 Scope:Link&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; UP BROADCAST RUNNING MULTICAST MTU:65520 Metric:1&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; RX packets:851583 errors:0 dropped:0 overruns:0 frame:0&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; TX packets:824427 errors:0 dropped:0 overruns:0 carrier:0&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; collisions:0 txqueuelen:128&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt; RX bytes:11834748000 (11286.4 Mb) TX bytes:11786736324&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;(11240.7 Mb)&lt;/FONT&gt;
&lt;BR /&gt;&lt;BR /&gt;&lt;FONT size="2"&gt;Is there any way to get mode debug or verbose messages out of mpiexec or&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;mpirun so that it can maybe provide me with a hit as to what the problem&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;might be?&lt;/FONT&gt;&lt;BR /&gt;
&lt;BR /&gt;This is with OFED 1.2.5.4&lt;BR /&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Thanks&lt;/FONT&gt;&lt;BR /&gt;
&lt;FONT size="2"&gt;Rene&lt;/FONT&gt;
&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Jan 2008 20:29:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862791#M1642</guid>
      <dc:creator>Rene_S_1</dc:creator>
      <dc:date>2008-01-22T20:29:11Z</dc:date>
    </item>
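The /etc/dat.conf entries above can be sanity-checked with a small script. This is a sketch, not an official OFED tool: the provider name and library path come from the post, and the script works on a local copy of one entry for illustration (on a real node you would point it at /etc/dat.conf).

```shell
# Verify that the DAPL provider named in I_MPI_DEVICE (rdma:OpenIB-cma)
# has a matching dat.conf entry and that its provider library exists.
# Local copy of one logical entry from the post, for illustration only.
cat > dat.conf <<'EOF'
OpenIB-cma u1.2 nonthreadsafe default /usr/local/ofed/lib64/libdaplcma.so dapl.1.2 "ib0 0" ""
EOF

provider=OpenIB-cma
entry=$(grep "^$provider " dat.conf)
if [ -z "$entry" ]; then
    echo "no dat.conf entry for $provider" >&2
    exit 1
fi
# field 5 of a dat.conf entry is the provider library path
lib=$(echo "$entry" | awk '{print $5}')
echo "provider $provider uses $lib"
test -e "$lib" || echo "note: $lib not present on this machine"
```

A check like this catches the two most common uDAPL misconfigurations: a provider name that does not match any dat.conf entry, and an entry whose library path does not exist on some node.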
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862792#M1643</link>
      <description>export I_MPI_DEBUG=2 (or whatever level of verbosity you want)&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Jan 2008 20:57:57 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862792#M1643</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2008-01-22T20:57:57Z</dc:date>
    </item>
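The two ways this thread sets the debug level are equivalent as far as the launched processes are concerned: exporting I_MPI_DEBUG before running mpiexec, or passing it per-executable with -env. A minimal stand-in demonstration, using an ordinary child shell in place of mpiexec:

```shell
# Exporting the variable makes it visible to anything mpiexec launches;
# `mpiexec ... -env I_MPI_DEBUG 2 ./cpi` has the same effect for that
# one executable. Shown here with a plain child process as a stand-in.
export I_MPI_DEBUG=2
sh -c 'echo "child sees I_MPI_DEBUG=$I_MPI_DEBUG"'
```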
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862793#M1644</link>
<description>Thanks for the reply. I guess I should have mentioned that in my post.&lt;BR /&gt;I did try the I_MPI_DEBUG option with various levels but don't seem to get any more info than what I originally posted.&lt;BR /&gt;&lt;BR /&gt;hpcp5551(salmr0)196:setenv I_MPI_DEBUG 2&lt;BR /&gt;hpcp5551(salmr0)197:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -n 8 ./cpi&lt;BR /&gt;rank 3 in job 11 192.168.0.1_35933 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 3: killed by signal 11 &lt;BR /&gt;&lt;BR /&gt;hpcp5551(salmr0)198:setenv I_MPI_DEBUG 4&lt;BR /&gt;hpcp5551(salmr0)199:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -n 8 ./cpi&lt;BR /&gt;rank 3 in job 12 192.168.0.1_35933 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 3: killed by signal 11 &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;hpcp5551(salmr0)200:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -n 8 -env I_MPI_DEBUG 3 ./cpi&lt;BR /&gt;rank 0 in job 13 192.168.0.1_35933 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 0: killed by signal 11 &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Any other ideas? Is there a way to check if I have the right udapl libs installed other than looking for /usr/local/ofed/lib64/libdaplcma.so?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Rene&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Jan 2008 21:04:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862793#M1644</guid>
      <dc:creator>Rene_S_1</dc:creator>
      <dc:date>2008-01-22T21:04:18Z</dc:date>
    </item>
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862794#M1645</link>
      <description>&lt;P&gt;Hi Rene,&lt;/P&gt;
&lt;P&gt;Were you able to run the dapltest program on your cluster? Do I understand correctly that you did not get additional debug information even when cpi was linked against the debug version of the MPI library?&lt;/P&gt;
&lt;P&gt;Best regards,&lt;/P&gt;
&lt;P&gt;Andrey&lt;/P&gt;</description>
      <pubDate>Thu, 24 Jan 2008 12:37:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862794#M1645</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2008-01-24T12:37:38Z</dc:date>
    </item>
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862795#M1646</link>
<description>Hi,&lt;BR /&gt;&lt;BR /&gt;I guess I was not asking for enough debug info. I tried debug levels of 2, 3, and 4 and was getting nowhere. Once I increased to level 10 or above I got a bit more useful info.&lt;BR /&gt;&lt;BR /&gt;&lt;TT&gt;I think we found the problem. We like to compile things statically here&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;so we would typically do something like this:&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;hpcp5551(salmr0)77:mpicc -static cpi.c&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;hpcp5551(salmr0)108:ldd a.out &lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; not a dynamic executable&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;and this works fine and we can run it anywhere over gigabit ethernet or&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;using the sock interface over IB.&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;If we do the same and try to run over IB we get nowhere as you can see&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;from the previous post&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;But for some reason if we compile with the "-static_mpi" flag things&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;seem to work.&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;hpcp5551(salmr0)109:mpicc -static_mpi cpi.c&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;hpcp5551(salmr0)110:ldd a.out &lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002b666073b000)&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002b6660844000)&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002b666095a000)&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002b6660a5f000)&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt; /lib64/ld-linux-x86-64.so.2 (0x00002b666061e000)&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;hpcp5551(salmr0)111:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -np 2&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;-env I_MPI_DEBUG 10 a.out&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] MPI startup(): DAPL provider OpenIB-cma&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[1] MPI startup(): DAPL provider OpenIB-cma&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] MPI startup(): RDMA data transfer mode&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] MPI Startup(): process is pinned to CPU00 on node hpcp5551&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[1] MPI startup(): RDMA data transfer mode&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[1] MPI Startup(): process is pinned to CPU00 on node hpcp5555&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;Process 1 on 192.168.0.5&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;Process 0 on 192.168.0.1&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] Rank Pid Pin cpu Node name&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] 0 7515 0 hpcp5551&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] 1 5192 0 hpcp5555&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] Init(): I_MPI_DEBUG=10&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;[0] Init(): I_MPI_DEVICE=rdma&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;pi is approximately 3.1416009869231241, Error is 0.0000083333333309&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;wall clock time = 0.000111&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;The only problem is the a.out executable is really not static it still&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;had the need for some libs to be loaded dynamically. What are the flags&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;or options we need to generate a true static executable that would run&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;over IB?&lt;/TT&gt;&lt;BR /&gt;
&lt;BR /&gt;
&lt;TT&gt;thanks&lt;/TT&gt;&lt;BR /&gt;
&lt;TT&gt;Rene&lt;/TT&gt;
&lt;BR /&gt;</description>
      <pubDate>Thu, 24 Jan 2008 18:59:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862795#M1646</guid>
      <dc:creator>Rene_S_1</dc:creator>
      <dc:date>2008-01-24T18:59:28Z</dc:date>
    </item>
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862796#M1647</link>
      <description>mpicc -static should have the same effect as gcc -static in choosing static versions of libraries known to gcc. As you figured out, -static_mpi controls the choice of Intel mpi libraries. According to your stated requirement, you would want to use both options.&lt;BR /&gt;</description>
      <pubDate>Thu, 24 Jan 2008 21:46:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862796#M1647</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2008-01-24T21:46:54Z</dc:date>
    </item>
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862797#M1648</link>
<description>Hi,&lt;BR /&gt;&lt;BR /&gt;thanks for the reply. Yes, I can compile using both flags just fine, but if I do that I can no longer run the executable over IB. Here is an example.&lt;BR /&gt;&lt;BR /&gt;Compiling semi-statically using just -static_mpi works fine:&lt;BR /&gt;----------------------------------------------------------&lt;BR /&gt;hpcp5551(salmr0)140:mpicc -static_mpi cpi.c&lt;BR /&gt;hpcp5551(salmr0)141:ldd a.out &lt;BR /&gt; librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002b3805bbe000)&lt;BR /&gt; libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002b3805cc7000)&lt;BR /&gt; libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002b3805ddd000)&lt;BR /&gt; libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002b3805ee2000)&lt;BR /&gt; /lib64/ld-linux-x86-64.so.2 (0x00002b3805aa1000)&lt;BR /&gt;hpcp5551(salmr0)142:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -np 2 -env I_MPI_DEBUG 10 a.out&lt;BR /&gt;[0] MPI startup(): DAPL provider OpenIB-cma&lt;BR /&gt;[1] MPI startup(): DAPL provider OpenIB-cma&lt;BR /&gt;[0] MPI startup(): RDMA data transfer mode&lt;BR /&gt;[0] MPI Startup(): process is pinned to CPU00 on node hpcp5551&lt;BR /&gt;[1] MPI startup(): RDMA data transfer mode&lt;BR /&gt;[1] MPI Startup(): process is pinned to CPU00 on node hpcp5555&lt;BR /&gt;Process 1 on 192.168.0.5&lt;BR /&gt;[0] Rank Pid Pin cpu Node name&lt;BR /&gt;[0] 0 23443 0 hpcp5551&lt;BR /&gt;[0] 1 19241 0 hpcp5555&lt;BR /&gt;[0] Init(): I_MPI_DEBUG=10&lt;BR /&gt;[0] Init(): I_MPI_DEVICE=rdma&lt;BR /&gt;Process 0 on 192.168.0.1&lt;BR /&gt;pi is approximately 3.1416009869231241, Error is 0.0000083333333309&lt;BR /&gt;wall clock time = 0.000159&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now we compile using both flags, -static_mpi and -static, and it does not run:&lt;BR /&gt;--------------------------------------------------------------------------------------&lt;BR /&gt;hpcp5551(salmr0)144:mpicc -static_mpi -static cpi.c&lt;BR /&gt;/opt/intel/impi/3.1/lib64/libmpi.a(I_MPI_wrap_dat.o): In function 
`I_MPI_dlopen_dat':&lt;BR /&gt;I_MPI_wrap_dat.c:(.text+0x30f): warning: Using 'dlopen' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking&lt;BR /&gt;/opt/intel/impi/3.1/lib64/libmpi.a(rdma_iba_util.o): In function `get_addr_by_host_name':&lt;BR /&gt;rdma_iba_util.c:(.text+0x21a): warning: Using 'getaddrinfo' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking&lt;BR /&gt;/opt/intel/impi/3.1/lib64/libmpi.a(sock.o): In function `MPIDU_Sock_get_host_description':&lt;BR /&gt;sock.c:(.text+0x5956): warning: Using 'gethostbyaddr' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking&lt;BR /&gt;/opt/intel/impi/3.1/lib64/libmpi.a(simple_pmi.o): In function `PMII_Connect_to_pm':&lt;BR /&gt;simple_pmi.c:(.text+0x29a8): warning: Using 'gethostbyname' in statically linked applications requires at runtime the shared libraries from the glibc version used for linking&lt;BR /&gt;hpcp5551(salmr0)145:&lt;BR /&gt;hpcp5551(salmr0)145:ldd a.out &lt;BR /&gt; not a dynamic executable&lt;BR /&gt;hpcp5551(salmr0)146:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -np 2 -env I_MPI_DEBUG 10 
a.out&lt;BR /&gt;rank 1 in job 18 192.168.0.1_54412 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 1: killed by signal 11 &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;as you can see, the executable does not run when compiled statically. Here is more verbose output at debug level 100:&lt;BR /&gt;&lt;BR /&gt;hpcp5551(salmr0)147:mpiexec -genv I_MPI_DEVICE rdma:OpenIB-cma -np 2 -env I_MPI_DEBUG 100 a.out&lt;BR /&gt;[0] MPI startup(): attributes for device:&lt;BR /&gt;[0] MPI startup(): NEEDS_LDAT MAYBE&lt;BR /&gt;[0] MPI startup(): HAS_COLLECTIVES (null)&lt;BR /&gt;[0] MPI startup(): I_MPI_LIBRARY_VERSION 3.1&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_DATE_OF_BUILD Fri Oct 5 15:41:02 MSD 2007&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_PKGNAME_UNTARRED mpi_src.32.svsmpi004.20071005&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MY_CMD_NAME_CVS_ID ./BUILD_MPI.sh version: BUILD_MPI.sh,v 1.102 2007/09/13 07:41:42 Exp $&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MY_CMD_LINE ./BUILD_MPI.sh -pkg_name mpi_src.32.svsmpi004.20071005.tar.gz -explode -explode_dirname mpi2.32e.svsmpi020.20071005 -all -copyout -noinstall&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MACHINENAME svsmpi020&lt;BR /&gt;[0] MPI startup(): I_MPI_DEVICE_VERSION 3.1.20071005&lt;BR /&gt;[0] MPI startup(): I_MPI_GCC_VERSION 3.4.4 20050721 (Red Hat 3.4.4-2)&lt;BR /&gt;[0] MPI startup(): I_MPI_ICC_VERSION Version 9.1 Beta Build 20060131 Package ID: l_cc_bc_9.1.023&lt;BR /&gt;[0] MPI startup(): I_MPI_IFORT_VERSION Version 9.1 Beta Build 20060131 Package ID: l_fc_bc_9.1.020&lt;BR /&gt;[0] MPI startup(): attributes for device:&lt;BR /&gt;[0] MPI startup(): NEEDS_LDAT MAYBE&lt;BR /&gt;[0] MPI startup(): HAS_COLLECTIVES (null)&lt;BR /&gt;[0] MPI startup(): I_MPI_LIBRARY_VERSION 3.1&lt;BR /&gt;
[0] MPI startup(): I_MPI_VERSION_DATE_OF_BUILD Fri Oct 5 15:41:02 MSD 2007&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_PKGNAME_UNTARRED mpi_src.32.svsmpi004.20071005&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MY_CMD_NAME_CVS_ID ./BUILD_MPI.sh version: BUILD_MPI.sh,v 1.102 2007/09/13 07:41:42 Exp $&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MY_CMD_LINE ./BUILD_MPI.sh -pkg_name mpi_src.32.svsmpi004.20071005.tar.gz -explode -explode_dirname mpi2.32e.svsmpi020.20071005 -all -copyout -noinstall&lt;BR /&gt;[0] MPI startup(): I_MPI_VERSION_MACHINENAME svsmpi020&lt;BR /&gt;[0] MPI startup(): I_MPI_DEVICE_VERSION 3.1.20071005&lt;BR /&gt;[0] MPI startup(): I_MPI_GCC_VERSION 3.4.4 20050721 (Red Hat 3.4.4-2)&lt;BR /&gt;[0] MPI startup(): I_MPI_ICC_VERSION Version 9.1 Beta Build 20060131 Package ID: l_cc_bc_9.1.023&lt;BR /&gt;[0] MPI startup(): I_MPI_IFORT_VERSION Version 9.1 Beta Build 20060131 Package ID: l_fc_bc_9.1.020&lt;BR /&gt;[0] I_MPI_dlopen_dat(): trying to dlopen default -ldat: libdat.so&lt;BR /&gt;[0] my_dlopen(): trying to dlopen: libdat.so&lt;BR /&gt;rank 0 in job 19 192.168.0.1_54412 caused collective abort of all ranks&lt;BR /&gt; exit status of rank 0: killed by signal 11 &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Rene&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 24 Jan 2008 21:59:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862797#M1648</guid>
      <dc:creator>Rene_S_1</dc:creator>
      <dc:date>2008-01-24T21:59:39Z</dc:date>
    </item>
    <item>
      <title>Re: intel mpi and infiniband udapl</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862798#M1649</link>
<description>&lt;P&gt;Rene,&lt;/P&gt;
&lt;P&gt;You cannot build a truly static executable that is guaranteed to run over IB. This is due to libc runtime limitations: there is a dlopen() call inside the MPI library which requires the same runtime to be present on the other cluster nodes. You probably saw warning messages to that effect when you tried the mpicc -static option.&lt;/P&gt;
&lt;P&gt;Best regards,&lt;/P&gt;
&lt;P&gt;Andrey&lt;/P&gt;</description>
      <pubDate>Fri, 25 Jan 2008 15:19:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-and-infiniband-udapl/m-p/862798#M1649</guid>
      <dc:creator>Andrey_D_Intel</dc:creator>
      <dc:date>2008-01-25T15:19:08Z</dc:date>
    </item>
  </channel>
</rss>

