<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>How are you launching your topic in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950701#M2913</link>
    <description>&lt;P&gt;How are you launching your program?&amp;nbsp; When you ran ldd, did you run it in a job on a node, or directly on the head node?&amp;nbsp; If you haven't already, try running ldd the same way you are launching your program, and see if the linkage is correct in the job.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
    <pubDate>Wed, 09 Oct 2013 19:54:34 GMT</pubDate>
    <dc:creator>James_T_Intel</dc:creator>
    <dc:date>2013-10-09T19:54:34Z</dc:date>
    <item>
      <title>Issue migrating to Intel MPI</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950700#M2912</link>
      <description>&lt;P&gt;I manage a legacy code that has been built with the Intel compiler along with the MPI/Pro library for years, but in the last couple of years we have been trying to convert from MPI/Pro to Intel MPI. &amp;nbsp;To date, we have tried to migrate 3 times using 3 different versions of Intel MPI and very time we have hit a different roadblock. &amp;nbsp;I am trying again and have hit yet another roadblock and I have run out of ideas as to how to resolve it. &amp;nbsp;The code appears to compile fine, but when I run it I get the following runtime error:&lt;/P&gt;
&lt;P&gt;/home/&amp;lt;username&amp;gt;/src/rtpmain: symbol lookup error: /home/&amp;lt;username&amp;gt;/src/rtpmain: undefined symbol: DftiCreateDescriptor_s_md&lt;/P&gt;
&lt;P&gt;This error occurs the first time an FFT is performed. I built and ran the code on RHEL5, and everything about the code is the same except for the MPI library; the only changes made were how the code was built and submitted to the scheduler (PBS Pro).&lt;/P&gt;
&lt;P&gt;Since there was an unresolved symbol, I thought the environment wasn't set up correctly, so I included an "ldd" of the executable within the submit script to make sure the environment on the executing node was set up correctly, and everything looks fine:&lt;/P&gt;
&lt;P&gt;libdl.so.2 =&amp;gt; /lib64/libdl.so.2&lt;BR /&gt;libmkl_intel_lp64.so =&amp;gt; /opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_intel_lp64.so&lt;BR /&gt;libmkl_core.so =&amp;gt; /opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_core.so&lt;BR /&gt;libmkl_sequential.so =&amp;gt; /opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_sequential.so&lt;BR /&gt;libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0&lt;BR /&gt;libm.so.6 =&amp;gt; /lib64/libm.so.6&lt;BR /&gt;librt.so.1 =&amp;gt; /lib64/librt.so.1&lt;BR /&gt;libmpi_mt.so.4 =&amp;gt; /opt/intel/impi/4.0.3.008/lib64/libmpi_mt.so.4&lt;BR /&gt;libmpigf.so.4 =&amp;gt; /opt/intel/impi/4.0.3.008/lib64/libmpigf.so.4&lt;BR /&gt;libgcc_s.so.1 =&amp;gt; /lib64/libgcc_s.so.1&lt;/P&gt;
&lt;P&gt;Since the only thing that changed in the code was the MPI library, and the actual error has nothing to do with MPI (we are using the sequential version of MKL), I thought the issue might have something to do with mpicc and what it passes to the compiler/linker. Here is the output from the make:&lt;/P&gt;
&lt;P&gt;/opt/intel/impi/4.0.3.008/bin64/mpicc -cc=/opt/intel/composer_xe_2011_sp1.6.233/bin/intel64/icc -mt_mpi -echo -I../include -I../libs/vlib/include -I../libs/util/include -I/usr/local/hdf5-1.8.10/64_intel121_threadsafe_include -I/opt/intel/impi/4.0.4.008/include64 -I/opt/intel/composer_xe_2011_sp1.6.233/mkl/include -O3 -ip -axSSE4.2 -mssse3 -D_GNU_SOURCE -D H5_USE_16_API -L/opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64 -L/opt/intel/impi/4.0.3.008/lib64 -o rtpmain rtpmain.c srcFile1.o … srcFileN.o ../shared/shared.a ../libs/util/lib/libvec.a ../libs/util/lib/libutil.a -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lrt -lm&lt;/P&gt;
&lt;P&gt;Using the mpicc -echo option, here is what mpicc adds (see sections in &lt;B&gt;bold&lt;/B&gt;) to the build process:&lt;/P&gt;
&lt;P&gt;/opt/intel/composer_xe_2011_sp1.6.233/bin/intel64/icc &lt;B&gt;-ldl -ldl -ldl -ldl&lt;/B&gt; -I../include -I../libs/vlib/include -I../libs/util/include -I/usr/local/hdf5-1.8.10/64_intel121_threadsafe_include -I/opt/intel/impi/4.0.4.008/include64 -I/opt/intel/composer_xe_2011_sp1.6.233/mkl/include -O3 -ip -axSSE4.2 -mssse3 -D_GNU_SOURCE -D H5_USE_16_API -L/opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64 -L/opt/intel/impi/4.0.3.008/lib64 -o rtpmain rtpmain.c srcFile1.o … srcFileN.o ../shared/shared.a ../libs/vlib/lib/libvec.a ../libs/util/lib/libutil.a -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lrt -lm &lt;B&gt;-I/opt/intel/impi/4.0.3.008/intel64/include -L/opt/intel/impi/4.0.3.008/intel64/lib -Xlinker -enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel/impi/4.0.3.008/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/4.0.3 -lmpi_mt -lmpigf -lmpigi -lpthread -lpthread -lpthread -lpthread -lrt&lt;/B&gt;&lt;/P&gt;
&lt;P&gt;As a comparison, here is what the MPI/Pro mpicc script adds to the build process:&lt;/P&gt;
&lt;P&gt;/opt/intel/composer_xe_2011_sp1.6.233/bin/intel64/icc -I../include -I../libs/vlib/include -I../libs/util/include -I/usr/local/hdf5-1.8.10/64_intel121_threadsafe_include -I/usr/local/mpipro-2.2.0-rh4-64/include -I/opt/intel/composer_xe_2011_sp1.6.233/mkl/include -O3 -ip -axSSE4.2 -mssse3 -D_GNU_SOURCE -D H5_USE_16_API -L/opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64 -L/usr/local/mpipro-2.2.0-rh4-64/lib64 -o rtpmain rtpmain.c srcFile1.o … srcFileN.o ../shared/shared.a ../libs/vlib/lib/libvec.a ../libs/util/lib/libutil.a -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lrt &lt;B&gt;-I/usr/local/mpipro-2.2.0-rh4-64/include -L/usr/local/mpipro-2.2.0-rh4-64/lib64 -lmpipro -lpthread -lm&lt;/B&gt;&lt;/P&gt;
&lt;P&gt;I have made numerous changes to this build process, including bypassing mpicc and adding various compiler/linker options, but nothing has made any difference. At this point I am at a loss as to what to do next. Does anyone have any ideas that I can try?&lt;/P&gt;
&lt;P&gt;Any feedback you can provide would be greatly appreciated.&lt;/P&gt;</description>
      <pubDate>Tue, 08 Oct 2013 23:55:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950700#M2912</guid>
      <dc:creator>jburri</dc:creator>
      <dc:date>2013-10-08T23:55:32Z</dc:date>
    </item>
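A clean ldd listing combined with a runtime symbol lookup error, as described in the post above, can usually be narrowed down with standard binutils tools. The shell sketch below is an editorial illustration, not part of the original thread: the MKL path is the one quoted in the post and rtpmain is the poster's executable name, so adjust both to your installation.

```shell
# Sketch of a symbol-resolution check for the missing symbol; the MKL
# path is the one quoted in the post, adjust it to your install.
MKL_LIB=/opt/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64
SYM=DftiCreateDescriptor_s_md

# 1. Which MKL library defines the symbol? ("T" or "D" marks a definition.)
if [ -d "$MKL_LIB" ]; then
  nm -D "$MKL_LIB"/libmkl_*.so 2>/dev/null | grep " [TD] $SYM"
fi

# 2. Is the symbol still undefined ("U") in the final executable?
if [ -x ./rtpmain ]; then
  nm -D ./rtpmain | grep " U $SYM"
  # Unlike plain ldd, 'ldd -r' performs full symbol relocation and
  # reports any symbol that no loaded library provides.
  ldd -r ./rtpmain
fi
echo "checks finished for $SYM"
```

If step 1 shows the symbol defined in one of the MKL interface libraries (libmkl_intel_lp64.so is its usual home for the LP64 DFTI interface) while ldd -r still reports it undefined, the executable is most likely resolving a different MKL at run time than the one it was linked against.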
    <item>
      <title>How are you launching your</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950701#M2913</link>
      <description>&lt;P&gt;How are you launching your program?&amp;nbsp; When you ran ldd, did you run it in a job on a node, or directly on the head node?&amp;nbsp; If you haven't already, try running ldd the same way you are launching your program, and see if the linkage is correct in the job.&lt;/P&gt;
&lt;P&gt;Sincerely,&lt;BR /&gt; James Tullos&lt;BR /&gt; Technical Consulting Engineer&lt;BR /&gt; Intel® Cluster Tools&lt;/P&gt;</description>
      <pubDate>Wed, 09 Oct 2013 19:54:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950701#M2913</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2013-10-09T19:54:34Z</dc:date>
    </item>
    <item>
      <title>I am launching the program</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950702#M2914</link>
      <description>&lt;P&gt;I am launching the program using mpirun within a script that is submitted to PBS. &amp;nbsp;The ldd command is within the submit script and is run when the job goes into execution on the execution node. &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 09 Oct 2013 20:00:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Issue-migrating-to-Intel-MPI/m-p/950702#M2914</guid>
      <dc:creator>jburri</dc:creator>
      <dc:date>2013-10-09T20:00:04Z</dc:date>
    </item>
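Since the thread leaves off at verifying the execution-node environment, a submit-script sketch may help make the suggestion concrete. Everything here is an assumption for illustration: the resource request and process count are invented, and only the rtpmain name and the use of PBS come from the posts above.

```shell
#!/bin/sh
# Hypothetical PBS submit-script sketch: capture the dynamic-linker state
# on the execution node itself, then launch, so that link-time and
# run-time library resolution can be compared from the job's stdout.
#PBS -l nodes=1
cd "${PBS_O_WORKDIR:-.}"
echo "host: $(hostname)"
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
# 'ldd -r' resolves symbols, so it would flag DftiCreateDescriptor_s_md
# here if the node's MKL does not provide it; plain ldd only lists
# which libraries are found, not whether every symbol resolves.
if [ -x ./rtpmain ]; then
  ldd -r ./rtpmain
  mpirun -n 4 ./rtpmain
else
  echo "rtpmain not found in $PWD"
fi
```

Comparing the ldd -r output captured in the job's stdout with the same command run on the head node shows immediately whether the execution nodes resolve a different set of MKL libraries.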
  </channel>
</rss>

