<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault? in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875887#M8918</link>
    <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
&lt;BR /&gt;Filippo,&lt;BR /&gt;&lt;BR /&gt;I am sorry I didn't put it clearly. I noticed that #include &lt;malloc.h&gt; is missing in your example, so I suspect that the compiler uses an implicit declaration for it, namely 'int malloc()' instead of 'void *malloc(size_t)'.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Dima</description>
    <pubDate>Wed, 09 Sep 2009 09:16:17 GMT</pubDate>
    <dc:creator>Dmitry_B_Intel</dc:creator>
    <dc:date>2009-09-09T09:16:17Z</dc:date>
    <item>
      <title>MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875884#M8915</link>
      <description>Hi all, I have a problem with MKL FFT Cluster routines. I have followed the code example "1D In-place Cluster FFT Computations" and I have tried to run a simple parallel job on two different clusters equipped with different processors (Intel Xeon CPU X5570 @ 2.93GHz and Dual-Core AMD Opteron Processor 2218), different resource managers (SLURM and LSF) and different versions of the MKL library (10.0.010 and 10.0.2). The underlying MPI runtime environment is Open MPI 1.3.2, compiled using Intel Compiler 10.1. On both clusters, the test program crashes at the same point with the same error. &lt;BR /&gt;&lt;BR /&gt;This is the output &lt;BR /&gt;$ cat out &lt;BR /&gt;I'm 3 and I have passed STEP 1 &lt;BR /&gt;I'm 0 and I have passed STEP 1 &lt;BR /&gt;I'm 1 and I have passed STEP 1 &lt;BR /&gt;I'm 2 and I have passed STEP 1 &lt;BR /&gt;&lt;BR /&gt;TID HOST_NAME COMMAND_LINE STATUS TERMINATION_TIME&lt;BR /&gt;===== ========== ================ ======================= ===================&lt;BR /&gt;00000 node0027 ./x.fft_mpi_mkl Exit (5) 09/08/2009 21:53:25&lt;BR /&gt;00001 node0028 ./x.fft_mpi_mkl Exit (5) 09/08/2009 21:53:25&lt;BR /&gt;00002 node0023 ./x.fft_mpi_mkl Exit (5) 09/08/2009 21:53:25&lt;BR /&gt;00003 node0021 ./x.fft_mpi_mkl Exit (5) 09/08/2009 21:53:25&lt;BR /&gt;&lt;BR /&gt;and this error trace:&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;[node0021:05836] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0021:05836] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0021:05836] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0021:05836] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)&lt;BR /&gt;[node0023:25571] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0027:30416] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0023:25571] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0023:25571] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0023:25571] *** 
MPI_ERRORS_ARE_FATAL (your MPI job will now abort)&lt;BR /&gt;[node0027:30416] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0027:30416] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0027:30416] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)&lt;BR /&gt;[node0028:15920] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0028:15920] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0028:15920] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0028:15920] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)&lt;BR /&gt;--------------------------------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;I have compiled the program on both clusters using this command &lt;BR /&gt;$ mpicc -openmp -w test.c -Wl,--start-group $MKL_INCLUDE $MKL_LIB/libmkl_cdft_core.a $MKL_LIB/libmkl_blacs_openmpi_lp64.a $MKL_LIB/libmkl_intel_lp64.a $MKL_LIB/libmkl_intel_thread.a $MKL_LIB/libmkl_core.a -Wl,--end-group -L$MKL_LIB -liomp5 -lpthread -lm -o x.fft_mpi_mkl ("lib/em64t" in both cases) &lt;BR /&gt;&lt;BR /&gt;The program, with some additions to print debug information, is attached to this post. &lt;BR /&gt;What's wrong? Could the problem be related to Open MPI?&lt;BR /&gt;&lt;BR /&gt;Thank you very much in advance,&lt;BR /&gt;Cheers!</description>
      <pubDate>Tue, 08 Sep 2009 20:14:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875884#M8915</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-08T20:14:58Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875885#M8916</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
&lt;BR /&gt;Hi Filippo,&lt;BR /&gt;Have you checked that the compiler doesn't complain about an undeclared malloc? If undeclared, it is assumed to return int, possibly truncating pointers to 32 bits. This might be the cause of the failure you see.&lt;BR /&gt;Thanks&lt;BR /&gt;Dima</description>
      <pubDate>Wed, 09 Sep 2009 08:44:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875885#M8916</guid>
      <dc:creator>Dmitry_B_Intel</dc:creator>
      <dc:date>2009-09-09T08:44:07Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875886#M8917</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/93647"&gt;Dmitry Baksheev (Intel)&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt; &lt;BR /&gt;Hi Filippo,&lt;BR /&gt;Have you checked that compiler doesn't complaint on undeclared malloc? If undeclared it is assumed to return int, possibly cutting pointers to 32bit. This might be the cause of the failure you see.&lt;BR /&gt;Thanks&lt;BR /&gt;Dima&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BR /&gt;Hi Dmitry,&lt;BR /&gt;are you suggesting compiling with the "-m64" flag explicitly? &lt;BR /&gt;&lt;BR /&gt;I have just tried adding the "-m64" flag but nothing changes. I think the problem may be related to Open MPI. I switched from Open MPI 1.3.2 to Open MPI 1.2.6 but the problem persists!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;gt;$ mpicc -openmp -m64 test.c -Wl,--start-group $MKL_INCLUDE $MKL_LIB/libmkl_cdft_core.a $MKL_LIB/libmkl_blacs_openmpi_lp64.a $MKL_LIB/libmkl_intel_lp64.a $MKL_LIB/libmkl_intel_thread.a $MKL_LIB/libmkl_core.a -Wl,--end-group -L$MKL_LIB -liomp5 -lpthread -lm -o x.fft_mpi_mkl &lt;BR /&gt;&lt;BR /&gt;&amp;gt;$ ldd x.fft_mpi_mkl  &lt;BR /&gt; libiomp5.so =&amp;gt; /opt/MKL/10.0.2/intel--10.1/lib/em64t/libiomp5.so (0x00002aaaaaac7000) &lt;BR /&gt; libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002aaaaacc6000)&lt;BR /&gt; libimf.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libimf.so (0x00002aaaaaee0000)&lt;BR /&gt; libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x00002aaaab243000)&lt;BR /&gt; libmpi.so.0 =&amp;gt; /opt/openmpi/1.2.6/intel--10.1/lib/libmpi.so.0 (0x00002aaaab4c6000)&lt;BR /&gt; libopen-rte.so.0 =&amp;gt; /opt/openmpi/1.2.6/intel--10.1/lib/libopen-rte.so.0 (0x00002aaaab854000)&lt;BR /&gt; libopen-pal.so.0 =&amp;gt; /opt/openmpi/1.2.6/intel--10.1/lib/libopen-pal.so.0 (0x00002aaaabb61000)&lt;BR /&gt; libibverbs.so.1 =&amp;gt; /usr/lib64/libibverbs.so.1 (0x00002aaaabdd5000)&lt;BR /&gt; librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002aaaabfe0000)&lt;BR /&gt; libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002aaaac1ea000)&lt;BR /&gt; libnsl.so.1 =&amp;gt; /lib64/libnsl.so.1 (0x00002aaaac3ee000)&lt;BR /&gt; libutil.so.1 =&amp;gt; /lib64/libutil.so.1 (0x00002aaaac606000)&lt;BR /&gt; libguide.so =&amp;gt; /opt/MKL/10.0.2/intel--10.1/lib/em64t/libguide.so (0x00002aaaac80a000)&lt;BR /&gt; libgcc_s.so.1 =&amp;gt; /lib64/libgcc_s.so.1 (0x0000003081c00000)&lt;BR /&gt; libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002aaaac971000)&lt;BR /&gt; 
/lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)&lt;BR /&gt; libsvml.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libsvml.so (0x00002aaaaccc2000)&lt;BR /&gt; libintlc.so.5 =&amp;gt; /opt/intel/fce/10.1.011/lib/libintlc.so.5 (0x00002aaaace49000)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[node0020:22164] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0020:22164] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0020:22164] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0020:22164] *** MPI_ERRORS_ARE_FATAL (goodbye)&lt;BR /&gt;[node0027:04801] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0027:04801] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0027:04801] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0027:04801] *** MPI_ERRORS_ARE_FATAL (goodbye)&lt;BR /&gt;[node0019:09289] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0019:09289] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0019:09289] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0019:09289] *** MPI_ERRORS_ARE_FATAL (goodbye)&lt;BR /&gt;[node0022:32707] *** An error occurred in MPI_comm_size&lt;BR /&gt;[node0022:32707] *** on communicator MPI_COMM_WORLD&lt;BR /&gt;[node0022:32707] *** MPI_ERR_COMM: invalid communicator&lt;BR /&gt;[node0022:32707] *** MPI_ERRORS_ARE_FATAL (goodbye)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Sep 2009 09:04:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875886#M8917</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-09T09:04:54Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875887#M8918</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
&lt;BR /&gt;Filippo,&lt;BR /&gt;&lt;BR /&gt;I am sorry I didn't put it clearly. I noticed that #include &lt;malloc.h&gt; is missing in your example, so I suspect that the compiler uses an implicit declaration for it, namely 'int malloc()' instead of 'void *malloc(size_t)'.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Dima</description>
      <pubDate>Wed, 09 Sep 2009 09:16:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875887#M8918</guid>
      <dc:creator>Dmitry_B_Intel</dc:creator>
      <dc:date>2009-09-09T09:16:17Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875888#M8919</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
Hi Filippo,&lt;BR /&gt;&lt;BR /&gt;According to the release notes, MKL currently supports only versions 1.2.x of Open MPI.&lt;BR /&gt;Would it be possible for you to try this older version?&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;-Vladimir&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Sep 2009 09:54:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875888#M8919</guid>
      <dc:creator>Vladimir_Petrov__Int</dc:creator>
      <dc:date>2009-09-09T09:54:54Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875889#M8920</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
Dear all,&lt;BR /&gt; I have tried using "#include &lt;malloc.h&gt;" and different versions of Open MPI (1.2.5, 1.2.6 and 1.2.7) compiled with icc 10.1. The problem remains the same as above. Only on one of the two clusters do I have the possibility to modify or change the Open MPI version. &lt;BR /&gt;&lt;BR /&gt;If it can be useful, Open MPI 1.2.7 was compiled using these flags...&lt;BR /&gt;&lt;BR /&gt;export CC=icc&lt;BR /&gt;export CXX=icc&lt;BR /&gt;export F77=ifort&lt;BR /&gt;export F90=ifort&lt;BR /&gt;export FC=$F90&lt;BR /&gt;export CFLAGS="-O2"&lt;BR /&gt;export CXXFLAGS="$CFLAGS -lstdc++"&lt;BR /&gt;export FFLAGS="-O2"&lt;BR /&gt;export FCFLAGS="-O2"&lt;BR /&gt;export LDFLAGS="-O2"&lt;BR /&gt;export F77FLAGS="-02"&lt;BR /&gt;&lt;BR /&gt;./configure --prefix=... --disable-ipv6 --enable-static --with-openib=/usr/local/ofed --with-openib-libdir=/usr/local/ofed/lib64 --with-mpi-f90-size=medium --with-io-romio-flags="--with-filesystems=ufs" --enable-mpi-threads --enable-cxx-exceptions&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Sep 2009 10:26:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875889#M8920</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-09T10:26:49Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875890#M8921</link>
      <description>&lt;DIV style="margin: 0px; height: auto;"&gt;&lt;/DIV&gt;
Hi Filippo,&lt;BR /&gt;&lt;BR /&gt;While I am trying to reproduce your issue, could you please check the following two potential problems with compiling your test:&lt;BR /&gt;1. The MKL_INCLUDE directory does not seem to be placed correctly on the compile-link line - please prepend it with "-I" and place it outside the group clauses;&lt;BR /&gt;2. ldd reports dependencies on two Intel threading libraries - both libguide.so and libiomp5.so. I would strongly recommend that you use exactly one - the one that was used to build Open MPI.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;-Vladimir&lt;BR /&gt;</description>
      <pubDate>Thu, 10 Sep 2009 09:29:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875890#M8921</guid>
      <dc:creator>Vladimir_Petrov__Int</dc:creator>
      <dc:date>2009-09-10T09:29:31Z</dc:date>
    </item>
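Vladimir's two suggestions above can be sketched as a corrected build line. This is an illustrative sketch only: `$MKL_INCLUDE` and `$MKL_LIB` are the poster's environment variables (the thread later shows `$MKL_INCLUDE` already expands to `-I/opt/MKL/.../include`), and the choice of `-liomp5` assumes that is the threading runtime Open MPI was built against.

```shell
# Sketch of the corrected compile-link line: the include flag sits outside
# the linker group, and exactly one Intel threading runtime is linked.
mpicc -openmp -m64 test.c $MKL_INCLUDE \
    -Wl,--start-group \
        $MKL_LIB/libmkl_cdft_core.a \
        $MKL_LIB/libmkl_blacs_openmpi_lp64.a \
        $MKL_LIB/libmkl_intel_lp64.a \
        $MKL_LIB/libmkl_intel_thread.a \
        $MKL_LIB/libmkl_core.a \
    -Wl,--end-group \
    -L$MKL_LIB -liomp5 -lpthread -lm -o x.fft_mpi_mkl
# Do not also link libguide: one OpenMP runtime per process.
```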
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875891#M8922</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/93654"&gt;Vladimir Petrov (Intel)&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt;1. The MKL_INCLUDE directory does not seem to be placed correctly on the compile-link line - please prepend it with "-I" and place outside the group-clauses;&lt;BR /&gt;2. ldd reports dependecies on two Intel threading libraries - both libguide.so and libiomp5.so. I would strongly recommend that you use exactly one - that which was used to build Open MPI.&lt;BR /&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BR /&gt;1. The MKL_INCLUDE environment variable already includes "-I"&lt;BR /&gt;$ echo $MKL_INCLUDE&lt;BR /&gt;-I/opt/MKL/10.0.2/intel--10.1/include&lt;BR /&gt;$ ls -1 /opt/MKL/10.0.2/intel--10.1/include&lt;BR /&gt;fftw&lt;BR /&gt;i_malloc.h&lt;BR /&gt;mkl_blas.f90&lt;BR /&gt;mkl_blas.fi&lt;BR /&gt;mkl_blas.h&lt;BR /&gt;mkl_cblas.h&lt;BR /&gt;[...and so on...]&lt;BR /&gt;&lt;BR /&gt;2. I have tried "-lguide" instead of "-liomp5" ...&lt;BR /&gt;&lt;BR /&gt;$ ldd x.fft_mpi_mkl &lt;BR /&gt; libguide.so =&amp;gt; /opt/MKL/10.0.2/intel--10.1/lib/em64t/libguide.so (0x00002aaaaaac7000)&lt;BR /&gt; libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002aaaaacc6000)&lt;BR /&gt; libmpi.so.0 =&amp;gt; /opt/openmpi/1.2.7/intel--10.1/lib/libmpi.so.0 (0x00002aaaaaee0000)&lt;BR /&gt; libopen-rte.so.0 =&amp;gt; /opt/openmpi/1.2.7/intel--10.1/lib/libopen-rte.so.0 (0x00002aaaab285000)&lt;BR /&gt; libopen-pal.so.0 =&amp;gt; /opt/openmpi/1.2.7/intel--10.1/lib/libopen-pal.so.0 (0x00002aaaab59a000)&lt;BR /&gt; libibverbs.so.1 =&amp;gt; /usr/lib64/libibverbs.so.1 (0x00002aaaab80f000)&lt;BR /&gt; librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002aaaaba1b000)&lt;BR /&gt; libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002aaaabc24000)&lt;BR /&gt; libnsl.so.1 =&amp;gt; /lib64/libnsl.so.1 (0x00002aaaabe28000)&lt;BR /&gt; libutil.so.1 =&amp;gt; /lib64/libutil.so.1 (0x00002aaaac041000)&lt;BR /&gt; libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x00002aaaac244000)&lt;BR /&gt; libgcc_s.so.1 =&amp;gt; /lib64/libgcc_s.so.1 (0x0000003081c00000)&lt;BR /&gt; libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002aaaac4c8000)&lt;BR /&gt; /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)&lt;BR /&gt; libimf.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libimf.so (0x00002aaaac818000)&lt;BR /&gt; libsvml.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libsvml.so (0x00002aaaacb7a000)&lt;BR /&gt; libintlc.so.5 =&amp;gt; /opt/intel/fce/10.1.011/lib/libintlc.so.5 (0x00002aaaacd02000)&lt;BR /&gt;&lt;BR /&gt;and also with Open MPI 1.2.5&lt;BR /&gt;&lt;BR /&gt;[cin8310a@node1310 test_mkl_fft]$ ldd x.fft_mpi_mkl &lt;BR /&gt; libguide.so =&amp;gt; /opt/MKL/10.0.2/intel--10.1/lib/em64t/libguide.so (0x00002aaaaaac7000)&lt;BR /&gt; libpthread.so.0 =&amp;gt; /lib64/libpthread.so.0 (0x00002aaaaacc6000)&lt;BR /&gt; libimf.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libimf.so (0x00002aaaaaee0000)&lt;BR /&gt; libm.so.6 =&amp;gt; /lib64/libm.so.6 (0x00002aaaab243000)&lt;BR /&gt; libmpi.so.0 =&amp;gt; /opt/openmpi/1.2.5/intel/10.1/lib/libmpi.so.0 (0x00002aaaab4c6000)&lt;BR /&gt; libopen-rte.so.0 =&amp;gt; /opt/openmpi/1.2.5/intel/10.1/lib/libopen-rte.so.0 (0x00002aaaab854000)&lt;BR /&gt; libopen-pal.so.0 =&amp;gt; /opt/openmpi/1.2.5/intel/10.1/lib/libopen-pal.so.0 (0x00002aaaabb61000)&lt;BR /&gt; libibverbs.so.1 =&amp;gt; /usr/lib64/libibverbs.so.1 (0x00002aaaabdd5000)&lt;BR /&gt; librt.so.1 =&amp;gt; /lib64/librt.so.1 (0x00002aaaabfe0000)&lt;BR /&gt; libdl.so.2 =&amp;gt; /lib64/libdl.so.2 (0x00002aaaac1ea000)&lt;BR /&gt; libnsl.so.1 =&amp;gt; /lib64/libnsl.so.1 (0x00002aaaac3ee000)&lt;BR /&gt; libutil.so.1 =&amp;gt; /lib64/libutil.so.1 (0x00002aaaac606000)&lt;BR /&gt; libgcc_s.so.1 =&amp;gt; /lib64/libgcc_s.so.1 (0x0000003081c00000)&lt;BR /&gt; libc.so.6 =&amp;gt; /lib64/libc.so.6 (0x00002aaaac80a000)&lt;BR /&gt; /lib64/ld-linux-x86-64.so.2 (0x00002aaaaaaab000)&lt;BR /&gt; libsvml.so =&amp;gt; /opt/intel/fce/10.1.011/lib/libsvml.so (0x00002aaaacb5a000)&lt;BR /&gt; libintlc.so.5 =&amp;gt; /opt/intel/fce/10.1.011/lib/libintlc.so.5 (0x00002aaaacce2000)&lt;BR /&gt;&lt;BR /&gt;No changes :-(&lt;BR /&gt;&lt;BR /&gt;Thank you for the support&lt;BR /&gt;</description>
      <pubDate>Thu, 10 Sep 2009 10:08:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875891#M8922</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-10T10:08:04Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875892#M8923</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
Hi Filippo,&lt;BR /&gt;&lt;BR /&gt;I managed to reproduce your issue on my local machine. It looks like it is caused by an incompatibility between MKL BLACS and the options you used to build Open MPI.&lt;BR /&gt;&lt;BR /&gt;BTW, is F77FLAGS really set to "-02" (where "0" is zero)?&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;-Vladimir&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Sep 2009 02:55:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875892#M8923</guid>
      <dc:creator>Vladimir_Petrov__Int</dc:creator>
      <dc:date>2009-09-11T02:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875893#M8924</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/93654"&gt;Vladimir Petrov (Intel)&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt; BTW, is F77FLAGS really set to "-02" (where "0" is zero)?&lt;BR /&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BR /&gt;Yes. If you can suggest the flags needed for full compatibility with MKL, I will try to recompile Open MPI and run further tests.&lt;BR /&gt;&lt;BR /&gt;Thanks a lot!&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Sep 2009 07:28:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875893#M8924</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-11T07:28:42Z</dc:date>
    </item>
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875894#M8925</link>
      <description>&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
Hi Filippo,&lt;BR /&gt;&lt;BR /&gt;I finally figured out what the problem is.&lt;BR /&gt;Since Open MPI implements MPI_COMM_WORLD as a pointer, it turns out to be 64 bits long, whereas Cluster FFT was designed at a time when sizeof(MPI_Comm) was 32 bits. In order to work correctly with Open MPI you just need to wrap the communicator as follows:&lt;BR /&gt;&lt;BR /&gt;DftiCreateDescriptorDM(MPI_Comm_c2f(MPI_COMM_WORLD),&amp;amp;desc,DFTI_DOUBLE,DFTI_COMPLEX,1,len);&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;-Vladimir&lt;BR /&gt;&lt;BR /&gt;P.S. I hope you will agree that later is better than never.&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Sep 2009 08:59:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875894#M8925</guid>
      <dc:creator>Vladimir_Petrov__Int</dc:creator>
      <dc:date>2009-09-11T08:59:04Z</dc:date>
    </item>
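For completeness, the fix Vladimir describes above can be sketched as a minimal C program. This is a hypothetical skeleton, not the thread's attached test: it assumes an MPI and MKL CDFT toolchain (`mpi.h`, `mkl_cdft.h`), and the transform length is illustrative. The key line mirrors the call given in the reply: `MPI_Comm_c2f` converts Open MPI's pointer-sized C communicator into the integer Fortran handle that the Cluster FFT interface of that era expects.

```c
/* Hypothetical sketch of the MPI_Comm_c2f workaround for Cluster FFT.
 * Open MPI's MPI_Comm is a pointer (64-bit on em64t), while Cluster FFT
 * was designed for a 32-bit communicator handle. */
#include <mpi.h>
#include <mkl_cdft.h>

int main(int argc, char *argv[])
{
    DFTI_DESCRIPTOR_DM_HANDLE desc;
    MKL_LONG len = 1024;            /* illustrative transform length */

    MPI_Init(&argc, &argv);

    /* Pass the Fortran integer handle, not the raw C communicator. */
    DftiCreateDescriptorDM(MPI_Comm_c2f(MPI_COMM_WORLD),
                           &desc, DFTI_DOUBLE, DFTI_COMPLEX, 1, len);

    /* ... allocate, DftiCommitDescriptorDM, DftiComputeForwardDM, etc.,
     * as in the "1D In-place Cluster FFT Computations" example ... */

    DftiFreeDescriptorDM(&desc);
    MPI_Finalize();
    return 0;
}
```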
    <item>
      <title>Re: MKL FFT Cluster: real DFTI_BAD_DESCRIPTOR problem or internal MPI fault?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875895#M8926</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/93654"&gt;Vladimir Petrov (Intel)&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt;Since Open MPI considers MPI_COMM_WORLD to be a pointer it turns out to be 64-bit long. Whereas Cluster FFT was designed in times where sizeof(MPI_Comm) used to be 32-bit. In order to work correctly with Open MPI you just need to wrap the communicator as follows:&lt;BR /&gt;&lt;BR /&gt;DftiCreateDescriptorDM(MPI_Comm_c2f(MPI_COMM_WORLD),&amp;amp;desc,DFTI_DOUBLE,DFTI_COMPLEX,1,len);&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BR /&gt;Great, it works!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;
&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/93654"&gt;Vladimir Petrov (Intel)&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt; P.S. I hope you will agree that later is better than never.&lt;BR /&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;BR /&gt;I agree with you (-: Thank you very much again for your support!&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;</description>
      <pubDate>Fri, 11 Sep 2009 09:06:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-FFT-Cluster-real-DFTI-BAD-DESCRIPTOR-problem-or-internal-MPI/m-p/875895#M8926</guid>
      <dc:creator>Filippo_Spiga</dc:creator>
      <dc:date>2009-09-11T09:06:55Z</dc:date>
    </item>
  </channel>
</rss>

