<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MKL ScaLAPACK pdgetrf_ in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870979#M8474</link>
    <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/367365"&gt;tim18&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt;
&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
It looks like headers from incompatible MPI implementations are involved in your build. For example, you built some .o files with one version of MPI and some with another, or you linked a library (including MKL) that is intended for a different MPI implementation than your &lt;mpi.h&gt;. You must use a consistent MPI throughout the build and at run time.&lt;BR /&gt;For example, OSU MVAPICH1 uses a different encoding of MPI data types than Intel MPI.&lt;BR /&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
Thanks a lot.
&lt;DIV&gt;&lt;SPAN style="font-family: Verdana, Arial, Helvetica, sans-serif;"&gt;I'll check it out.&lt;/SPAN&gt;&lt;/DIV&gt;</description>
    <pubDate>Sat, 05 Dec 2009 01:12:24 GMT</pubDate>
    <dc:creator>phaser75</dc:creator>
    <dc:date>2009-12-05T01:12:24Z</dc:date>
    <item>
      <title>MKL ScaLAPACK pdgetrf_</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870977#M8472</link>
      <description>Hello!&lt;BR /&gt;&lt;BR /&gt;I'm trying to run my simple code for testing ScaLAPACK, but there is an error when running it.&lt;BR /&gt;The code is simple. I use 4 processors and have the prescribed block-cyclic distributed matrix stored as 4 files.&lt;BR /&gt;When the program starts, each processor reads a matrix from the files and calls pdgetrf_().&lt;BR /&gt;&lt;BR /&gt;If I don't call pdgetrf_(), the program finishes correctly with the message "calculation END". But if I insert pdgetrf_(),&lt;BR /&gt;there is an error as follows:&lt;BR /&gt;
0: Fatal error in PMPI_Reduce: Invalid MPI_Op, error stack:&lt;BR /&gt;0: PMPI_Reduce(1198)...........: MPI_Reduce(sbuf=0x7fbfffe810, rbuf=0x7fbfffe800, count=1, dtype=0x4c001013, MPI_MAXLOC, root=0, comm=0xc4000008) failed&lt;BR /&gt;0: MPIR_MAXLOC_check_dtype(151): MPI_Op MPI_MAXLOC operation not defined for this datatype &lt;BR /&gt;2: Fatal error in PMPI_Reduce: Invalid MPI_Op, error stack:&lt;BR /&gt;2: PMPI_Reduce(1198)...........: MPI_Reduce(sbuf=0x7fbfffe810, rbuf=0x7fbfffe800, count=1, dtype=0x4c001013, MPI_MAXLOC, root=0, comm=0xc4000003) failed&lt;BR /&gt;2: MPIR_MAXLOC_check_dtype(151): MPI_Op MPI_MAXLOC operation not defined for this datatype &lt;BR /&gt;rank 2 in job 56 node11_55838 caused collective abort of all ranks&lt;BR /&gt;exit status of rank 2: return code 1 &lt;BR /&gt;rank 0 in job 56 node11_55838 caused collective abort of all ranks&lt;BR /&gt;exit status of rank 0: return code 1 &lt;BR /&gt;&lt;BR /&gt;
Could you give some advice about this error (where should I look)?&lt;BR /&gt;&lt;BR /&gt;The code is like this:&lt;BR /&gt;&lt;BR /&gt;
#include &lt;mpi.h&gt;&lt;BR /&gt;#include &lt;stdlib.h&gt;&lt;BR /&gt;#include &lt;stdio.h&gt;&lt;BR /&gt;#include &lt;math.h&gt;&lt;BR /&gt;#include &lt;mkl.h&gt;&lt;BR /&gt;#include &lt;mkl_scalapack.h&gt;&lt;BR /&gt;&lt;BR /&gt;#include &lt;memory.h&gt;&lt;BR /&gt;#include &lt;time.h&gt;&lt;BR /&gt;&lt;BR /&gt;#define NRANSI&lt;BR /&gt;#include "nrutil.h"&lt;BR /&gt;&lt;BR /&gt;
#define MXLLDA 1320&lt;BR /&gt;#define MXLLDB 1320&lt;BR /&gt;#define MXLOCR 1320&lt;BR /&gt;#define MXLOCC 1320&lt;BR /&gt;#define DLEN_ 9&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;
void Cblacs_pinfo(int *, int *);&lt;BR /&gt;void Cblacs_exit(int);&lt;BR /&gt;void Cblacs_get(int, int, int *);&lt;BR /&gt;void Cblacs_gridinit(int *, char *, int, int);&lt;BR /&gt;void Cblacs_gridinfo(int, int *, int *, int *, int *);&lt;BR /&gt;void Cblacs_gridexit(int);&lt;BR /&gt;int numroc_(int *, int *, int *, int *, int *);&lt;BR /&gt;void descinit_(int *, int *, int *, int *, int *, int *, int *, int *, int *, int *);&lt;BR /&gt;&lt;BR /&gt;
FILE *inp;&lt;BR /&gt;int ictxt, locr, locc, *ipvt;&lt;BR /&gt;int izero=0, ione=1, np;&lt;BR /&gt;int ProcNo, ProcID, nprow, npcol, MB_, NB_, desca[DLEN_], descb[DLEN_];&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;
int main(int argc, char* argv[])&lt;BR /&gt;{&lt;BR /&gt;int i, j, maxa, maxb, info;&lt;BR /&gt;int myrow, mycol, iam, nnodes;&lt;BR /&gt;double **Aij, tmp;&lt;BR /&gt;int *indx;&lt;BR /&gt;char fname[100];&lt;BR /&gt;&lt;BR /&gt;
/* setup MPI stuff */&lt;BR /&gt;MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;BR /&gt;MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;ProcNo);&lt;BR /&gt;MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;ProcID);&lt;BR /&gt;&lt;BR /&gt;
// processor info&lt;BR /&gt;nprow = 2;&lt;BR /&gt;npcol = 2;&lt;BR /&gt;MB_ = 4;&lt;BR /&gt;NB_ = 4;&lt;BR /&gt;&lt;BR /&gt;maxa = MXLLDA;&lt;BR /&gt;maxb = MXLLDB;&lt;BR /&gt;&lt;BR /&gt;
MPI_Comm my_row_comm, my_col_comm;&lt;BR /&gt;MPI_Status stts;&lt;BR /&gt;&lt;BR /&gt;Cblacs_pinfo(&amp;amp;iam, &amp;amp;nnodes);&lt;BR /&gt;&lt;BR /&gt;np = 10;&lt;BR /&gt;&lt;BR /&gt;
Cblacs_get(0, 0, &amp;amp;ictxt);&lt;BR /&gt;Cblacs_gridinit(&amp;amp;ictxt, "R", nprow, npcol);&lt;BR /&gt;Cblacs_gridinfo(ictxt, &amp;amp;nprow, &amp;amp;npcol, &amp;amp;myrow, &amp;amp;mycol);&lt;BR /&gt;&lt;BR /&gt;
locr = numroc_(&amp;amp;np, &amp;amp;MB_, &amp;amp;myrow, &amp;amp;izero, &amp;amp;nprow);&lt;BR /&gt;locc = numroc_(&amp;amp;np, &amp;amp;NB_, &amp;amp;mycol, &amp;amp;izero, &amp;amp;npcol);&lt;BR /&gt;&lt;BR /&gt;
descinit_(desca, &amp;amp;np, &amp;amp;np, &amp;amp;MB_, &amp;amp;NB_, &amp;amp;izero, &amp;amp;izero,&lt;BR /&gt;&amp;amp;ictxt, &amp;amp;maxa, &amp;amp;info);&lt;BR /&gt;descinit_(descb, &amp;amp;np, &amp;amp;ione, &amp;amp;NB_, &amp;amp;ione, &amp;amp;izero, &amp;amp;izero,&lt;BR /&gt;&amp;amp;ictxt, &amp;amp;maxb, &amp;amp;info);&lt;BR /&gt;&lt;BR /&gt;
Aij = dmatrix(0,MXLOCC-1,0,MXLLDA-1);&lt;BR /&gt;ipvt = ivector(0,MXLOCR-1);&lt;BR /&gt;&lt;BR /&gt;sprintf(fname,"Aij%d.dat",ProcID);&lt;BR /&gt;inp = fopen(fname,"r");&lt;BR /&gt;&lt;BR /&gt;
for (i=0; i&lt;MXLOCR; i++) ipvt[i] = 0;&lt;BR /&gt;for (i=0; i&lt;locr; i++){&lt;BR /&gt;for (j=0; j&lt;locc; j++){&lt;BR /&gt;fscanf(inp,"%lf",&amp;amp;tmp);&lt;BR /&gt;Aij[j][i] = tmp;&lt;BR /&gt;}&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;
pdgetrf_(&amp;amp;np, &amp;amp;np, &amp;amp;Aij[0][0], &amp;amp;ione, &amp;amp;ione, desca, &amp;amp;ipvt[0], &amp;amp;info);&lt;BR /&gt;&lt;BR /&gt;
free_dmatrix(Aij,0,MXLOCC-1,0,MXLLDA-1);&lt;BR /&gt;free_ivector(ipvt,0,MXLOCR-1);&lt;BR /&gt;&lt;BR /&gt;
if (ProcID == 0) printf(" calculation END\n");&lt;BR /&gt;MPI_Barrier(MPI_COMM_WORLD);&lt;BR /&gt;&lt;BR /&gt;Cblacs_gridexit(ictxt);&lt;BR /&gt;Cblacs_exit(0);&lt;BR /&gt;&lt;BR /&gt;return 0;&lt;BR /&gt;}&lt;BR /&gt;&lt;BR /&gt;Thank you in advance.</description>
      <pubDate>Thu, 03 Dec 2009 05:34:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870977#M8472</guid>
      <dc:creator>phaser75</dc:creator>
      <dc:date>2009-12-03T05:34:19Z</dc:date>
    </item>
    <item>
      <title>Re: MKL ScaLAPACK pdgetrf_</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870978#M8473</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/444537"&gt;phaser75&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt;&lt;BR /&gt;0: MPIR_MAXLOC_check_dtype(151): MPI_Op MPI_MAXLOC operation not defined for this datatype &lt;BR /&gt;&lt;BR /&gt;#include &lt;MPI.H&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/MPI.H&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
It looks like headers from incompatible MPI implementations are involved in your build. For example, you built some .o files with one version of MPI and some with another, or you linked a library (including MKL) that is intended for a different MPI implementation than your &lt;mpi.h&gt;. You must use a consistent MPI throughout the build and at run time.&lt;BR /&gt;For example, OSU MVAPICH1 uses a different encoding of MPI data types than Intel MPI.&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Dec 2009 13:56:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870978#M8473</guid>
      <dc:creator>TimP</dc:creator>
      <dc:date>2009-12-03T13:56:53Z</dc:date>
    </item>
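    <!--
      A quick way to sanity-check the MPI consistency issue described in the reply above
      (a minimal sketch, not taken from the original thread): the error stack shows an
      MPI_Reduce with MPI_MAXLOC failing once pdgetrf_ is called, and MPI_MAXLOC is only
      defined for pair datatypes such as MPI_DOUBLE_INT. The stand-alone test below performs
      that same kind of reduction using nothing but plain MPI. Build and run it with exactly
      the same compiler wrapper, MPI headers, and mpiexec used for the ScaLAPACK program; if
      it fails with the same "Invalid MPI_Op" error, the MPI mix-up is in the toolchain itself,
      and if it passes, the mismatch is most likely in the linked libraries (for MKL, the BLACS
      library chosen at link time must match the MPI implementation in use).

      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          /* Pair type required by MPI_MAXLOC: a value plus the rank that owns it. */
          struct { double val; int rank; } in, out;
          int rank;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          in.val  = (double)rank;   /* arbitrary per-rank value */
          in.rank = rank;

          /* Same operation that fails in the error stack above: a MAXLOC reduction. */
          MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

          if (rank == 0)
              printf("max value %f found on rank %d\n", out.val, out.rank);

          MPI_Finalize();
          return 0;
      }
    -->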
    <item>
      <title>Re: MKL ScaLAPACK pdgetrf_</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870979#M8474</link>
      <description>&lt;DIV style="margin:0px;"&gt;
&lt;DIV id="quote_reply" style="width: 100%; margin-top: 5px;"&gt;
&lt;DIV style="margin-left:2px;margin-right:2px;"&gt;Quoting - &lt;A href="https://community.intel.com/en-us/profile/367365"&gt;tim18&lt;/A&gt;&lt;/DIV&gt;
&lt;DIV style="background-color:#E5E5E5; padding:5px;border: 1px; border-style: inset;margin-left:2px;margin-right:2px;"&gt;&lt;EM&gt;
&lt;DIV style="margin:0px;"&gt;&lt;/DIV&gt;
It looks like headers from incompatible MPI implementations are involved in your build. For example, you built some .o files with one version of MPI and some with another, or you linked a library (including MKL) that is intended for a different MPI implementation than your &lt;mpi.h&gt;. You must use a consistent MPI throughout the build and at run time.&lt;BR /&gt;For example, OSU MVAPICH1 uses a different encoding of MPI data types than Intel MPI.&lt;BR /&gt;&lt;/EM&gt;&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
Thanks a lot.
&lt;DIV&gt;&lt;SPAN style="font-family: Verdana, Arial, Helvetica, sans-serif;"&gt;I'll check it out.&lt;/SPAN&gt;&lt;/DIV&gt;</description>
      <pubDate>Sat, 05 Dec 2009 01:12:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-ScaLAPACK-pdgetrf/m-p/870979#M8474</guid>
      <dc:creator>phaser75</dc:creator>
      <dc:date>2009-12-05T01:12:24Z</dc:date>
    </item>
  </channel>
</rss>

