<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>HPL on Xeon and Xeon PHI in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HPL-on-Xeon-and-Xeon-PHI/m-p/1168296#M28372</link>
    <description>Hello,

I would like to run Linpack on Broadwell and Knights Landing Xeons at the same time. It runs on each architecture separately, but it fails with the following message if I try to use both of them together:

- The matrix A is randomly generated for each test.
- The following scaled residual check will be computed:
      ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
- The relative machine precision (eps) is taken to be               1.110223e-16
- Computational tests pass if scaled residuals are less than                16.0

Fatal error in MPI_Sendrecv: Message truncated, error stack:
MPI_Sendrecv(259)............: MPI_Sendrecv(sbuf=0x7f80ac848608, scount=18432, MPI_DOUBLE, dest=1, stag=10001, rbuf=0x7f80c7800000, rcount=8051, MPI_DOUBLE, src=1, rtag=10001, comm=0x84000002, status=0x7ffc9c14b3d0) failed
MPID_nem_tmi_handle_rreq(688): Message from rank 1 and tag 10001 truncated; 64408 bytes received but buffer size is 64408 (75032 64408 61)

When I compile HPL myself, it works, but it is rather slow. Is there any chance of getting the Intel-optimized version of HPL running?

Best regards,
Holger</description>
    <pubDate>Tue, 05 Dec 2017 16:27:25 GMT</pubDate>
    <dc:creator>Holger_A_</dc:creator>
    <dc:date>2017-12-05T16:27:25Z</dc:date>
    <item>
      <title>HPL on Xeon and Xeon PHI</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HPL-on-Xeon-and-Xeon-PHI/m-p/1168296#M28372</link>
      <description>Hello,

I would like to run Linpack on Broadwell and Knights Landing Xeons at the same time. It runs on each architecture separately, but it fails with the following message if I try to use both of them together:

- The matrix A is randomly generated for each test.
- The following scaled residual check will be computed:
      ||Ax-b||_oo / ( eps * ( || x ||_oo * || A ||_oo + || b ||_oo ) * N )
- The relative machine precision (eps) is taken to be               1.110223e-16
- Computational tests pass if scaled residuals are less than                16.0

Fatal error in MPI_Sendrecv: Message truncated, error stack:
MPI_Sendrecv(259)............: MPI_Sendrecv(sbuf=0x7f80ac848608, scount=18432, MPI_DOUBLE, dest=1, stag=10001, rbuf=0x7f80c7800000, rcount=8051, MPI_DOUBLE, src=1, rtag=10001, comm=0x84000002, status=0x7ffc9c14b3d0) failed
MPID_nem_tmi_handle_rreq(688): Message from rank 1 and tag 10001 truncated; 64408 bytes received but buffer size is 64408 (75032 64408 61)

When I compile HPL myself, it works, but it is rather slow. Is there any chance of getting the Intel-optimized version of HPL running?

Best regards,
Holger</description>
      <pubDate>Tue, 05 Dec 2017 16:27:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HPL-on-Xeon-and-Xeon-PHI/m-p/1168296#M28372</guid>
      <dc:creator>Holger_A_</dc:creator>
      <dc:date>2017-12-05T16:27:25Z</dc:date>
    </item>
    <item>
      <title>Hello,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HPL-on-Xeon-and-Xeon-PHI/m-p/1168297#M28373</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;Please use the same architecture for vertical nodes. MPI_Sendrecv failed because each architecture assumes a different blocking size.&lt;/P&gt;
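
&lt;P&gt;A quick check of the numbers in the log is consistent with that: the reported receive buffer size (64408 bytes) is exactly rcount doubles, while the sending rank posted a much larger count. A minimal sketch of the arithmetic (counts taken from the log above; 8-byte doubles assumed):&lt;/P&gt;

```shell
# From the error stack: scount=18432 doubles posted by the sender,
# rcount=8051 doubles expected by the receiver, "buffer size is 64408".
echo "sender posts    $((18432 * 8)) bytes"
echo "receiver allows $((8051 * 8)) bytes"   # 64408 -- matches the log
```

&lt;P&gt;When the two sides derive their panel sizes from different blocking parameters, the receiver's buffer is too small for the incoming panel and MPI aborts with a truncation error.&lt;/P&gt;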

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Kazushige Goto&lt;/P&gt;</description>
      <pubDate>Tue, 05 Dec 2017 17:07:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/HPL-on-Xeon-and-Xeon-PHI/m-p/1168297#M28373</guid>
      <dc:creator>Kazushige_G_Intel</dc:creator>
      <dc:date>2017-12-05T17:07:39Z</dc:date>
    </item>
  </channel>
</rss>

