<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic in Intel® oneAPI Math Kernel Library: Performance degradation when combining MPI with MKL</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141278#M26318</link>
    <description>&lt;P&gt;It probably makes sense to try the existing p[s,d,c,z]gemm routines. Have you tried them?&lt;/P&gt;</description>
    <pubDate>Thu, 19 Mar 2020 11:05:49 GMT</pubDate>
    <dc:creator>Gennady_F_Intel</dc:creator>
    <dc:date>2020-03-19T11:05:49Z</dc:date>
    <item>
      <title>Performance degradation when combining MPI with MKL</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141276#M26316</link>
      <description>&lt;P&gt;I am new to the field of MPI. I wrote my program using the Intel Math Kernel Library, and I want to compute a matrix-matrix multiplication in blocks: I split the large matrix X into many small matrices along the columns, as shown below. Because my matrix is large, each step computes only an (N, M) x (M, N) product, where I can set M manually.&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;XX^Ty = X_1X_1^Ty + X_2X_2^Ty + ... + X_nX_n^Ty&lt;/PRE&gt;

&lt;P&gt;I first set the total number of threads to 16 and M to 1024, then run my program directly as follows. Checking the CPU state, I find that the CPU usage is 1600%, which is expected.&lt;/P&gt;

&lt;PRE class="brush:bash; class-name:dark;"&gt;./MMNET_MPI --block 1024 --numThreads 16&lt;/PRE&gt;

&lt;P&gt;However, when I run my program under MPI as follows, the CPU usage is only 200-300%. Strangely, if I change the block size to 64, I get a small improvement, to about 1200% CPU usage.&lt;/P&gt;

&lt;PRE class="brush:bash; class-name:dark;"&gt;mpirun -n 1 --bind-to none ./MMNET_MPI --block 1024 --numThreads 16&lt;/PRE&gt;

&lt;P&gt;I do not know what the problem is. It seems that&amp;nbsp;mpirun&amp;nbsp;applies some default setting that affects my program. The following is part of my matrix multiplication code. The directive `#pragma omp parallel for` extracts the small N-by-M matrix from the compressed format in parallel. After that I use&amp;nbsp;cblas_dgemv&amp;nbsp;to compute the matrix-vector products.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;PRE class="brush:cpp; class-name:dark;"&gt;void LMMCPU::multXXTTrace(double *out, const double *vec) const {

  double *snpBlock = ALIGN_ALLOCATE_DOUBLES(Npad * snpsPerBlock);
  double (*workTable)[4] = (double (*)[4]) ALIGN_ALLOCATE_DOUBLES(omp_get_max_threads() * 256 * sizeof(*workTable));

  // store the temp result
  double *temp1 = ALIGN_ALLOCATE_DOUBLES(snpsPerBlock);
  for (uint64 m0 = 0; m0 &amp;lt; M; m0 += snpsPerBlock) {
    uint64 snpsPerBLockCrop = std::min(M, m0 + snpsPerBlock) - m0;
#pragma omp parallel for
    for (uint64 mPlus = 0; mPlus &amp;lt; snpsPerBLockCrop; mPlus++) {
      uint64 m = m0 + mPlus;
      if (projMaskSnps[m])
        buildMaskedSnpCovCompVec(snpBlock + mPlus * Npad, m,
                                 workTable + (omp_get_thread_num() &amp;lt;&amp;lt; 8));
      else
        memset(snpBlock + mPlus * Npad, 0, Npad * sizeof(snpBlock[0]));
    }

    // compute A = X^T * vec
    MKL_INT row = Npad;
    MKL_INT col = snpsPerBLockCrop;
    double alpha = 1.0;
    MKL_INT lda = Npad;
    MKL_INT incx = 1;
    double beta = 0.0;
    MKL_INT incy = 1;
    cblas_dgemv(CblasColMajor, CblasTrans, row, col, alpha, snpBlock, lda,
                vec, incx, beta, temp1, incy);

    // compute out += X * A (beta1 = 1.0 accumulates across blocks)
    double beta1 = 1.0;
    cblas_dgemv(CblasColMajor, CblasNoTrans, row, col, alpha, snpBlock, lda,
                temp1, incx, beta1, out, incy);
  }
  ALIGN_FREE(snpBlock);
  ALIGN_FREE(workTable);
  ALIGN_FREE(temp1);
}&lt;/PRE&gt;

&lt;P&gt;Actually, I have checked that the following part can fully use the CPU resources. It seems that the problem lies with cblas_dgemv.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;PRE class="brush:cpp; class-name:dark;"&gt;#pragma omp parallel for
    for (uint64 mPlus = 0; mPlus &amp;lt; snpsPerBLockCrop; mPlus++) {
      uint64 m = m0 + mPlus;
      if (projMaskSnps[m])
        buildMaskedSnpCovCompVec(snpBlock + mPlus * Npad, m,
                                 workTable + (omp_get_thread_num() &amp;lt;&amp;lt; 8));
      else
        memset(snpBlock + mPlus * Npad, 0, Npad * sizeof(snpBlock[0]));
    }&lt;/PRE&gt;
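
A quick way to see what MKL is actually doing inside each gemv call is MKL's verbose mode (a sketch; MKL_VERBOSE is a standard MKL environment variable, and MMNET_MPI is my program above):

```shell
# Ask MKL to log every BLAS/LAPACK call it executes, with timing and thread count
export MKL_VERBOSE=1
mpirun -n 1 --bind-to none ./MMNET_MPI --block 1024 --numThreads 16
# Each call is then logged with a line containing, among other fields, "NThr:",
# which shows how many threads MKL really used for that DGEMV
```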

&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;My CPU information is as the following.&lt;/P&gt;

&lt;PRE class="brush:plain; class-name:dark;"&gt;Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              44
On-line CPU(s) list: 0-43
Thread(s) per core:  1
Core(s) per socket:  22
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
Stepping:            4
CPU MHz:             1252.786
CPU max MHz:         2101.0000
CPU min MHz:         1000.0000
BogoMIPS:            4200.00
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            30976K
NUMA node0 CPU(s):   0-21
NUMA node1 CPU(s):   22-43&lt;/PRE&gt;
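
Since the machine has two NUMA nodes, it may also be worth checking whether the MPI-launched process ends up confined to one node (a sketch; numactl is a standard Linux tool, not something from this thread):

```shell
# Show the NUMA policy and the set of CPUs the current process may run on
numactl --show

# Interleave memory across both nodes to rule out NUMA placement effects
mpirun -n 1 --bind-to none numactl --interleave=all \
  ./MMNET_MPI --block 1024 --numThreads 16
```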

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2020 14:55:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141276#M26316</guid>
      <dc:creator>zhang__Shunkang</dc:creator>
      <dc:date>2020-03-13T14:55:18Z</dc:date>
    </item>
    <item>
      <title>Hello,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141277#M26317</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;It's hard to say.&amp;nbsp;How do you compile and link your application? Which OpenMP are you using? Have you seen the same behavior with Intel MPI (I guess you're using OpenMPI)? Do you set affinity (e.g., via KMP_AFFINITY, for Intel OpenMP)?&amp;nbsp;&lt;/P&gt;&lt;P&gt;There are multiple things you can do to investigate. You can check the bindings via --report-bindings, and use --cpu-set to provide explicitly the set of cores to be used. You can do "export&amp;nbsp;MKL_VERBOSE=1" and see whether the output from the gemv calls shows anything weird. You can try to create a simple reproducer where only the calls to gemv exist. If the problem persists, you can try to replace gemv with some simple, scalable multi-threaded code (like adding two vectors) to check whether the issue comes from the way you set up your&amp;nbsp;run configuration.&lt;/P&gt;&lt;P&gt;Best,&lt;BR /&gt;Kirill&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2020 03:19:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141277#M26317</guid>
      <dc:creator>Kirill_V_Intel</dc:creator>
      <dc:date>2020-03-16T03:19:27Z</dc:date>
    </item>
    <item>
      <title>Probably makes sense to try</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141278#M26318</link>
      <description>&lt;P&gt;It probably makes sense to try the existing p[s,d,c,z]gemm routines. Have you tried them?&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2020 11:05:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Performance-degrade-by-combine-MPI-with-MKL/m-p/1141278#M26318</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2020-03-19T11:05:49Z</dc:date>
    </item>
  </channel>
</rss>

