<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic KNL cache performance using SIMD intrinsic in Software Archive</title>
    <link>https://community.intel.com/t5/Software-Archive/KNL-cache-performance-using-SIMD-intrinsic/m-p/1134285#M77954</link>
    <description>Question: why does an AVX-512 intrinsic version of a 16 * 16 GEMM show several times more L1 cache misses on KNL than an unvectorized scalar version, when both should have the same memory access pattern? Full text in the item below.</description>
    <pubDate>Sat, 24 Jun 2017 00:33:44 GMT</pubDate>
    <dc:creator>Zhen</dc:creator>
    <dc:date>2017-06-24T00:33:44Z</dc:date>
    <item>
      <title>KNL cache performance using SIMD intrinsic</title>
      <link>https://community.intel.com/t5/Software-Archive/KNL-cache-performance-using-SIMD-intrinsic/m-p/1134285#M77954</link>
      <description>&lt;P&gt;Hi&lt;BR /&gt;
	I am curious about the cache behavior of KNL when using SIMD intrinsics. Here is what I observed.&lt;BR /&gt;
	I wrote a matrix-matrix multiplication program in two versions. The first implements GEMM in the straightforward way, without intrinsics; the second uses intrinsics. The matrices are small, say 16 * 16. Profiling both versions with VTune, I find that the first version has very few L1 cache misses, while the second has several times more.&lt;BR /&gt;
	The first version is compiled with -O1, so it is not vectorized. The second version is fully vectorized, since it uses AVX-512 intrinsics. As for runtime, the first version is, as expected, much slower.&lt;BR /&gt;
	The question is: why are the L1 cache miss counts so different? The two versions should have the same memory access pattern, and all the data (three 16 * 16 float matrices) should fit in the L1 cache, so there should be only compulsory misses.&lt;/P&gt;

&lt;P&gt;Could anyone help to explain why?&lt;/P&gt;
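&lt;P&gt;For reference, the two versions look roughly like this (a simplified sketch for illustration, not the exact code; it assumes row-major 16 * 16 float matrices in 64-byte-aligned buffers):&lt;/P&gt;

&lt;PRE&gt;
#include &amp;lt;immintrin.h&amp;gt;

/* Version 1: plain triple loop; at -O1 the compiler leaves it scalar. */
void sgemm16_scalar(const float *a, const float *b, float *c)
{
    for (int i = 0; i &amp;lt; 16; i++)
        for (int j = 0; j &amp;lt; 16; j++) {
            float sum = 0.0f;
            for (int k = 0; k &amp;lt; 16; k++)
                sum += a[i * 16 + k] * b[k * 16 + j];
            c[i * 16 + j] = sum;
        }
}

/* Version 2: AVX-512 intrinsics. One __m512 holds a full 16-float
   row, so row i of C stays in a single accumulator register.
   _mm512_load_ps/_mm512_store_ps require 64-byte alignment; use
   the loadu/storeu variants for unaligned data. */
void sgemm16_avx512(const float *a, const float *b, float *c)
{
    for (int i = 0; i &amp;lt; 16; i++) {
        __m512 acc = _mm512_setzero_ps();
        for (int k = 0; k &amp;lt; 16; k++) {
            __m512 aik = _mm512_set1_ps(a[i * 16 + k]); /* broadcast a[i][k] */
            acc = _mm512_fmadd_ps(aik, _mm512_load_ps(b + k * 16), acc);
        }
        _mm512_store_ps(c + i * 16, acc);
    }
}
&lt;/PRE&gt;</description>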
      <pubDate>Sat, 24 Jun 2017 00:33:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/KNL-cache-performance-using-SIMD-intrinsic/m-p/1134285#M77954</guid>
      <dc:creator>Zhen</dc:creator>
      <dc:date>2017-06-24T00:33:44Z</dc:date>
    </item>
  </channel>
</rss>

