<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic actually, you may try to in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159127#M27800</link>
    <description>&lt;P&gt;Actually, you may try to request this version from this page: https://software.intel.com/en-us/performance-libraries, or submit a ticket from the Intel Online Service Center if you have a valid license.&lt;/P&gt;</description>
    <pubDate>Wed, 05 Jun 2019 00:44:54 GMT</pubDate>
    <dc:creator>Gennady_F_Intel</dc:creator>
    <dc:date>2019-06-05T00:44:54Z</dc:date>
    <item>
      <title>2017 version of MKL?</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159126#M27799</link>
      <description>&lt;P&gt;Is MKL 2017 still available?&lt;/P&gt;&lt;P&gt;I have a Phi 3120A and would like to try the automatic offload feature, which was removed in the MKL 2018 release.&lt;/P&gt;</description>
      <pubDate>Tue, 04 Jun 2019 02:19:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159126#M27799</guid>
      <dc:creator>Rogahn__Dan</dc:creator>
      <dc:date>2019-06-04T02:19:54Z</dc:date>
    </item>
    <item>
      <title>actually, you may try to</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159127#M27800</link>
      <description>&lt;P&gt;Actually, you may try to request this version from this page: https://software.intel.com/en-us/performance-libraries, or submit a ticket from the Intel Online Service Center if you have a valid license.&lt;/P&gt;</description>
      <pubDate>Wed, 05 Jun 2019 00:44:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159127#M27800</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2019-06-05T00:44:54Z</dc:date>
    </item>
    <item>
      <title>I don't have paid support,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159128#M27801</link>
      <description>&lt;P&gt;I see MKL 2017 is available in Conda, but there are complications.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;I don't have paid support, just the free license.&lt;/LI&gt;&lt;LI&gt;At this point, I'm running on Windows, which no longer has an MKL 2017 installer.&lt;/LI&gt;&lt;LI&gt;I'm using the Intel DNN library, and it no longer has a 2017 version. So it may be using the installed MKL 2019 DLL?&lt;/LI&gt;&lt;LI&gt;I doubt I'd be able to build TensorFlow on a 1xx Phi (Bazel is difficult to build even on a regular system).&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I'm building a complex set of applications with limitations ranging from memory bandwidth, to compute (cores &amp;times; clock), to floating-point math (neural), to prove scale and limiting factors vs. cost, i.e., should we invest in one big CPU, a cluster, a GPU, or a mix?&lt;/P&gt;&lt;P&gt;I was pretty successful running Linux on the Phi, and learned a lot.&lt;BR /&gt;This was definitely off-label use for a 1xx Phi; this test is scaled down, but it still runs out of memory quickly.&lt;BR /&gt;About 47 threads were only just catching up to 1 current-gen core (or 4 threads on a 2-core Atom with only 2 GB RAM), due to the old in-order cores and slow clock.&amp;nbsp;[Around 32 threads, the app started needing virtual memory to complete; over 47, overall performance decreased.]&lt;/P&gt;&lt;P&gt;[htop screenshot: mid-run of an app on 57 threads]&lt;/P&gt;&lt;P&gt;I see this promising benchmark showing that a Phi 7250 can beat a dual 32-core monster, or a GTX 1080, at inference on AlexNet and GoogLeNet, but a 1080 is much more affordable and available than the others...&lt;BR /&gt;&lt;A href="https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture" target="_blank"&gt;https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture&lt;/A&gt;&lt;BR /&gt;&lt;A href="https://www.phoronix.com/scan.php?page=article&amp;amp;item=nvidia-rtx2080ti-tensorflow&amp;amp;num=3" target="_blank"&gt;https://www.phoronix.com/scan.php?page=article&amp;amp;item=nvidia-rtx2080ti-tensorflow&amp;amp;num=3&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Just disappointed I won't see my Phi running at full steam after all the work I put into it...&lt;/P&gt;</description>
      <pubDate>Sun, 30 Jun 2019 19:40:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/2017-version-of-MKL/m-p/1159128#M27801</guid>
      <dc:creator>Rogahn__Dan</dc:creator>
      <dc:date>2019-06-30T19:40:44Z</dc:date>
    </item>
  </channel>
</rss>

