<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic &gt;&gt;...While we are working on in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062098#M21728</link>
    <description>&amp;gt;&amp;gt;...While we are working on new functionality we published a series of articles demonstrating DNN optimizations with Caffe
&amp;gt;&amp;gt;framework and AlexNet topology...

I simply would like to note that when a &lt;STRONG&gt;Neural Network&lt;/STRONG&gt; is implemented in a &lt;STRONG&gt;Classic&lt;/STRONG&gt; way, only ~&lt;STRONG&gt;400&lt;/STRONG&gt; lines of C/C++ code are needed. This is what I have for a simple pattern-recognition software subsystem that is easily portable to &lt;STRONG&gt;Any&lt;/STRONG&gt; system ( Linux, Windows, Android, Embedded, MS-DOS, etc. ), has no dependencies on 3rd-party software, and is very small.

I have absolutely &lt;STRONG&gt;No&lt;/STRONG&gt; time for all these "monsters" that depend on &lt;STRONG&gt;Too&lt;/STRONG&gt; many 3rd-party libraries, like &lt;STRONG&gt;Caffe&lt;/STRONG&gt;, &lt;STRONG&gt;AlexNet&lt;/STRONG&gt;, etc...</description>
    <pubDate>Mon, 15 May 2017 19:12:00 GMT</pubDate>
    <dc:creator>SergeyKostrov</dc:creator>
    <dc:date>2017-05-15T19:12:00Z</dc:date>
    <item>
      <title>Deep Neural Network extensions for Intel MKL</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062093#M21723</link>
      <description>&lt;P&gt;&amp;nbsp; &amp;nbsp; Deep neural network (DNN) applications are growing in importance in various areas, including internet search engines, retail, and medical imaging. Intel recognizes the importance of these workloads and is developing software solutions to accelerate these applications on Intel Architecture; these solutions will become available in future versions of Intel® Math Kernel Library (Intel® MKL) and Intel® Data Analytics Acceleration Library (Intel® DAAL).&lt;/P&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em; line-height: 1.5;"&gt;While we are working on new functionality we published a series of articles demonstrating DNN optimizations with Caffe framework and AlexNet topology:&lt;/SPAN&gt;&lt;/P&gt;

&lt;UL&gt;
	&lt;LI&gt;&lt;SPAN style="font-size: 1em; line-height: 1.5;"&gt;&lt;A href="https://software.intel.com/en-us/articles/deep-neural-network-technical-preview-for-intel-math-kernel-library-intel-mkl"&gt;Deep Neural Network Technical Preview for Intel(R) Math Kernel Library&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="https://github.com/01org/mkl-dnn"&gt;&lt;SPAN style="color: rgb(102, 102, 102); font-family: -apple-system, BlinkMacSystemFont, &amp;quot;Segoe UI&amp;quot;, Helvetica, Arial, sans-serif, &amp;quot;Apple Color Emoji&amp;quot;, &amp;quot;Segoe UI Emoji&amp;quot;, &amp;quot;Segoe UI Symbol&amp;quot;; font-size: 16px;"&gt;Intel(R) Math Kernel Library for Deep Neural Networks (Intel(R) MKL-DNN)&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="https://github.com/intel/caffe"&gt;&lt;SPAN style="color: rgb(102, 102, 102); font-family: -apple-system, BlinkMacSystemFont, &amp;quot;Segoe UI&amp;quot;, Helvetica, Arial, sans-serif, &amp;quot;Apple Color Emoji&amp;quot;, &amp;quot;Segoe UI Emoji&amp;quot;, &amp;quot;Segoe UI Symbol&amp;quot;; font-size: 16px;"&gt;Intel Fork of BVLC/Caffe&lt;/SPAN&gt;&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em; line-height: 1.5;"&gt;Technical details on the optimizations we made in the technical previews are available in blog posts by Intel Labs’ Pradeep Dubey:&lt;/SPAN&gt;&lt;/P&gt;

&lt;UL&gt;
	&lt;LI&gt;&lt;A href="https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/10/14/myth-busted-general-purpose-cpus-can-t-tackle-deep-neural-network-training"&gt;Myth Busted: General Purpose CPUs Can’t Tackle Deep Neural Network Training&lt;/A&gt;&lt;/LI&gt;
	&lt;LI&gt;&lt;A href="https://communities.intel.com/community/itpeernetwork/datastack/blog/2015/11/12/myth-busted-general-purpose-cpus-can-t-tackle-deep-neural-network-training-part-2"&gt;Myth Busted: General Purpose CPUs Can’t Tackle Deep Neural Network Training – Part 2&lt;/A&gt;&lt;/LI&gt;
&lt;/UL&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em; line-height: 1.5;"&gt;You can also take a sneak peek at the programming model and functionality of the Intel MKL DNN extensions using the &lt;/SPAN&gt;&lt;A href="https://software.intel.com/en-us/articles/deep-neural-network-technical-preview-for-intel-math-kernel-library-intel-mkl" style="font-size: 1em; line-height: 1.5;"&gt;Deep Neural Network Technical Preview for Intel® Math Kernel Library (Intel® MKL)&lt;/A&gt;&lt;SPAN style="font-size: 1em; line-height: 1.5;"&gt;. The feedback we receive on this preview is essential to shaping future Intel MKL products.&lt;/SPAN&gt;&lt;/P&gt;

</description>
      <pubDate>Tue, 22 Dec 2015 18:15:32 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062093#M21723</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2015-12-22T18:15:32Z</dc:date>
    </item>
    <item>
      <title>Hi John,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062095#M21725</link>
      <description>&lt;P&gt;Hi John,&lt;/P&gt;

&lt;P&gt;You might have seen that this was officially released this month:&amp;nbsp;https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/684829&lt;/P&gt;

&lt;P&gt;You can find the code on GitHub (&lt;SPAN style="font-size: 1em;"&gt;&lt;A href="https://github.com/01org/mkl-dnn" target="_blank"&gt;https://github.com/01org/mkl-dnn&lt;/A&gt;) and more information on the website&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://01.org/mkl-dnn" target="_blank"&gt;https://01.org/mkl-dnn&lt;/A&gt;.&lt;/P&gt;

&lt;P&gt;Best&lt;/P&gt;</description>
      <pubDate>Tue, 27 Sep 2016 17:52:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062095#M21725</guid>
      <dc:creator>Janko__Bayncore_</dc:creator>
      <dc:date>2016-09-27T17:52:14Z</dc:date>
    </item>
    <item>
      <title>Good evening,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062096#M21726</link>
      <description>&lt;P&gt;Good evening,&lt;/P&gt;

&lt;P&gt;After a few hours of trying to fix the CMake errors, I have just discovered that it is currently not possible to build MKL-DNN for 64-bit Windows.&lt;/P&gt;

&lt;P&gt;That should be clearly specified in the release notes.&lt;/P&gt;

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Jean&lt;/P&gt;</description>
      <pubDate>Fri, 24 Mar 2017 00:18:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062096#M21726</guid>
      <dc:creator>jean-vezina</dc:creator>
      <dc:date>2017-03-24T00:18:46Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062097#M21727</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em;"&gt;How can we add new layers to a network (CNN) and see if works with MKL?&lt;/SPAN&gt;&lt;/P&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em;"&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;

&lt;P&gt;&lt;SPAN style="font-size: 1em;"&gt;Tejaswini&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 10 May 2017 20:30:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062097#M21727</guid>
      <dc:creator>Tejaswini_S_Intel</dc:creator>
      <dc:date>2017-05-10T20:30:52Z</dc:date>
    </item>
    <item>
      <title>&gt;&gt;...While we are working on</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062098#M21728</link>
      <description>&amp;gt;&amp;gt;...While we are working on new functionality we published a series of articles demonstrating DNN optimizations with Caffe
&amp;gt;&amp;gt;framework and AlexNet topology...

I simply would like to note that when a &lt;STRONG&gt;Neural Network&lt;/STRONG&gt; is implemented in a &lt;STRONG&gt;Classic&lt;/STRONG&gt; way, only ~&lt;STRONG&gt;400&lt;/STRONG&gt; lines of C/C++ code are needed. This is what I have for a simple pattern-recognition software subsystem that is easily portable to &lt;STRONG&gt;Any&lt;/STRONG&gt; system ( Linux, Windows, Android, Embedded, MS-DOS, etc. ), has no dependencies on 3rd-party software, and is very small.

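As a rough illustration of that classic approach, here is a minimal one-hidden-layer network trained with plain backpropagation, sketched in Python for brevity (the same structure carries over directly to C/C++). The layer sizes, learning rate, and XOR-style data below are made-up toy values for illustration, not the actual subsystem described above:

```python
# A deliberately tiny "classic" feedforward network with one hidden layer,
# trained by plain backpropagation. No 3rd-party libraries are used; the
# sizes and the XOR-style toy data are made up for illustration.
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 2, 4, 1
LR = 0.5

# Small random initial weights and biases.
w1 = [[random.uniform(-1.0, 1.0) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(N_HID)]
w2 = [[random.uniform(-1.0, 1.0) for _ in range(N_HID)] for _ in range(N_OUT)]
b2 = [random.uniform(-1.0, 1.0) for _ in range(N_OUT)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Hidden activations, then output activations.
    h = [sigmoid(b1[j] + sum(w1[j][i] * x[i] for i in range(N_IN)))
         for j in range(N_HID)]
    o = [sigmoid(b2[k] + sum(w2[k][j] * h[j] for j in range(N_HID)))
         for k in range(N_OUT)]
    return h, o

def train_step(x, t):
    h, o = forward(x)
    # Output-layer deltas, then hidden-layer deltas via the chain rule.
    do = [(o[k] - t[k]) * o[k] * (1.0 - o[k]) for k in range(N_OUT)]
    dh = [h[j] * (1.0 - h[j]) * sum(do[k] * w2[k][j] for k in range(N_OUT))
          for j in range(N_HID)]
    for k in range(N_OUT):
        for j in range(N_HID):
            w2[k][j] -= LR * do[k] * h[j]
        b2[k] -= LR * do[k]
    for j in range(N_HID):
        for i in range(N_IN):
            w1[j][i] -= LR * dh[j] * x[i]
        b1[j] -= LR * dh[j]

data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]

def mean_error():
    return sum((forward(x)[1][0] - t[0]) ** 2 for x, t in data) / len(data)

err0 = mean_error()
for _ in range(5000):
    for x, t in data:
        train_step(x, t)
err1 = mean_error()
print("mean squared error before training:", err0, "after:", err1)
```

Even with comments and the training loop, the whole thing is well under 100 lines; a C/C++ version with fixed-size arrays stays in the same ballpark, which is consistent with the ~400-line figure once file I/O and the surrounding pattern-recognition plumbing are added.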
I have absolutely &lt;STRONG&gt;No&lt;/STRONG&gt; time for all these "monsters" that depend on &lt;STRONG&gt;Too&lt;/STRONG&gt; many 3rd-party libraries, like &lt;STRONG&gt;Caffe&lt;/STRONG&gt;, &lt;STRONG&gt;AlexNet&lt;/STRONG&gt;, etc...</description>
      <pubDate>Mon, 15 May 2017 19:12:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062098#M21728</guid>
      <dc:creator>SergeyKostrov</dc:creator>
      <dc:date>2017-05-15T19:12:00Z</dc:date>
    </item>
    <item>
      <title>How do we actually get hold</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062094#M21724</link>
      <description>&lt;P&gt;How do we actually get hold of this program? I have a use for it.&lt;/P&gt;</description>
      <pubDate>Tue, 24 May 2016 19:46:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Deep-Neural-Network-extensions-for-Intel-MKL/m-p/1062094#M21724</guid>
      <dc:creator>JohnNichols</dc:creator>
      <dc:date>2016-05-24T19:46:51Z</dc:date>
    </item>
  </channel>
</rss>

