<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic "Hi," in Intel® Distribution for Python*</title>
    <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146371#M1041</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;It seems that this version does not use Xeon Phi Knights Corner cards plugged into the host. I ran some tests, and it only uses the CPU, never the Xeon Phi cards. Is there any way to use the Xeon Phi cards via MKL Automatic Offload, as in Fortran/C/C++?&lt;/P&gt;

&lt;P&gt;&amp;nbsp;Thanks,&lt;/P&gt;

&lt;P&gt;&amp;nbsp; Minh&lt;/P&gt;</description>
    <pubDate>Fri, 16 Mar 2018 13:11:14 GMT</pubDate>
    <dc:creator>Minh_H_</dc:creator>
    <dc:date>2018-03-16T13:11:14Z</dc:date>
    <item>
      <title>Intel® Optimization for TensorFlow* Wheel Now Available</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146364#M1034</link>
      <description>&lt;P&gt;Intel® Optimization for&amp;nbsp;TensorFlow*&amp;nbsp;is now available for Linux* as a wheel installable through pip.&lt;/P&gt;

&lt;P&gt;For more information on the optimizations as well as performance data, see this&amp;nbsp;&lt;A href="https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture"&gt;blog post&lt;/A&gt;.&lt;/P&gt;

&lt;P&gt;To install the wheel into an existing Python* installation, simply run&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;# Python 2.7
pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl&lt;/A&gt;

# Python 3.5
pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl&lt;/A&gt;

# Python 3.6
pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp36-cp36m-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp36-cp36m-linux_x86_64.whl&lt;/A&gt;&lt;/PRE&gt;

&lt;P&gt;To create a conda environment with Intel TensorFlow that also takes advantage of the Intel Distribution for Python’s optimized numpy, run&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;conda create -n tf -c intel python=&amp;lt;2|3&amp;gt; pip numpy
. activate tf
# Python 3.5
pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp35-cp35m-linux_x86_64.whl&lt;/A&gt;
# Python 2.7
pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl&lt;/A&gt;&lt;/PRE&gt;
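
&lt;P&gt;Optionally, to confirm the environment picked up the MKL-backed numpy, one quick check (output varies by build; an MKL build lists mkl libraries) is:&lt;/P&gt;

```shell
# Optional sanity check: print numpy's build configuration.
python -c "import numpy; numpy.__config__.show()"
```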

&lt;P&gt;Chris&lt;/P&gt;</description>
      <pubDate>Wed, 12 Jul 2017 19:50:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146364#M1034</guid>
      <dc:creator>Christophe_H_Intel2</dc:creator>
      <dc:date>2017-07-12T19:50:00Z</dc:date>
    </item>
    <item>
      <title> </title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146365#M1035</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;Thank you so much for this Chris!&lt;/P&gt;

&lt;P&gt;Is there any plan to release optimized PyTorch as well? &amp;nbsp;I am switching from TensorFlow to PyTorch and I am aware of many other researchers who are switching as well.&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 25 Aug 2017 03:20:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146365#M1035</guid>
      <dc:creator>Randy_M_1</dc:creator>
      <dc:date>2017-08-25T03:20:25Z</dc:date>
    </item>
    <item>
      <title>Whatever installation method</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146366#M1036</link>
      <description>&lt;P&gt;No SSE4/AVX instructions.&lt;/P&gt;

&lt;P&gt;Whatever installation method I choose, I keep getting the message "The TensorFlow library wasn't compiled to use SSE4.1/SSE4.2/AVX instructions".&lt;/P&gt;

&lt;P&gt;Methods I tried:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ conda create -n intel_tf -c intel --override-channels python=3 tensorflow
$ conda create -n intel_tf -c intel --override-channels python=2.7 tensorflow
$ conda create -n tf -c intel python=2 pip numpy &amp;amp;&amp;amp; . activate tf &amp;amp;&amp;amp; pip install &lt;A href="https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl" target="_blank"&gt;https://anaconda.org/intel/tensorflow/1.2.1/download/tensorflow-1.2.1-cp27-cp27mu-linux_x86_64.whl&lt;/A&gt;
&lt;/PRE&gt;

&lt;P&gt;The code to test it simply was:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ python -c 'import tensorflow as tf; tf.Session()'&lt;/PRE&gt;

&lt;P&gt;All of the commands above were tried both with and without setting PYTHONNOUSERSITE=1.&lt;BR /&gt;
	To be explicit: none of the listed commands raised an error; only the Session instantiation printed the reported warnings.&lt;/P&gt;
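
&lt;P&gt;For reference, a small sketch (my own, not from Intel) to check which of the instruction sets named in the warning the CPU actually reports; it parses the flags line of /proc/cpuinfo, and the sample string below is illustrative:&lt;/P&gt;

```python
# Sketch: list which of the instruction sets named in the TF warning
# the CPU reports. Pass in the contents of /proc/cpuinfo (Linux).
def warned_flags(cpuinfo_text):
    wanted = ("sse4_1", "sse4_2", "avx")
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return [f for f in wanted if f in present]
    return []

sample = "flags\t\t: fpu sse sse2 ssse3 sse4_1 sse4_2 avx f16c"
print(warned_flags(sample))  # prints ['sse4_1', 'sse4_2', 'avx']
```

&lt;P&gt;On a live system, call it as warned_flags(open("/proc/cpuinfo").read()).&lt;/P&gt;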

&lt;P&gt;My configuration:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 62
Model name:            Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
Stepping:              4
CPU MHz:               1678.222
CPU max MHz:           3900,0000
CPU min MHz:           1200,0000
BogoMIPS:              6982.94
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              12288K
NUMA node0 CPU(s):     0-11
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts

$ uname -a
Linux pc107 4.4.0-92-generic #115-Ubuntu SMP Thu Aug 10 09:04:33 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
LSB Version:	core-9.20160110ubuntu0.2-amd64:core-9.20160110ubuntu0.2-noarch:security-9.20160110ubuntu0.2-amd64:security-9.20160110ubuntu0.2-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.3 LTS
Release:	16.04
Codename:	xenial

$ conda info
Current conda install:

               platform : linux-64
          conda version : 4.3.25
       conda is private : False
      conda-env version : 4.3.25
    conda-build version : not installed
         python version : 2.7.13.final.0
       requests version : 2.13.0
       root environment : ~/miniconda2  (writable)
    default environment : ~/miniconda2
       envs directories : ~/miniconda2/envs
                          ~/.conda/envs
          package cache : ~/miniconda2/pkgs
                          ~/.conda/pkgs
           channel URLs : &lt;A href="https://conda.anaconda.org/intel/linux-64" target="_blank"&gt;https://conda.anaconda.org/intel/linux-64&lt;/A&gt;
                          &lt;A href="https://conda.anaconda.org/intel/noarch" target="_blank"&gt;https://conda.anaconda.org/intel/noarch&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/free/linux-64" target="_blank"&gt;https://repo.continuum.io/pkgs/free/linux-64&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/free/noarch" target="_blank"&gt;https://repo.continuum.io/pkgs/free/noarch&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/r/linux-64" target="_blank"&gt;https://repo.continuum.io/pkgs/r/linux-64&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/r/noarch" target="_blank"&gt;https://repo.continuum.io/pkgs/r/noarch&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/pro/linux-64" target="_blank"&gt;https://repo.continuum.io/pkgs/pro/linux-64&lt;/A&gt;
                          &lt;A href="https://repo.continuum.io/pkgs/pro/noarch" target="_blank"&gt;https://repo.continuum.io/pkgs/pro/noarch&lt;/A&gt;
            config file : ~/.condarc
             netrc file : ~/.netrc
           offline mode : False
             user-agent : conda/4.3.25 requests/2.13.0 CPython/2.7.13 Linux/4.4.0-92-generic debian/stretch/sid glibc/2.23    
                UID:GID : 1000:1000

$ cat ~/.condarc
channels:
  - intel
  - defaults
&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 21 Sep 2017 09:02:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146366#M1036</guid>
      <dc:creator>Corrado_A_</dc:creator>
      <dc:date>2017-09-21T09:02:37Z</dc:date>
    </item>
    <item>
      <title>I get the same message. Is</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146367#M1037</link>
      <description>&lt;P&gt;I get the same message. Is that normal?&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2017 00:33:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146367#M1037</guid>
      <dc:creator>Yong_J_Intel</dc:creator>
      <dc:date>2017-12-01T00:33:05Z</dc:date>
    </item>
    <item>
      <title>The message is normal. Intel</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146368#M1038</link>
      <description>&lt;P&gt;The message is normal. Intel has implemented TF primitives with MKL-DNN, which takes advantage of AVX2, AVX-512, and so on. The warning comes from a part of the code that is not performance-sensitive when MKL-DNN is used. It would make sense to suppress the warning when MKL-DNN is used, but that has not happened yet.&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2017 16:21:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146368#M1038</guid>
      <dc:creator>Robert_C_Intel</dc:creator>
      <dc:date>2017-12-01T16:21:47Z</dc:date>
    </item>
    <item>
      <title>Are there plans to release an</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146369#M1039</link>
      <description>&lt;P&gt;Are there plans to release an optimized version of TensorFlow for Windows?&amp;nbsp;&lt;/P&gt;

&lt;P&gt;The latest TF I could find in the conda intel channel is "1.1.0-np112py36_0", which is quite old...&lt;/P&gt;

&lt;P&gt;Mike&lt;/P&gt;</description>
      <pubDate>Thu, 25 Jan 2018 09:30:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146369#M1039</guid>
      <dc:creator>Skala__Mike</dc:creator>
      <dc:date>2018-01-25T09:30:40Z</dc:date>
    </item>
    <item>
      <title>What is the timeline for</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146370#M1040</link>
      <description>&lt;P&gt;What is the timeline for a macOS wheel?&lt;/P&gt;</description>
      <pubDate>Sat, 03 Mar 2018 16:04:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146370#M1040</guid>
      <dc:creator>prelovac__vladimir</dc:creator>
      <dc:date>2018-03-03T16:04:41Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146371#M1041</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;It seems that this version does not use Xeon Phi Knights Corner cards plugged into the host. I ran some tests, and it only uses the CPU, never the Xeon Phi cards. Is there any way to use the Xeon Phi cards via MKL Automatic Offload, as in Fortran/C/C++?&lt;/P&gt;
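
&lt;P&gt;For comparison, the way I enable Automatic Offload for host Fortran/C/C++ programs linked against MKL is roughly the following (the program name is just a placeholder):&lt;/P&gt;

```shell
# Automatic Offload: MKL decides per call whether to offload work
# (e.g., large GEMMs) to the Xeon Phi coprocessor.
export MKL_MIC_ENABLE=1
./my_mkl_program   # placeholder for any MKL-linked binary
```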

&lt;P&gt;&amp;nbsp;Thanks,&lt;/P&gt;

&lt;P&gt;&amp;nbsp; Minh&lt;/P&gt;</description>
      <pubDate>Fri, 16 Mar 2018 13:11:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Distribution-for-Python/Intel-Optimization-for-TensorFlow-Wheel-Now-Available/m-p/1146371#M1041</guid>
      <dc:creator>Minh_H_</dc:creator>
      <dc:date>2018-03-16T13:11:14Z</dc:date>
    </item>
  </channel>
</rss>

