After installing the Intel-Optimized Tensorflow with MKL-DNN, why does it still say that AVX2, FMA, etc, are not supported. Can someone please guide me in detail about this? I am looking to get the best performance.
Don't worry, this message is expected. By design, we build Intel-Optimized TensorFlow with the general AVX instruction set only, which is why the message says the advanced instructions AVX2 and FMA are not supported by the build. However, Intel-Optimized TensorFlow is based on MKL-DNN, which detects the CPU's instruction set at runtime and dispatches CPU-specific code on machines with AVX2, AVX-512, etc. So the kernel code accelerated by Intel MKL-DNN is not actually limited by the TF build options.
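The runtime dispatch described above can be sketched in a few lines: detect the best supported ISA once, then route each operation to the kernel built for it. This is a minimal illustration of the mechanism, not MKL-DNN's actual internals; the ISA names and kernel table here are assumptions for demonstration.

```python
def detect_best_isa(supported):
    """Pick the newest instruction set the CPU reports, or fall back to generic."""
    # Preference order from newest to oldest (illustrative list).
    for isa in ("avx512", "avx2", "avx", "sse4.2"):
        if isa in supported:
            return isa
    return "generic"

# Stand-ins for ISA-specific kernels; in MKL-DNN these would be
# separately compiled/JIT-generated code paths, not Python lambdas.
KERNELS = {
    "avx512": lambda a, b: [x * y for x, y in zip(a, b)],
    "avx2":   lambda a, b: [x * y for x, y in zip(a, b)],
    "generic": lambda a, b: [x * y for x, y in zip(a, b)],
}

def dispatch_mul(a, b, supported):
    """Run an elementwise multiply via the best available kernel."""
    isa = detect_best_isa(supported)
    kernel = KERNELS.get(isa, KERNELS["generic"])
    return kernel(a, b)

print(dispatch_mul([1, 2], [3, 4], {"avx2", "avx"}))  # routed to the avx2 kernel
```

The key point is that the dispatch decision happens at runtime, on the machine actually running the code, so the build-time `-march` setting does not cap what the MKL-DNN kernels can use.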
We will consider documenting this somewhere. The Intel Optimization for TensorFlow installation guide is here: https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide
Note: All binaries distributed by Intel were built against the TensorFlow v1.12.0 tag in a CentOS container with gcc 4.8.5 and glibc 2.17, with the following compiler flags (shown below as passed to bazel*):
--cxxopt=-D_GLIBCXX_USE_CXX11_ABI=0 --copt=-march=corei7-avx --copt=-mtune=core-avx-i --copt=-O3
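If you want to confirm on your own machine that the MKL-DNN build is active and which ISA its kernels actually use, the sketch below shows two common checks. Treat both as assumptions to verify locally: `IsMklEnabled()` exists in TF 1.x builds but its module path has moved across versions, and `MKLDNN_VERBOSE=1` is an MKL-DNN environment variable that logs each executed primitive (look for `avx2`/`avx512` tokens in that output).

```python
import os

def mkl_verbose_env():
    """Env setting that makes MKL-DNN log the ISA of each kernel it runs."""
    # With MKLDNN_VERBOSE=1, each executed primitive is printed to stdout;
    # avx2/avx512 in those lines confirms CPU-specific dispatch at runtime.
    return {"MKLDNN_VERBOSE": "1"}

def is_mkl_build():
    """Return True/False for an MKL build, or None if TF is unavailable."""
    try:
        # Assumed helper location for TF 1.x; newer versions moved it.
        from tensorflow.python import pywrap_tensorflow
        return bool(pywrap_tensorflow.IsMklEnabled())
    except Exception:
        return None  # TensorFlow not importable in this environment

os.environ.update(mkl_verbose_env())
print("MKL build:", is_mkl_build())
```

Set the environment variable before running your model script, then inspect the verbose log to see which instruction set the kernels were generated for.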