Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

2017 version of MKL?


Is MKL 2017 still available?

I have a Phi 3120A and would like to try the Automatic Offload feature, which was removed in the MKL 2018 release.
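For reference, with an MKL 2017 installation and a working MIC runtime, Automatic Offload is enabled through environment variables (or `mkl_mic_enable()` from C). A minimal sketch, assuming the coprocessor is visible to the host:

```shell
# Minimal Automatic Offload setup, assuming MKL 2017 plus the MPSS/MIC runtime;
# AO then offloads sufficiently large BLAS level-3 calls (e.g. ?GEMM) automatically.
export MKL_MIC_ENABLE=1          # turn Automatic Offload on
export MKL_MIC_WORKDIVISION=0.8  # optional: fraction of the work sent to the Phi
```

The work-division value is just an illustration; MKL can also pick the split itself if the variable is left unset.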


Actually, you may try to request this version from this page -  or submit a ticket via the Intel Online Service Center if you have a valid license.


I see MKL-2017 is available in Conda, but there are complications.

  • I don't have paid support, just the free license.
  • At this point I'm running on Windows, which no longer has an MKL 2017 installer.
  • I'm using the Intel DNN library, and it doesn't have a 2017 version anymore, so it may be using the installed MKL 2019 DLL?
  • I doubt I'd be able to build TensorFlow on a 1xx Phi (Bazel is difficult to build even on a regular system).
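Since the Conda route was mentioned: the older MKL packages can still be pinned by version. A sketch only; the exact version and channel below are assumptions, and what actually exists varies by platform:

```shell
# Hypothetical pins: check `conda search -c intel mkl` for the builds that
# really exist for your platform before relying on these exact strings.
conda create -n mkl2017 -c intel mkl=2017.0.4
conda activate mkl2017
```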

I'm building a complex set of applications whose limiting factors range from memory bandwidth, to compute (cores × clock), to floating-point math (neural). The goal is to prove scale and limiting factors versus cost, i.e. should we invest in one big CPU, a cluster, a GPU, or a mix?

I was pretty successful running Linux on the Phi, and learned a lot.
This was definitely off-label use for a 1xx Phi; this test is scaled down, but it still runs out of memory quickly.
It took about 47 threads just to catch up to 1 current-gen core (or 4 threads on a 2-core Atom with only 2 GB RAM), due to the old in-order cores and slow clock. [Around 32 threads the app started needing virtual memory to complete; above 47 threads, overall performance decreased.]
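The scaling behavior above can be probed with a small script like the following. This is a sketch of my own, not the actual application: `burn` is a stand-in CPU-bound workload, and process counts/iteration sizes are arbitrary:

```python
# Illustrative worker-scaling probe (workload and names are hypothetical,
# not from the original app). Uses processes so each worker gets a core.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    # CPU-bound busy loop standing in for one worker's share of the job.
    s = 0
    for i in range(n):
        s += i * i
    return s

def scaling(work=200_000, workers_list=(1, 2, 4, 8)):
    # Time the same total-per-worker job at each worker count; on an
    # in-order, low-clock core you would expect the curve to flatten early.
    results = {}
    for w in workers_list:
        t0 = time.perf_counter()
        with ProcessPoolExecutor(max_workers=w) as ex:
            list(ex.map(burn, [work] * w))
        results[w] = time.perf_counter() - t0
    return results

if __name__ == "__main__":
    for w, t in scaling().items():
        print(f"{w:2d} workers: {t:.3f} s")
```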

[htop screenshot: mid-run of an app on 57 threads]

I see a promising benchmark showing that a Phi 7250 can beat a dual 32-core monster, or a GTX 1080, at inference on AlexNet and GoogLeNet,
but a 1080 is much more affordable and available than the others...

Just disappointed I won't see my Phi running at full steam after all the work I put into it...