Intel(R) Distribution for Python* will run on AMD hardware.
Please see the response to a related question about the performance of Intel(R) MKL on AMD processors.
Thanks for your reply. Is the code optimized for Intel hardware so that it performs better than on AMD's Threadripper? Does Intel Distribution for Python take advantage of multi-core CPUs? How many cores do you recommend? Any recommendation between the i7-8700K, Threadripper 1900X, or 1950X?
Yes, the binaries in Intel(R) Distribution for Python* are built to take advantage of features available in the latest Intel hardware, and they exploit multi-core CPUs through Intel performance libraries such as Intel(R) MKL and Intel(R) DAAL, among others.
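As a quick sanity check, one can confirm that a given Python environment is actually using an MKL-backed NumPy by inspecting NumPy's build configuration. A minimal sketch (the substring search is a heuristic, since `numpy.show_config()` output formatting varies across NumPy versions):

```python
# Sketch: check whether the installed NumPy was built against Intel MKL.
# numpy.show_config() is standard NumPy API; by default it prints a report
# of the BLAS/LAPACK libraries the build links against.
import io
from contextlib import redirect_stdout

def numpy_uses_mkl():
    """Return True if NumPy's build config mentions MKL, False otherwise.

    Returns None if NumPy is not installed in this environment.
    """
    try:
        import numpy
    except ImportError:
        return None
    buf = io.StringIO()
    with redirect_stdout(buf):
        numpy.show_config()
    return "mkl" in buf.getvalue().lower()

if __name__ == "__main__":
    print("NumPy built against MKL:", numpy_uses_mkl())
```

In an Intel Distribution for Python environment this report would be expected to list MKL; in a stock `pip`-installed NumPy it typically lists OpenBLAS instead.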
Performance comparisons and hardware recommendations depend on your specific workload, so we cannot provide them in general.
Questions to consider: is your application's performance expected to scale with additional compute resources? Is it compute-bound or memory-bound? Are the computations mostly integer or floating-point? Etc.
Thanks Oleksandr. By "Intel Distribution for Python", do you mean Python, TensorFlow, Keras, and the rest of the frameworks and software libraries related to deep learning and machine learning?
I do not know my new workload yet. I have a PhD in CS but haven't worked on deep learning. I plan to use 1-4 Nvidia GPUs for CUDA computations. In this case, is it better to go for the i5-8400 or even the i7-8700K rather than the Threadripper? What is the recommended number of cores and amount of memory?
Oleksandr, the question is: which code path do Intel MKL / DAAL choose on non-Intel CPUs?
For instance, on an AMD CPU with AVX2 support, do you use the AVX2 code path?
What about an AMD CPU with only AVX support?
What about an AMD CPU with only SSE3 support?
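As background for this question, one can at least check which of these SIMD levels the local CPU advertises. A minimal sketch that reads `/proc/cpuinfo` (Linux-only; note this reports what the hardware supports, not which code path MKL's dispatcher actually selects on a non-Intel CPU):

```python
# Sketch: report which of the SIMD levels mentioned above (SSE3, AVX, AVX2)
# this CPU advertises, by parsing the "flags" line of /proc/cpuinfo.
# Linux-only; on other platforms the function returns an empty set.
def cpu_simd_flags(path="/proc/cpuinfo"):
    """Return the subset of {'sse3', 'avx', 'avx2'} advertised by the CPU.

    In /proc/cpuinfo, SSE3 is reported under its historical name "pni"
    (Prescott New Instructions), so we map flag names accordingly.
    """
    name_map = {"pni": "sse3", "avx": "avx", "avx2": "avx2"}
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {v for k, v in name_map.items() if k in flags}
    except OSError:
        pass  # not Linux, or /proc unavailable
    return set()

if __name__ == "__main__":
    found = sorted(cpu_simd_flags())
    print("SIMD levels advertised:", ", ".join(found) if found else "unknown")
```

Whether MKL's runtime dispatcher then uses the matching optimized kernel on a non-Intel vendor string is exactly the open question in this thread; this check only establishes what the hardware could run.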