I am interested in evaluating the impact of vector-extension compiler flags (e.g., core-avx2) on IntelPython's performance.
Does anybody know if there is a way of manually compiling IntelPython using custom flags?
Also, I am not sure whether it makes sense to experiment with IntelPython, or whether I should compile MKL directly. If so, does anybody know if there is a way of compiling MKL with custom flags?
Your question is a little vague, so please feel free to clarify if my response is off the mark.
IntelPython is a distribution bundling the Python interpreter (CPython) with a suite of Python packages such as NumPy, Pandas, etc.
Some of these packages use native extensions, written either directly in a low-level language or generated by tools such as Cython, SWIG, etc.
All native extensions, as well as CPython itself, accept configuration parameters controlling compiler flags, choice of optimizations, etc. So manual compilation is certainly possible.
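As a rough illustration of what such a manual build looks like, here is a sketch of compiling CPython from source with custom optimization flags. The flag `-mavx2` is GCC syntax; with the Intel compiler you would use something like `-xCORE-AVX2` instead. This is a generic build fragment, not the exact procedure Intel uses for its distribution:

```shell
# Sketch: build CPython from source with custom compiler flags.
# -mavx2 is the GCC spelling; icc/icx would use e.g. -xCORE-AVX2.
# Run from an unpacked CPython source tree.
./configure CFLAGS="-O3 -mavx2" --enable-optimizations
make -j"$(nproc)"
```

Native extension packages such as NumPy generally honor the same environment variables (`CFLAGS`, `CC`) when built from source, so the same idea applies to them.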
Since it is unclear which package's performance you intend to study, let me point out that each component of the IntelPython distribution arrives on your computer as a conda package; you can find these in the pkgs/ folder of your installation. Inside the folder in pkgs/ that corresponds to the package of interest, you will find an info/ folder, which contains a recipe/ folder.
The conda recipe contained in that folder details exactly how the package was compiled.
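To make the layout concrete, here is a small sketch that mocks up the pkgs/ structure described above and then locates the recipe directories with `find`. The package directory name (`numpy-1.16.1-py37_0`) is a made-up example; on a real system you would point `ROOT` at your actual IntelPython installation prefix instead of a temporary mock:

```shell
# Illustrative only: mock the pkgs/ layout of a conda-based distribution,
# then locate each package's conda recipe. On a real system, set ROOT to
# your IntelPython installation prefix instead.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/pkgs/numpy-1.16.1-py37_0/info/recipe"
touch "$ROOT/pkgs/numpy-1.16.1-py37_0/info/recipe/meta.yaml"

# Each recipe directory records exactly how that package was compiled
# (meta.yaml, build scripts, applied patches).
find "$ROOT/pkgs" -type d -name recipe -path '*/info/*'
```

Reading the meta.yaml and build scripts found this way shows the compiler flags and build options used, which is the natural starting point for rebuilding a package with your own flags.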
Hopefully this serves as a starting point for your endeavor.
Sorry, I didn't explain myself very well because I had a mistaken idea of what IntelPython is.
Thank you very much for your clarification.