Intel® oneAPI Math Kernel Library

MKL lapack slow on Xeon Phi KNL

Adam_S_5
Beginner

I'm running a Xeon Phi Knights Landing (64-core) and an Intel i7-6900K side by side for speed comparisons.  I'm using Python 3 with the latest NumPy (1.11.1) linked against the latest MKL (11.3.3) libraries on both machines (via an Anaconda installation).

The operation in question is a call to numpy.linalg.lstsq, which in turn calls LAPACK.  With MKL_NUM_THREADS=1 and vector dimensions ranging from 100 to 1,000, I observe about 5x faster performance on the i7.  Increasing the number of threads also scales better on the i7.  Without setting MKL_NUM_THREADS, the difference can be about 8x in favor of the i7.
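
For illustration, here is a minimal sketch of the kind of timing I'm doing (not the exact script I used; the sizes, the repeat count, and the helper name time_lstsq are just for this example):

import os
# MKL_NUM_THREADS must be set before NumPy (and hence MKL) is loaded.
os.environ["MKL_NUM_THREADS"] = "1"

import time
import numpy as np

def time_lstsq(n, repeats=5):
    # Time numpy.linalg.lstsq on a random n x n system; report the best of several runs.
    rng = np.random.RandomState(0)
    a = rng.randn(n, n)
    b = rng.randn(n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.time()
        np.linalg.lstsq(a, b, rcond=-1)  # rcond=-1 matches the older default behavior
        best = min(best, time.time() - t0)
    return best

for n in (100, 300, 1000):
    print("n=%d: %.4f s" % (n, time_lstsq(n)))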

This is surprising to me, since I have done speed tests of matrix multiply (using Theano's check_blas.py) in which per-core performance is roughly comparable or even favors the KNL, and with no thread limit the KNL can be 10x faster.
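
(For reference, the matrix-multiply timing amounts to something like the snippet below, though I actually used check_blas.py rather than this exact code, and the size 2000 is arbitrary:)

import time
import numpy as np

n = 2000  # arbitrary size for illustration
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
c = np.dot(a, b)  # dispatches to MKL dgemm when NumPy is linked against MKL
elapsed = time.time() - t0
print("dgemm %dx%d: %.3f s, %.1f GFLOP/s" % (n, n, elapsed, 2.0 * n ** 3 / elapsed / 1e9))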

I'm not deeply familiar with least-squares solver routines, but the majority (about 4/5) of the instruction counts during the lstsq routine do report as vectorized (measured with perf stat -e r20C2,r40C2 ...).  Maybe it's not really utilizing the full register width (8 double precision lanes) most of the time?  Could this alone explain the difference?  The matrix multiply, of course, reports an overwhelming majority of operations as vectorized.

Is there any hope of improvement?  

Happy to provide more numbers or test scripts.

Thanks,

Adam

Gennady_F_Intel
Moderator

Adam, it seems that some of the LLS routines (I am not sure which of them are called from numpy lstsq) are not well optimized for KNL at these problem sizes. When you increase the problem size, what performance gap do you see?

Adam_S_5
Beginner

Hi Gennady,

Thanks for writing.  I've run some larger problem sizes, and I've dug a little deeper to see which routines are being called.

Timing results for problem sizes 1,000, 2,000, and 3,000 are attached.  I'm generally seeing a 2x-5x advantage for the i7 across the board.  The timing script is also attached (as "lstsq_speed.txt"; it is really a ".py" file, but this website does not accept that extension).  Note that this uses a positive definite matrix, as in my original problem; I'm not sure whether that is relevant.

For context, in my original problem I was solving a system of size 756.  On the i7 this solves in about 0.2 s each time, but on the KNL it takes about 1.0 s (using one core on either).  That is enough to make it a significant performance factor in this particular problem.

The NumPy lstsq function calls the LAPACK routine dgelsd.  When I run with "perf record python lstsq_speed.py -t 1" and then look at "perf report", it appears that the majority of time is spent in libmkl_avx2.so on the i7 and correspondingly in libmkl_avx512_mic.so on the KNL.  Most of that time is split between dgemv and dgemm, which annotation shows spending most of their time manipulating ymm (on i7) and zmm (on KNL) registers, as expected.

A smaller amount of time is spent in LAPACK proper: mkl_lapack_ps_avx[2/512_mic]_dlasd4 from libmkl_avx[2/512_mic].so, and both mkl_lapack_dlals0 and mkl_lapack_dlasd8 from libmkl_core.so, all of which operate on xmm registers on both machines.  The time spent in these altogether is about 8% on the i7 and 24% on the KNL, which makes sense given the KNL's lower clock rate.  Maybe I'm not capturing or analyzing this correctly, but it doesn't seem like vectorization alone explains the gap.
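
(One way to isolate the dgelsd driver itself, independent of the rest of my script, is to call it through SciPy; this is a sketch assuming SciPy 0.17+ is installed, which is the version that exposes the lapack_driver argument:)

import time
import numpy as np
from scipy.linalg import lstsq

n = 756  # size from my original problem
rng = np.random.RandomState(0)
m = rng.randn(n, n)
a = m.dot(m.T) + n * np.eye(n)  # positive definite, as in the original problem
b = rng.randn(n)

t0 = time.time()
x, residues, rank, sv = lstsq(a, b, lapack_driver="gelsd")  # same driver NumPy's lstsq uses
print("gelsd, n=%d: %.3f s" % (n, time.time() - t0))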

Any chance this can be addressed and KNL brought up to speed?

Thanks,

Adam
Konstantin_A_Intel

Hi Adam,

Thank you for reporting the issue. I have reproduced the low DGELSD performance.

Let our team look in more detail at what we can do. We will keep you updated on any progress.

Regards,

Konstantin
