One iteration of my code shows ~2 s spent in mkl_serv_cpuhaspnr out of a total run time of 20 s, and it scales with the number of iterations: 10 iterations take 223 s, of which 22.404 s are spent in mkl_serv_cpuhaspnr. I have attached a screen capture of the VTune summary for the 10-iteration run.
I couldn't find much documentation about this function, except for one website claiming that it "checks for SSE 4.1 (pnr = Penryn)". But then I would expect it to be done once for the entire run.
Is there a way to avoid calling this function? What does it do anyway?
Linux CentOS 5.5, kernel 2.6.18-194.32.1.el5
CPU: Intel Xeon CPU E5345 @ 2.33GHz
Intel Composer XE 2011 Update 3 for Linux (MKL 10.3 Update 3) (composerxe-2011.3.174)
It's very strange that mkl_serv_cpuhaspnr takes more time than mkl_blas_dgemm. Perhaps your test called it millions of times, but ran dgemm only on very small matrices. Could you please describe the matrices used in your program? Which parts of MKL are used there?
I'll keep making the changes in my code. I didn't know that calling BLAS for small matrices was a bad idea.
Just to answer your question, I link to (sequential) MKL with the following options:
-Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread
Anyway, thanks a lot Victor for putting me on the right track.
Yes, it's better to use direct 2x2 matrix/matrix and matrix/vector multiplications (with simple inlining) in order to eliminate the huge overhead of calling MKL functions at such small sizes. As you know, MKL gives peak performance on medium and large tasks by using parallelization.
For these sizes you can also try to evaluate the Intel IPP library, which provides a set of highly optimized functions for small-matrix calculations. The functions are optimized for operations on small matrices and small vectors, particularly for matrices of size 3x3, 4x4, 5x5, 6x6, and for vectors of length 3, 4, 5, 6. --Gennady