Intel® oneAPI Math Kernel Library

Multithreading in MKL

pawanlri
Beginner
Hello,
I want to do a simple multi-threaded matrix product using the routine mkl_dcoomm() in MKL.
When compiling, I link against the threaded libraries: -lmkl_gnu_thread -lmkl_core -lmkl_intel_lp64
The program compiles, but I do not see any speedup; I have tried a range of matrix sizes.
I use this routine for the matrix-vector product, since the matrix-vector routines are not multi-threaded in MKL Sparse BLAS.
When running the program, I set the number of threads with: set MKL_NUM_THREADS=4
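(For reference, a typical GNU-toolchain link and run sequence looks roughly like the sketch below; the exact paths and runtime libraries depend on the MKL version and the link-line advisor, and the GNU OpenMP runtime -lgomp is easy to forget when using mkl_gnu_thread.)

    gcc myprog.c -m64 -I${MKLROOT}/include \
        -L${MKLROOT}/lib/intel64 \
        -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core -lgomp -lpthread -lm -ldl
    export MKL_NUM_THREADS=4     # "set MKL_NUM_THREADS=4" on a Windows command prompt
    ./a.out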
I do not see any significant speedup when varying the number of cores.
Am I missing something that is required to enable parallelism?
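One quick sanity check (a minimal sketch, not part of the original post) is to query MKL from within the program to confirm that the thread setting is actually reaching the process:

    #include <stdio.h>
    #include <mkl.h>   /* declares mkl_get_max_threads / mkl_set_num_threads */

    int main(void)
    {
        /* Reports how many threads MKL will try to use; if this prints 1,
           the MKL_NUM_THREADS environment setting is not being picked up. */
        printf("MKL max threads: %d\n", mkl_get_max_threads());

        /* The thread count can also be forced programmatically. */
        mkl_set_num_threads(4);
        printf("MKL max threads after mkl_set_num_threads(4): %d\n",
               mkl_get_max_threads());
        return 0;
    }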
Thanks,
Pawan
2 Replies
mecej4
Honored Contributor III
Not every program will show a speedup just because parallelism is enabled with compiler switches, and not all MKL routines have an inherent capability to use SMP parallelism.

See the section Parallelism in the Overview chapter of the Intel Math Kernel Library Reference Manual for guidance on the ways to take advantage of parallelism.
Gennady_F_Intel
Moderator

Pawan,

: I do not see any significant speedup when varying the number of cores.

The performance of sparse matrix operations is much lower than that of dense BLAS because the memory access patterns are irregular and the ratio of floating-point operations to memory accesses is lower than in dense operations. That is why you don't see any significant speedup.

So, if the matrix fits into RAM in dense form, it would be more efficient to use dense BLAS calculations.

In such cases it may make sense to convert the matrix from sparse to dense and then use the dense matrix-vector routines, as in the sketch below.
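A minimal sketch of that dense path (hypothetical sizes, row-major storage; in practice the dense array a would be expanded from the COO arrays val, rowind, colind):

    #include <stdio.h>
    #include <mkl.h>   /* cblas_dgemv is threaded in the MKL BLAS layer */

    int main(void)
    {
        const MKL_INT m = 4000, n = 4000;   /* hypothetical dense size */
        double *a = (double *)mkl_malloc((size_t)m * n * sizeof(double), 64);
        double *x = (double *)mkl_malloc((size_t)n * sizeof(double), 64);
        double *y = (double *)mkl_malloc((size_t)m * sizeof(double), 64);

        /* Fill A and x with placeholder values; a real program would
           scatter the sparse COO entries into the dense array instead. */
        for (MKL_INT i = 0; i < m * n; ++i) a[i] = 1.0;
        for (MKL_INT i = 0; i < n; ++i)     x[i] = 1.0;

        /* y = 1.0*A*x + 0.0*y, row-major layout, leading dimension n */
        cblas_dgemv(CblasRowMajor, CblasNoTrans, m, n,
                    1.0, a, n, x, 1, 0.0, y, 1);

        printf("y[0] = %f\n", y[0]);
        mkl_free(a); mkl_free(x); mkl_free(y);
        return 0;
    }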

--Gennady