Intel® oneAPI Math Kernel Library

Matrix Library for C++ with MKL

Hui_Chen
Beginner
Hi all,

Tired of writing dgemm(transa, transb, m, n, k, 1.0, a, lda, b, ldb, 0.0, c, ldc) just to multiply two matrices? My Matrix Library for C++ is here to the rescue -- now you can write matrix operations in the most natural way, like c = a*b!
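For illustration, here is a minimal sketch of what such an operator* wrapper could look like. The Matrix type and row-major storage here are assumptions made for the example, not mlcpp's actual code:

#include <mkl.h>
#include <vector>

// Hypothetical minimal Matrix type for illustration; mlcpp's real
// class is more complete.
struct Matrix {
    int rows, cols;
    std::vector<double> data;   // row-major storage (assumption)
    Matrix(int r, int c) : rows(r), cols(c), data((size_t)r * c) {}
};

// c = a * b, delegating the O(n^3) work to MKL's cblas_dgemm
// (alpha = 1.0, beta = 0.0, no transposes).
Matrix operator*(const Matrix& a, const Matrix& b) {
    Matrix c(a.rows, b.cols);
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                a.rows, b.cols, a.cols,
                1.0, a.data.data(), a.cols,
                b.data.data(), b.cols,
                0.0, c.data.data(), c.cols);
    return c;
}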

Here's the project's page:

http://huichen.org/mlcpp

Mlcpp uses MKL (it also works with GotoBLAS and ATLAS) to handle matrix multiplication, so it's much faster than some existing C++ template libraries such as Eigen (which provides a similar interface to mlcpp but has its own BLAS implementation). See the benchmarks:

http://huichen.org/mlcpp/benchmark.html

Please feel free to give it a try and let me know what you think.

Hui
Gennady_F_Intel
Moderator
Hui, it's not completely clear which functionality you use from MKL. Or do you just link mlcpp with MKL's libraries?
--Gennady

Hui_Chen
Beginner
Gennady, for now only gemm (shown in the last two figures) and geev are used, through a C++ wrapper/binding to MKL (yes, it's linked against MKL's libraries). I wrote all the O(n^2) and O(n) functions myself, and they are quite efficient, as you can see in the benchmarks. However, I'm looking to add wrappers for more of MKL's O(n^3) calls.
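As a rough idea of what such a binding looks like, here is a hedged sketch of an eigenvalue wrapper built on LAPACKE_dgeev; the function name and interface are invented for the example and are not mlcpp's actual API:

#include <mkl.h>
#include <vector>

// Sketch of a geev binding: eigenvalues of a real n x n matrix
// stored row-major in 'a'. dgeev overwrites its input, so we copy.
void eigenvalues(const std::vector<double>& a, int n,
                 std::vector<double>& wr, std::vector<double>& wi) {
    std::vector<double> work(a);   // working copy; dgeev destroys it
    wr.assign(n, 0.0);             // real parts of the eigenvalues
    wi.assign(n, 0.0);             // imaginary parts
    // 'N', 'N': compute neither left nor right eigenvectors.
    LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'N', n,
                  work.data(), n, wr.data(), wi.data(),
                  nullptr, 1, nullptr, 1);
}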

Hui
Gennady_F_Intel
Moderator
Well, what performance overhead do you see with your dgemm wrapper compared with calling dgemm directly?
Hui_Chen
Beginner
The operator * is just a thin wrapper around gemm calls; whether sgemm or zgemm is called is determined at compile time, so there's no run-time penalty. The only overhead is that it allocates a temporary MxN matrix inside the call to hold the result, whereas in plain C you can reuse the same array across gemm calls; the time spent allocating the temporary matrix is negligible, though.
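One way to get this compile-time selection is plain function overloading, where overload resolution picks the matching BLAS routine from the element type, so there is no runtime branch. This is only a sketch of the technique; mlcpp may use template specialization or something else:

#include <mkl.h>
#include <complex>

// float elements resolve to sgemm at compile time.
inline void gemm(int m, int n, int k,
                 const float* a, const float* b, float* c) {
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0f, a, k, b, n, 0.0f, c, n);
}

// double elements resolve to dgemm.
inline void gemm(int m, int n, int k,
                 const double* a, const double* b, double* c) {
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, a, k, b, n, 0.0, c, n);
}

// complex<double> elements resolve to zgemm; note that zgemm
// takes alpha/beta (and the matrices) as void pointers.
inline void gemm(int m, int n, int k,
                 const std::complex<double>* a,
                 const std::complex<double>* b,
                 std::complex<double>* c) {
    const std::complex<double> alpha(1.0, 0.0), beta(0.0, 0.0);
    cblas_zgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, &alpha, a, k, b, n, &beta, c, n);
}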