Beginner
58 Views

Matrix Library for C++ with MKL

Hi all,

Tired of writing dgemm(transa, transb, m, n, k, 1.0, a, lda, b, ldb, 0.0, c, ldc) just to multiply two matrices? My Matrix Library for C++ comes to the rescue -- now you can write matrix operations in the most natural way: c = a*b !

Here's the project's page:

http://huichen.org/mlcpp

Mlcpp uses MKL (it also works with GotoBLAS and ATLAS) to handle matrix multiplication, so it's much faster than some existing C++ template libraries such as Eigen (which provides interfaces similar to mlcpp's but ships its own BLAS implementation). See the benchmarks:

http://huichen.org/mlcpp/benchmark.html

Please feel free to give it a try and let me know what you think.

Hui
4 Replies
Moderator

Hui, it's not completely clear which functionality you use from MKL -- or do you just link mlcpp with MKL's libraries?
--Gennady

Beginner

Gennady, for now only gemm (shown in the last two figures) and geev are used, through a C++ wrapper/binding to MKL (yes, it's linked against MKL's libraries). I wrote all the O(n^2) and O(n) functions myself, and they are quite efficient, as you can see in the benchmarks. However, I'm looking to add wrappers for more of MKL's O(n^3) calls.

Hui
Moderator

Well, what performance overhead do you get with dgemm through mlcpp compared with a pure dgemm call?
Beginner

The operator * is just a thin wrapper around gemm calls: whether it's sgemm or zgemm is determined at compile time, so there's no run-time penalty. The only overhead is that it allocates a temporary MxN matrix inside the call to hold the result, whereas in C you could reuse the same temporary array across gemm calls; still, the time spent allocating the temporary matrix is negligible.