## Matrix Library for C++ with MKL


Hi all,

Tired of writing dgemm(transa, transb, m, n, k, 1, a, lda, b, ldb, 0, c, ldc) when multiplying two matrices? My Matrix Library for C++ comes to the rescue -- now you can write matrix operations in the most natural way, like c = a*b!

Here's the project's page:

http://huichen.org/mlcpp

Mlcpp uses MKL (it also works with GotoBLAS and ATLAS) to handle matrix multiplication, so it's much faster than some existing C++ template libraries such as Eigen (which provides interfaces similar to mlcpp's but has its own BLAS implementation). See the benchmarks:

http://huichen.org/mlcpp/benchmark.html

Please feel free to give it a try and let me know what you think.

Hui

Hui_Chen (Beginner), 02-09-2011 02:08 PM

4 Replies


Gennady_F_Intel (Moderator), 02-09-2011 11:08 PM

Hui, it's not completely clear which functionality you use from MKL. Or do you just link mlcpp against MKL's libraries?

--Gennady


Gennady, for now only gemm (shown in the last two figures) and geev are used, through a C++ wrapper/binding to MKL (yes, it's linked against MKL's libraries). I wrote all the O(n^2) and O(n) functions myself, and they are quite efficient, as you can see in the benchmarks. However, I'm looking to add more wrappers for O(n^3) MKL calls.

Hui

Hui_Chen (Beginner), 02-10-2011 12:32 AM


Gennady_F_Intel (Moderator), 02-10-2011 01:10 AM

Well, what performance overhead do you see with your dgemm wrapper compared with calling dgemm directly?


The operator * is just a thin wrapper around gemm calls. Whether it's sgemm or zgemm is determined at compile time, so there's no run-time penalty. The only overhead is that it allocates a temporary MxN matrix inside the call to hold the result, whereas in plain C you can reuse the same temporary array across gemm calls; the time spent allocating that matrix is negligible, though.

Hui_Chen (Beginner), 02-10-2011 08:15 AM
