Intel® Math Kernel Library (Intel® MKL) is a highly optimized, extensively threaded, and thread-safe library of mathematical functions for engineering, scientific, and financial applications that require maximum performance.
Intel MKL 2018 Update 3 packages are now ready for download.
Intel MKL is available as part of Intel® Parallel Studio XE and Intel® System Studio. Please visit the Intel® Math Kernel Library Product Page.
For details on what's new in Intel MKL 2018 and in Update 3, see the release notes: https://software.intel.com/en-us/articles/intel-math-kernel-library-release-notes-and-new-features
What’s New in Intel® Math Kernel Library (Intel® MKL) version 2018 Update 3:
- BLAS:
- Addressed ?TRMM NaN propagation issues on Intel® Advanced Vector Extensions 512 (Intel® AVX-512) for 32-bit architectures.
- Improved performance of multithreaded {S,D}SYRK and {C,Z}HERK on small sizes for Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512).
- LAPACK:
- Added ?POTRF and ?GEQRF optimizations for the Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) instruction sets.
- Improved the performance of ?GESVD for very small square matrices (N<6).
- Improved the performance of the inverse routines ?TRTRI, ?GETRI, and ?POTRI.
- Sparse BLAS:
- Improved the performance of the SPARSE_OPTIMIZE, SPARSE_SV, and SPARSE_SYPR routines with Intel® TBB threading.
- Added support for the BSR format in the SPARSE_SYPR routine.
- Library Engineering:
- Added the ability to write MKL_VERBOSE output to a user-specified file.
- Enabled Intel® Advanced Vector Extensions 512 (Intel® AVX-512) optimizations with support for Vector Neural Network Instructions via the MKL_ENABLE_INSTRUCTIONS environment variable.
Known Limitations:
When the leading dimension of matrix A is not equal to the number of rows or columns, the MKL_?GEMM_COMPACT functions can return incorrect results on processors that do not support Intel® AVX2 or Intel® AVX-512 instructions.
I remember seeing a post about using MKL to multiply many matrices by the same matrix.
It showed that in this case you can get the performance of large-matrix multiplication even for small matrices.
Where can I find it?
You may try the batch-mode option: cblas_?gemm_batch.