Intel® oneAPI Math Kernel Library

Provide integer type BLAS routines?

Adam_S_5
Beginner

Hi,

Is there any plan to provide BLAS routines for integer types in MKL?  GEMM is the prime example I'm thinking of.

There is a small but growing literature in deep learning on the use of reduced precision (relative to float32) to speed up these computations.  A few manual implementations using Intel's vectorized integer math instructions have been published and have demonstrated good computational speedup (which to me is the important part, rather than the memory savings).  This seems like an excellent opportunity for Intel MKL to lead. :)  I would gladly use float16 if that were an option, but 8-bit values are likely to be useful as well, at least for fast inference.

Perhaps the main difficulty relative to floating-point GEMM is handling overflow?  Prior deep learning work has sorted out at least one reasonable solution, which is to accumulate the result in extended precision (e.g. int8 inputs -> int16 accumulation) and then let the user decide how to round back to the input precision, if desired (a toy sketch of what I mean is below).  The risk of saturation in intermediate computation might just have to be accepted by the user?
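For concreteness, something roughly like this is what I have in mind.  It is purely illustrative and untuned, the names are made up (not any real MKL interface), and I've used an int32 accumulator here even though int16 might suffice for small K:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy sketch: multiply int8 matrices, accumulate in a wider integer
 * type, and leave the narrowing step entirely to the caller.
 * All names and conventions here are illustrative only. */
void gemm_s8_acc32(size_t m, size_t n, size_t k,
                   const int8_t *A,   /* m x k, row-major */
                   const int8_t *B,   /* k x n, row-major */
                   int32_t *C)        /* m x n, row-major, wide accumulator */
{
    for (size_t i = 0; i < m; ++i) {
        for (size_t j = 0; j < n; ++j) {
            int32_t acc = 0;  /* wide accumulation avoids overflow of int8 products */
            for (size_t p = 0; p < k; ++p)
                acc += (int32_t)A[i * k + p] * (int32_t)B[p * n + j];
            C[i * n + j] = acc;
        }
    }
}

/* Caller-side narrowing back to int8 with saturation; the shift (scale)
 * and rounding policy are whatever the application decides. */
static inline int8_t narrow_to_s8(int32_t x, int shift)
{
    int32_t y = x >> shift;   /* assuming arithmetic shift for negatives */
    if (y > 127)  y = 127;
    if (y < -128) y = -128;
    return (int8_t)y;
}
```

The narrowing step at the end is the part I would leave entirely up to the caller.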

Otherwise it seems like the blocking and cache management would be the same as for existing GEMM (see the loop-nest sketch below).  But I'm no expert... is there something more fundamental that makes this a bad idea, or particularly difficult to implement relative to floating point?
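By "the same blocking" I mean the usual cache-blocked loop nest one would write for SGEMM, just with an integer micro-kernel.  A rough sketch, with placeholder block sizes and the same int32 accumulation as above (again, my assumptions, not anything MKL actually does):

```c
#include <stdint.h>
#include <stddef.h>

/* Placeholder block sizes, not tuned values. */
#define MC 64
#define NC 64
#define KC 128

/* Same loop structure as a blocked float GEMM; C is assumed to be
 * zeroed before the call, since partial products over the K blocks
 * are accumulated into it. */
void gemm_s8_blocked(size_t m, size_t n, size_t k,
                     const int8_t *A, const int8_t *B, int32_t *C)
{
    for (size_t jc = 0; jc < n; jc += NC)
        for (size_t pc = 0; pc < k; pc += KC)
            for (size_t ic = 0; ic < m; ic += MC) {
                size_t nb = (jc + NC <= n) ? NC : n - jc;
                size_t kb = (pc + KC <= k) ? KC : k - pc;
                size_t mb = (ic + MC <= m) ? MC : m - ic;
                /* Inner micro-kernel: identical in shape to the float
                 * case, but accumulating int8 products into int32. */
                for (size_t i = 0; i < mb; ++i)
                    for (size_t j = 0; j < nb; ++j) {
                        int32_t acc = C[(ic + i) * n + (jc + j)];
                        for (size_t p = 0; p < kb; ++p)
                            acc += (int32_t)A[(ic + i) * k + (pc + p)]
                                 * (int32_t)B[(pc + p) * n + (jc + j)];
                        C[(ic + i) * n + (jc + j)] = acc;
                    }
            }
}
```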

Thoughts?  

(I could re-post this in the MKL-DNN GitHub repo?  But since GEMM is proprietary in MKL, I would imagine integer GEMM would be too.)

Thank you,

Adam 

3 Replies
Shane_S_Intel
Employee

Great question ... please take a look at this recent presentation (www.netlib.org/utk/people/JackDongara/WEB-PAGES/Batched-BLAS-2017/talk12-gurney.pdf) from the February 2017 Workshop on Batched, Reproducible, and Reduced Precision BLAS; the main workshop site is here: (www.netlib.org/utk/people/JackDongarra/WEB-PAGES/Batched-BLAS-2017/).

Adam_S_5
Beginner

Wow!  Great to see a lot of work is already going into this.  Looking forward to MKL 2018!!

Gennady_F_Intel
Moderator

Adam, we will post an announcement when the MKL 2018 beta becomes available. Then you can take it and evaluate how it works.
