Intel® oneAPI Math Kernel Library

Packed GEMM APIs and dynamic batch size

guillaumekln

Hi,

I'm interested in further optimizing my application using the packed GEMM API. However, I'm unclear on how it behaves when the batch size is dynamic. For example:

  • X, the input of shape [M, K] where M is the batch size
  • W, the weight of shape [N, K]

The GEMM function should compute X*Wᵀ, where W can be packed as it remains constant.

How does a change in M affect the packed representation of W? Do the cblas_gemm_*_compute functions silently repack W if any of M, N, or K changes, or should the repacking be done manually?
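
For reference, here is a minimal sketch of the calling pattern I have in mind (single precision; the concrete dimension values, and passing an upper bound M_max as the m argument of the pack call, are just assumptions on my part):

    #include <mkl.h>

    /* C = X * W^T, with X of shape [M, K] and W of shape [N, K], row-major.
       W is packed once and then reused while M changes between calls. */
    int main(void) {
        const MKL_INT N = 512, K = 256;
        const MKL_INT M_max = 64;   /* assumed upper bound on the batch size */

        float *W = mkl_malloc(sizeof(float) * N * K, 64);
        float *X = mkl_malloc(sizeof(float) * M_max * K, 64);
        float *C = mkl_malloc(sizeof(float) * M_max * N, 64);
        /* ... fill W and X ... */

        /* Pack W as the B matrix of the GEMM. The pack call takes the full
           (m, n, k) of the problem; I pass M_max for m, which is exactly the
           part I am unsure about. */
        size_t packed_size = cblas_sgemm_pack_get_size(CblasBMatrix, M_max, N, K);
        float *W_packed = mkl_malloc(packed_size, 64);
        cblas_sgemm_pack(CblasRowMajor, CblasBMatrix, CblasTrans,
                         M_max, N, K, 1.0f, W, K, W_packed);

        /* Later, compute with a smaller, dynamic batch size M <= M_max. */
        MKL_INT M = 17;
        cblas_sgemm_compute(CblasRowMajor, CblasNoTrans, CblasPacked,
                            M, N, K, X, K, W_packed, K, 0.0f, C, N);

        mkl_free(W_packed);
        mkl_free(C);
        mkl_free(X);
        mkl_free(W);
        return 0;
    }

In other words, can I keep W_packed from the pack call above and vary only M between cblas_sgemm_compute calls, or does the packed buffer depend on M as well?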

Thanks,

Guillaume
