Intel® oneAPI Math Kernel Library

Optimizing matrix multiplication algorithm on Intel Xeon Gold (DevCloud)

RishabhKum_J_Intel
Hi,
 
I am working on Case #03357624 - Benchmarking algorithms on Intel Xeon Gold (DevCloud):
 
Summary:
The concern is the time overhead observed while running the compiled mmatest1.c attached to this article: Performance of Classic Matrix Multiplication Algorithm on Intel® Xeon Phi™ Processor System | Intel® Software
 
Observation:
The first loop iteration takes a very long time, and the second iteration is also comparatively slow; the remaining iterations take similar times.
I ran the code with a loop count of 16 and a matrix size of 256 and got the following result for each iteration:
        MKL:
        MKL  - Completed 1 in: 0.2302730 seconds
        MKL  - Completed 2 in: 0.0001534 seconds
        MKL  - Completed 3 in: 0.0001267 seconds
        MKL  - Completed 4 in: 0.0001275 seconds
        ..................
        MKL  - Completed 15 in: 0.0001280 seconds
        MKL  - Completed 16 in: 0.0001347 seconds
 
        CMMA:
        CMMA - Completed 1 in: 0.0504993 seconds
        CMMA - Completed 2 in: 0.0003169 seconds
        CMMA - Completed 3 in: 0.0001666 seconds
        CMMA - Completed 4 in: 0.0001687 seconds
        ................
        CMMA - Completed 15 in: 0.0001638 seconds
        CMMA - Completed 16 in: 0.0001636 seconds
 
The time taken by the first iteration should be due to warm-up (the initial process of loading data into the caches, populating the Translation Look-aside Buffer (TLB), etc.).
 
=> I need advice on, and confirmation of, the following questions and the answers I have reached as per my understanding:
1) Should the first result (the time taken by the first loop iteration) be included in the time estimate while benchmarking?
Ans I have) No, it should be excluded.
Further Q) Why does the second iteration take more time than the following iterations? Should it also be excluded from the benchmark? How many initial iterations should we exclude from the time estimate?
 
2) Is the overhead primarily due to cache misses or to warm-up time?
Ans I have) It is due to warm-up time. If we use large matrices, cache misses will also come into effect.
Further Q) As per the user, it is due to cache misses. How can cache misses be the cause of the initial overhead when the cache holds no data yet? Is "warm-up" not the right term instead?
 
3) If it is indeed cache misses, how can he work on that? He assumed the matrices are always stored in row-major format, and thus cache misses would be avoided if he accessed them in the same order.
Ans I have) That is correct; we should access the data in row-major order. The data layout in memory and the data access pattern should be kept the same as far as possible. Possible solutions (if the matrix is large) are listed below, with a sketch after the list:
a) Transpose matrix B so that it, too, is accessed in row-major order.
b) Use the loop blocking optimization technique (LBOT) with a block size equal to the virtual page size.
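A minimal C sketch of (a) and (b), assuming square float matrices of order 256 in row-major storage; the names N, BS, transpose() and matmul_blocked() are illustrative and are not taken from mmatest1.c. BS must divide N here and should be tuned to the target cache level:

#include <stdio.h>
#include <stdlib.h>

#define N  256          /* matrix order used in the test above     */
#define BS 64           /* block (tile) size; tune to the cache    */

/* (a) Transpose B so the innermost loop walks both operands row-major. */
static void transpose(const float *B, float *Bt, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            Bt[j * n + i] = B[i * n + j];
}

/* (b) Classic triple loop with loop blocking (tiling); computes
 * C += A * B using the transposed copy Bt.                            */
static void matmul_blocked(const float *A, const float *Bt, float *C, int n)
{
    for (int ii = 0; ii < n; ii += BS)
        for (int jj = 0; jj < n; jj += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int i = ii; i < ii + BS; i++)
                    for (int j = jj; j < jj + BS; j++) {
                        float sum = 0.0f;
                        for (int k = kk; k < kk + BS; k++)
                            sum += A[i * n + k] * Bt[j * n + k];
                        C[i * n + j] += sum;
                    }
}

int main(void)
{
    float *A  = calloc((size_t)N * N, sizeof *A);
    float *B  = calloc((size_t)N * N, sizeof *B);
    float *Bt = malloc((size_t)N * N * sizeof *Bt);
    float *C  = calloc((size_t)N * N, sizeof *C);

    for (int i = 0; i < N * N; i++) { A[i] = 1.0f; B[i] = 2.0f; }

    transpose(B, Bt, N);
    matmul_blocked(A, Bt, C, N);
    printf("C[0][0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);

    free(A); free(B); free(Bt); free(C);
    return 0;
}

Blocking keeps a BS x BS tile of each operand resident in cache across the inner loops, and the transposed copy of B makes the innermost loop stride-1 over both A and Bt.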
 
4) How can cblas_sgemm() be debugged, or where can its source code be found so that it can be stepped through with gdb?
 
Please advise.
Thanks and regards,
Rishabh Kumar Jain
RishabhKum_J_Intel

Hi,

Could I please have an update on this?

Thanks and regards,

Rishabh Kumar Jain

Ying_H_Intel
Employee

Hi Jain,

I will contact you by email. For the question itself, the developer may refer to:

https://software.intel.com/en-us/articles/a-simple-example-to-measure-the-performance-of-an-intel-mkl-function

The Intel® Math Kernel Library (Intel® MKL) is multi-threaded and employs internal buffers for fast memory allocation. Typically the first subroutine call initializes the threads and internal buffers. Therefore, the first function call may take more time compared to the subsequent calls with the same arguments. Although the initialization time is usually insignificant compared to the execution time of SGEMM for large matrices, it can be substantial when timing SGEMM for small matrices. To remove the initialization time from the performance measurement, we recommend making a call to SGEMM with sufficiently large parameters (for example, M=N=K=100) and ignoring the time required for the first call. Using a small matrix for the first call won't initialize the threads, since Intel MKL executes multi-threaded code only for sufficiently large matrices.
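For reference, a minimal C sketch of that measurement pattern, assuming Intel MKL is installed and linked; the matrix size and loop count simply mirror the 256/16 test above. dsecnd(), mkl_malloc() and mkl_free() are declared in mkl.h:

#include <stdio.h>
#include <mkl.h>

#define SIZE  256
#define LOOPS 16

int main(void)
{
    const MKL_INT n = SIZE;
    float *A = (float *)mkl_malloc((size_t)n * n * sizeof(float), 64);
    float *B = (float *)mkl_malloc((size_t)n * n * sizeof(float), 64);
    float *C = (float *)mkl_malloc((size_t)n * n * sizeof(float), 64);

    for (MKL_INT i = 0; i < n * n; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    /* Warm-up call: initializes the MKL threads and internal buffers.
     * Its time is deliberately not recorded.                          */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);

    /* Timed calls with the same arguments. */
    for (int it = 1; it <= LOOPS; it++) {
        double t0 = dsecnd();
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, A, n, B, n, 0.0f, C, n);
        double t1 = dsecnd();
        printf("MKL  - Completed %2d in: %.7f seconds\n", it, t1 - t0);
    }

    mkl_free(A); mkl_free(B); mkl_free(C);
    return 0;
}

Here the warm-up call uses the same 256x256 matrices, which is above the M=N=K=100 size mentioned above, so the threads and buffers are initialized before the timed loop starts.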

Best Regards,
Ying