D_Q__Y_
Beginner
102 Views

A question about comparing DSYMM and DGEMM in the BLAS package

I want to compare the performance of DSYMM and DGEMM in computing the matrix product C = A*B, where A is a double-precision symmetric matrix.

I use two different versions of code.

The first version is:

!time DGEMM; values(6:8) from DATE_AND_TIME hold minutes, seconds, milliseconds
call date_and_time(date,time,zone,values1)
call dgemm('n','n',n,n,n,1.0d0,a,n,b,n,0.0d0,c,n)
call date_and_time(date,time,zone,values2)
!elapsed time in milliseconds (hour rollover is ignored)
time_ms1=values2(8)-values1(8)
time_ms1=1000*(values2(7)-values1(7))+time_ms1
time_ms1=60*1000*(values2(6)-values1(6))+time_ms1

!time DSYMM (symmetric A1, upper triangle stored, multiplied from the left)
call date_and_time(date,time,zone,values1)
call dsymm('L','U',n,n,1.0d0,a1,n,b1,n,0.0d0,c1,n)
call date_and_time(date,time,zone,values2)
time_ms2=values2(8)-values1(8)
time_ms2=1000*(values2(7)-values1(7))+time_ms2
time_ms2=60*1000*(values2(6)-values1(6))+time_ms2

!print out the times
print*,time_ms1,time_ms2
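As an aside on methodology, timer noise can be reduced by warming up once and averaging over several repetitions. A minimal, self-contained sketch of that pattern for the DGEMM case, using SYSTEM_CLOCK instead of DATE_AND_TIME (the program name, n = 600, nrep, and the random matrix setup are illustrative assumptions, not code from this thread):

```fortran
program bench_gemm
  implicit none
  integer, parameter :: n = 600, nrep = 10
  real(8) :: a(n,n), b(n,n), c(n,n)
  integer(8) :: t0, t1, rate
  integer :: i

  call random_number(a)
  call random_number(b)
  a = 0.5d0*(a + transpose(a))   ! make A symmetric, as in the question

  ! warm-up call: lets the runtime spawn threads and warm the caches
  call dgemm('n','n',n,n,n,1.0d0,a,n,b,n,0.0d0,c,n)

  call system_clock(t0, rate)
  do i = 1, nrep                 ! average over nrep timed calls
     call dgemm('n','n',n,n,n,1.0d0,a,n,b,n,0.0d0,c,n)
  end do
  call system_clock(t1)

  print *, 'avg DGEMM time (ms):', 1000.0d0*real(t1-t0,8)/(real(rate,8)*nrep)
end program bench_gemm
```

The same loop with dsymm('L','U',...) in place of dgemm gives a comparable average for the symmetric routine.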

 

Unlike the first version, in the second version I call DGEMM/DSYMM once as a warm-up before timing them.

In detail, the second version is:

call dgemm('n','n',n,n,n,1.0d0,a,n,b,n,0.0d0,c,n)   !warm-up call, not timed

call date_and_time(date,time,zone,values1)
call dgemm('n','n',n,n,n,1.0d0,a,n,b,n,0.0d0,c,n)
call date_and_time(date,time,zone,values2)
time_ms1=values2(8)-values1(8)
time_ms1=1000*(values2(7)-values1(7))+time_ms1
time_ms1=60*1000*(values2(6)-values1(6))+time_ms1

call dsymm('L','U',n,n,1.0d0,a1,n,b1,n,0.0d0,c1,n)  !warm-up call, not timed

call date_and_time(date,time,zone,values1)
call dsymm('L','U',n,n,1.0d0,a1,n,b1,n,0.0d0,c1,n)
call date_and_time(date,time,zone,values2)
time_ms2=values2(8)-values1(8)
time_ms2=1000*(values2(7)-values1(7))+time_ms2
time_ms2=60*1000*(values2(6)-values1(6))+time_ms2
print*,time_ms1,time_ms2

 

When I set n in the code to 600 and run the calculations on a 12-core Dell PC, the first version reports ~35 ms for DGEMM and ~11 ms for DSYMM, indicating that DSYMM is faster than DGEMM.

But in the second version the times for DGEMM and DSYMM are not very stable: sometimes they are 21 ms vs. 23 ms, sometimes 7 ms vs. 8 ms. So in the second version, DGEMM and DSYMM have similar performance.

I am confused about why the two versions lead to different conclusions. Why are the times reported by the second version unstable? And which result is the right answer to the question of which routine performs better for matrix multiplication involving a symmetric matrix?

5 Replies
Ying_H_Intel
Employee

Hi D.Q. Y.,

Could you please let us know some test details, like the MKL version, 32-bit or 64-bit, the hardware configuration (e.g., whether HT is enabled), and whether this is Windows or Linux? Or provide us the whole code.

There are some performance tips in the MKL user guide: https://software.intel.com/en-us/node/528551. You may try some of these and see if anything changes, e.g.:

set KMP_AFFINITY=granularity=fine,compact,1,0

Best Regards,

Ying 


D_Q__Y_
Beginner

Ying H (Intel) wrote:

Hi D.Q. Y.

Could you please let us know some test details, like the MKL version, 32-bit or 64-bit, the hardware configuration (e.g., whether HT is enabled), and whether this is Windows or Linux? Or provide us the whole code.

There are some performance tips in the MKL user guide: https://software.intel.com/en-us/node/528551. You may try some of these and see if anything changes, e.g.:

set KMP_AFFINITY=granularity=fine,compact,1,0

Best Regards,

Ying 


OK, thanks, I will try KMP_AFFINITY=granularity=fine,compact,1,0.

I run the calculations on a 64-bit Linux system with 12 cores; the CPU is an Intel(R) Xeon(R) X5650 @ 2.67 GHz.

I use version 13.1.3 of the Fortran compiler with the corresponding 64-bit MKL library, linking against the libmkl_rt.so file in the MKL directory. Furthermore, I compile the code with the -O2 option.
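For reference, with that setup the compile-and-link line would look something like the following (the source file name is an illustrative assumption; -lmkl_rt matches the single-dynamic-library linking described above):

```shell
# Build with ifort at -O2, linking MKL's single dynamic library
ifort -O2 bench.f90 -L${MKLROOT}/lib/intel64 -lmkl_rt -o bench
```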


TimP
Black Belt

Ying's point about setting affinity to get consistent performance is well taken. Not only is the platform a dual-CPU system, it also has the unusual feature of asymmetry among cores in the performance of their cache connections.

I would hope that OMP_PLACES=cores works with the more recent compilers which support it. It's difficult to remember all the details of KMP_AFFINITY as it applies to these Westmere CPUs, with and without HT enabled.

By default, MKL should use one thread per core even if HT is enabled, but it will not pin threads to a CPU or core without a user affinity setting. Performance variations as large as those mentioned would appear to involve threads migrating among CPUs.
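Putting the suggestions in this thread together, an environment setup for more stable timings on this 12-core, dual-socket machine might look like the following (the thread count and benchmark binary name are illustrative assumptions; use either OMP_PLACES or KMP_AFFINITY depending on the runtime):

```shell
export MKL_NUM_THREADS=12                          # one thread per physical core
export OMP_PLACES=cores                            # newer OpenMP runtimes
export KMP_AFFINITY=granularity=fine,compact,1,0   # older Intel runtimes
./bench                                            # hypothetical benchmark binary
```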