Intel® oneAPI Math Kernel Library

Error in cblas_zhemm

mboisso
Beginner
Hello,
I am trying to multiply a general complex matrix by a Hermitian complex matrix. To make things simple, I made the simplest example I could think of. I have the 2x2 matrix:
(0,      2+3i)
(2-3i,   4)

This matrix is Hermitian. I store it once in full dense row-major format, and once in packed upper-triangular row-major format. I call cblas_zhemm to multiply the two matrices. The correct result is
(13,     8+12i)
(8-12i,  29)
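
Worked out explicitly (both factors are the same matrix, so the product is its square):

\begin{pmatrix} 0 & 2+3i \\ 2-3i & 4 \end{pmatrix}^{2}
= \begin{pmatrix} (2+3i)(2-3i) & 4(2+3i) \\ 4(2-3i) & (2-3i)(2+3i)+16 \end{pmatrix}
= \begin{pmatrix} 13 & 8+12i \\ 8-12i & 29 \end{pmatrix}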

Here is the code:

#include <iostream>
#include <complex>
#include "include_cblas.h"   // wrapper that pulls in the appropriate CBLAS header

using namespace std;

int main()
{
    int size = 2;
    // The Hermitian matrix, stored here as the packed upper triangle only
    // (three complex values: a00, a01, a11).
    double A[6] = { 0.0,0.0, 2.0,3.0, 4.0,0.0 };
    // The same matrix stored as a full dense 2x2 complex matrix (four complex values).
    double B[8] = { 0.0,0.0, 2.0,3.0, 2.0,-3.0, 4.0,0.0 };
    double C[8];

    complex<double> alpha(1.0), beta(0.0);

    for (int i = 0; i < 6; i++)
        cout << A[i] << " ";
    cout << endl;
    for (int i = 0; i < 8; i++)
        cout << B[i] << " ";
    cout << endl;

    // C = alpha*B*A + beta*C, with A treated as Hermitian (upper triangle referenced).
    cblas_zhemm(CblasRowMajor, CblasRight, CblasUpper, size, size,
                (void *)&alpha, A, size, B, size, (void *)&beta, C, size);

    for (int i = 0; i < 8; i++)
        cout << C[i] << " ";
    cout << endl;
}

Here is the output:
0 0 2 3 4 0
0 0 2 3 2 -3 4 0
13 0 0 0 8 -12 13 0

This means that the matrix zhemm gives me is
(13,     0)
(8-12i,  13)

which is not at all the right result!

Oh, and by the way, include_cblas.h is just a header that includes one of several CBLAS headers (the GSL one, the MKL one, or the Accelerate one) depending on a compilation flag.

Please help me!
Gennady_F_Intel
Moderator

Hi,
Please change your code as below:

//double A[6] = { 0.0,0.0,2.0,3.0,4.0,0.0 };
double A[8] = { 0.0,0.0,2.0,3.0,0.0,0.0,4.0,0.0 };

More information about the input array A of the ?hemm function can be found in the MKL documentation (page 167, MKL v.10.1).
Please let me know how it works on your side.
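
For reference, a complete corrected version of the program might look like the sketch below (an illustration only, assuming the include_cblas.h wrapper from the original post resolves to a working CBLAS implementation). The only substantive change is that A is now stored as a full 2x2 complex matrix; with CblasUpper the lower-triangle entry is never referenced and can be left as zero. Since both factors equal the same Hermitian matrix, the run should print 13 0 8 12 8 -12 29 0 for C, i.e. the expected result.

#include <iostream>
#include <complex>
#include "include_cblas.h"

using namespace std;

int main()
{
    int size = 2;
    // Full dense row-major storage: four complex values as interleaved doubles.
    // The lower-triangle entry (third complex value) is not referenced with CblasUpper.
    double A[8] = { 0.0,0.0, 2.0,3.0, 0.0,0.0, 4.0,0.0 };
    double B[8] = { 0.0,0.0, 2.0,3.0, 2.0,-3.0, 4.0,0.0 };
    double C[8];

    complex<double> alpha(1.0), beta(0.0);

    // C = alpha*B*A + beta*C; lda = 2 now matches a full 2x2 matrix.
    cblas_zhemm(CblasRowMajor, CblasRight, CblasUpper, size, size,
                (void *)&alpha, A, size, B, size, (void *)&beta, C, size);

    for (int i = 0; i < 8; i++)
        cout << C[i] << " ";
    cout << endl;
}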
--Gennady

mboisso
Beginner

Hello,
An analyst pointed this out to me yesterday. The fact that the matrix is Hermitian does not mean one can skip storing the whole matrix and keep only the upper triangle. If you want my opinion, the documentation for hemm is really UNCLEAR: it tells the user that the Hermitian matrix must store the upper (or lower) triangle, and that the strictly lower (or upper) triangle is NOT REFERENCED, which led me to the (wrong) conclusion that I could store the matrix in packed format.

By the way, it works now... but I now have to convert my packed matrices (I don't see the point of not saving half the space for Hermitian matrices...) to an unpacked version.
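
A small helper for that conversion might look like the following sketch (a hypothetical utility, not part of MKL, assuming the same row-major packed-upper layout as the A[6] array above, i.e. the upper triangle stored row by row as complex values):

#include <complex>
#include <vector>

// Hypothetical helper: expand a row-major packed upper-triangular Hermitian
// matrix (n*(n+1)/2 complex values, upper triangle stored row by row) into a
// full dense row-major n x n complex matrix.
std::vector<std::complex<double>>
unpack_upper_hermitian(const std::vector<std::complex<double>>& packed, int n)
{
    std::vector<std::complex<double>> full(n * n);
    int k = 0;                                        // index into the packed array
    for (int i = 0; i < n; ++i) {
        for (int j = i; j < n; ++j) {
            full[i * n + j] = packed[k];              // upper triangle
            full[j * n + i] = std::conj(packed[k]);   // mirrored lower triangle
            ++k;
        }
    }
    return full;
}

The resulting array can be handed to cblas_zhemm directly: an array of std::complex<double> has the same interleaved real/imaginary layout as the plain double arrays used above, so full.data() can be passed as the A argument.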

Also, I tested hemm against gemm, and I didn't notice any significant improvement in the speed.
Gennady_F_Intel
Moderator

1. If it works, that's fine!
2. If you have an issue with MKL's documentation,
I would recommend you submit the issue against MKL to Premier support (https://premier.intel.com/).
3. << I tested hemm against gemm, and I didn't notice any significant improvement in the speed. >>
Did you expect significant improvements from hemm vs. gemm?

mboisso
Beginner

2. If you have an issue with MKL's documentation,
I would recommend you submit the issue against MKL to Premier support (https://premier.intel.com/).
Currently, this page gives me an internal server error. I will try again later.
3. << I tested hemm against gemm, and I didn't notice any significant improvement in the speed. >>
Did you expect significant improvements from hemm vs. gemm?
Well, of course not for 2x2 matrices. However, I tried 1000x1000 matrices and expected an improvement there. Since hemm knows the matrix is Hermitian, it has more information about it and should be able to do things faster than with a general matrix. If it does not, what is this function supposed to be useful for? (A rough way to time the two routines is sketched below.)
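
For what it's worth, a minimal timing comparison along these lines might look like the sketch below. It assumes the include_cblas.h wrapper resolves to MKL, uses the identity as the Hermitian factor, and times a single call of each routine, so it is a rough sketch rather than a rigorous benchmark. Note that hemm still performs roughly the same number of floating-point operations as gemm; its main practical advantage is that only one triangle of the Hermitian matrix has to be filled in and read, so a large speedup over gemm is not necessarily to be expected.

#include <chrono>
#include <complex>
#include <iostream>
#include <vector>
#include "include_cblas.h"   // wrapper from the original post, assumed to resolve to MKL

int main()
{
    const int n = 1000;
    std::complex<double> alpha(1.0), beta(0.0);

    // A: Hermitian factor (the identity here, just for timing); B: general factor.
    std::vector<std::complex<double>> A(n * n), B(n * n, {1.0, 1.0}), C(n * n);
    for (int i = 0; i < n; ++i)
        A[i * n + i] = 1.0;

    auto t0 = std::chrono::steady_clock::now();
    // C = A*B with A Hermitian; only the upper triangle of A is referenced.
    cblas_zhemm(CblasRowMajor, CblasLeft, CblasUpper, n, n,
                &alpha, A.data(), n, B.data(), n, &beta, C.data(), n);
    auto t1 = std::chrono::steady_clock::now();
    // The same product computed as a general matrix-matrix multiply.
    cblas_zgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, n, n, n,
                &alpha, A.data(), n, B.data(), n, &beta, C.data(), n);
    auto t2 = std::chrono::steady_clock::now();

    std::cout << "zhemm: " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
              << "zgemm: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
}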