Intel® oneAPI Math Kernel Library

Is there a problem with hemm()?


If I multiply two complex diagonal 2x2 matrices with chemm(), I don't get a diagonal matrix!

Here is the code for C = A.B.

Matrix A is stored as an upper-triangle Hermitian matrix.

#include <cstdlib>
#include <iostream>
#include <cstdio>
#include <complex>
#include <math.h>

#define MKL_Complex8 std::complex<float>
#define MKL_INT int
#include "mkl.h"
using namespace std;
typedef std::complex<float> Comp;

int main(int argc, char** argv)
{
    CBLAS_ORDER order = CblasColMajor;
    CBLAS_SIDE Left = CblasLeft;
    CBLAS_UPLO Uplo = CblasUpper;
    Comp one = 1;
    Comp zero = 0;
    Comp A[3];  // only the upper triangle of A is stored
    Comp B[4];
    Comp C[4];

    cblas_chemm(order, Left, Uplo, 2, 2, &one, A, 2, B, 2, &zero, C, 2); // C = A.B

    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            cout << C[i + 2 * j] << "  ";
    return 0;
}






Black Belt

The convention is that you must allocate space for the entire matrix, even though you may fill in only the upper or lower triangle. There is no special provision for diagonal matrices.

Therefore, correct your code by changing the declaration of A to A[4] and setting the values as follows:

[cpp]A[0] = 1.0; A[1] = 0.0; A[2] = 0.0; A[3] = 1.0;
B[0] = 2.0; B[1] = 0.0; B[2] = 0.0; B[3] = 1.0;[/cpp]


Thanks Mecej4.

I have a comment:

What is the point of storing the whole matrix in order to use hemm() when one could use gemm() to do the same thing? Isn't the main advantage of the symmetric routines that they save memory by storing only half the matrix?

Black Belt

The documentation for ?hemm is clear in stating that the input matrices are stored in dense (full) form. For the special case where the matrix is Hermitian, for your convenience the interface provides for filling in only the upper or lower triangle. You, on the other hand, want to use packed storage. While there are routines that accept Hermitian and triangular matrices in packed storage format, ?hemm is not such a routine. To help you remember which format is applicable, routines accepting packed matrices have a 'p' in their names (chpmv, for example).

Along similar lines, a routine whose specification states that an input matrix should be in packed format would not work if the matrix were passed to it in full format.

Saving memory is one issue. User convenience, as noted above, and efficiency are also important considerations, and the designers of BLAS/LAPACK had to strike a balance between conflicting requirements. The people involved were the best and brightest of their day, and while we may not understand or have at hand the deliberations that guided their design decisions, I wouldn't dare accuse them of indulging in "not good practice".