- Intel Community
- Software Development SDKs and Libraries
- Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library
- Multiplication of two sparse matrices and one dense matrix


tinastone

Beginner


06-28-2012
02:56 PM

18 Views

Multiplication of two sparse matrices and one dense matrix

I want the product of 3 matrices, A, B, C:

A*B*C

A and C are a row vector and a column vector, respectively, that both need to be treated as diagonal matrices. B is dense. Thus, I want to use the mkl_?csrmm function, as suggested here.

I'm fine for the A*B part. I'm hung up on the B*C part: as I understand it, the mkl_?csrmm functions take the left matrix as the sparse one, while the right is dense. I'm OK enough with linear algebra to remember that matrix multiplication is not commutative. How, then, can I get the product B*C using the mkl_?csrmm function?
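To make the constraint concrete, here is a small numpy sketch of the situation (pure numpy stand-ins, not MKL calls). One identity that might help with a left-sparse-only routine is (B*C)^T = C^T * B^T, which puts C back on the left:

```python
import numpy as np

# Illustrative shapes only: B is dense, C is the matrix that would be
# stored in a sparse format.
rng = np.random.default_rng(0)
B = rng.random((3, 4))
C = rng.random((4, 2))

# The product we actually want: B*C.
direct = B @ C

# Workaround when the routine only accepts the sparse operand on the
# left: compute C^T * B^T, then transpose, since (B*C)^T = C^T * B^T.
via_transpose = (C.T @ B.T).T

assert np.allclose(direct, via_transpose)
```

This sketch only demonstrates the algebraraic identity; whether the transpose variant is worth it in MKL depends on the cost of the transposes.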

2 Replies


Ying_H_Intel

Employee


06-28-2012
08:17 PM

18 Views

It looks right: there is no function for B*C where B is dense and C is CSR.

And your understanding is correct that:

1) the mkl_?csrmm functions take the left matrix as the sparse one, while the right is dense. The formula is

`C` := `alpha`*`A'`*`B` + `beta`*`C`

2) matrix multiplication is not commutative.

But I am a little confused about the format of A and C: you mentioned they are vectors but need to be treated as diagonal matrices. Why do you need to treat the vectors as matrices? Could you please give some explanation of their format?

If you use mkl_?csrmm on A*B, then please note that A needs to be in CSR sparse matrix format.

If we let M = A*B, then M is a dense matrix, and you will compute M*C:

A. If C is in CSC sparse matrix format, convert it to a dense matrix with mkl_?dnscsr and do a dense matrix multiply with ?gemm.

B. If C is a vector, you can use ?gemv, which computes a matrix-vector product.
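A minimal numpy sketch of option B, with stand-in names (in MKL, step 1 would map to mkl_?csrmm with A in CSR format and step 2 to ?gemv):

```python
import numpy as np

# Pure-numpy stand-ins for the two-step plan above.
rng = np.random.default_rng(1)
a = rng.random(3)
A = np.diag(a)            # stands in for the sparse (CSR) left operand
B = rng.random((3, 3))    # dense matrix
c = rng.random(3)         # C kept as a plain vector

M = A @ B                 # step 1: sparse * dense  -> dense M  (mkl_?csrmm)
result = M @ c            # step 2: dense matrix * vector       (?gemv)
```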

Best Regards,

Ying


tinastone

Beginner


06-29-2012
02:35 PM

18 Views

Thank you for your answer. What I am trying to do is to weight an array of data to compensate for Poissonian noise. The formula for the weighted data is:

D' = R^(1/2)*D*C^(1/2)

where R is the row mean of the array, D is the unweighted data, and C is the column mean of the array. R and C as calculated are column and row vectors, respectively. Please correct me if I'm wrong, but I don't think that multiplying D by a column or row vector is the same as multiplying it by a diagonal matrix that has the elements of that vector along its diagonal: multiplying by the vector effectively yields the sum over one dimension of the matrix you would get by multiplying the diagonal matrix by the original matrix.

In effect, I think I need to element-wise multiply each element of the column vector by the corresponding row of the matrix, and then do the analogous operation for the row vector: element-wise multiplication of each row vector element by the corresponding column of the input matrix.

I am only familiar with this in Python (numpy), where they call this kind of operation broadcasting.
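In numpy terms, the equivalence I mean looks like this (r and c are illustrative weight vectors standing in for R^(1/2) and C^(1/2)):

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.random((3, 4))   # unweighted data
r = rng.random(3)        # per-row weights (the column vector)
c = rng.random(4)        # per-column weights (the row vector)

# Diagonal-matrix formulation: diag(r) * D * diag(c).
via_diag = np.diag(r) @ D @ np.diag(c)

# Broadcasting formulation: scale each row by r and each column by c.
via_broadcast = r[:, None] * D * c[None, :]

assert np.allclose(via_diag, via_broadcast)
```

The broadcasting form avoids materializing the diagonal matrices at all, which is why I would prefer an element-wise route in MKL too.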

I will look for equivalents in MKL.

For more complete information about compiler optimizations, see our Optimization Notice.