Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

sparse matrix - sparse matrix multiplication

pradalunga
Beginner

Hi,

I want to compute the product of two sparse matrices, simply C=A*B where A, B and C are all sparse. I use sparse storage because I'm working with very large data and cannot hold dense matrices in memory. Are there any MKL functions that do this operation? Maybe there's a trivial solution, but I'm a beginner at programming.

Thanks a lot,

Cristian

Tabrez_Ali
Beginner
Intel_C_Intel
Employee

Actually, MKL has sparse versions of all the BLAS levels, including matrix-matrix multiplication support. There is support for a number of different sparse formats, including compressed sparse row and column, skyline, diagonal, coordinate, and block sparse row. Check the BLAS section of the manual for more information on how to use these routines.

Bruce

pradalunga
Beginner

Thank you Bruce,

I read the manual before posting, and the Level 3 Sparse BLAS functions only deal with one sparse matrix and one dense matrix. For example, mkl_dcsrmm computes the matrix-matrix product of a sparse matrix stored in the CSR format,

C := alpha*A*B + beta*C

with A sparse, but B and C dense matrices.
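
For reference, a call might look roughly like this (a minimal sketch based on the manual's description; the small 3x3 matrix, its values, and the zero-based/row-major conventions are made up for illustration and worth checking against the reference):

// Sketch only: C := alpha*A*B + beta*C with A in 3-array CSR (zero-based),
// B and C dense. Values below are arbitrary illustration data.
#include <mkl.h>
#include <cstdio>

int main()
{
    // A = [1 0 2; 0 3 0; 4 0 5], stored in CSR with zero-based indices
    double  val[]   = {1.0, 2.0, 3.0, 4.0, 5.0};
    MKL_INT indx[]  = {0, 2, 1, 0, 2};   // column indices
    MKL_INT pntrb[] = {0, 2, 3};         // start of each row in val/indx
    MKL_INT pntre[] = {2, 3, 5};         // end of each row

    MKL_INT m = 3, n = 2, k = 3;         // A is m x k, B is k x n, C is m x n
    double  alpha = 1.0, beta = 0.0;
    double  B[3 * 2] = {1, 0, 0, 1, 1, 1};   // row-major (zero-based indexing)
    double  C[3 * 2] = {0};

    char transa = 'N';
    char matdescra[7] = "G  C  ";        // 'G' = general matrix, 'C' = zero-based

    mkl_dcsrmm(&transa, &m, &n, &k, &alpha, matdescra,
               val, indx, pntrb, pntre, B, &n, &beta, C, &n);

    for (int i = 0; i < m; ++i)
        std::printf("%g %g\n", C[i * n + 0], C[i * n + 1]);
    return 0;
}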

Is there a routine that computes the product of two sparse matrices without converting one of them to dense? Or do you know an efficient way to do this product?

Cris

Intel_C_Intel
Employee

Sorry for assuming that you might have missed the information in the manual.

Your problem is interesting. I don't know what is available, in general. Certainly MKL does not have anything like this. Can you indicate what kind of problem requires this functionality and what the matrix characteristics are? For instance, it seems to me that the sparse row entries of A would have to have corresponding sparse column entries in B.

Bruce

ascotti
Beginner
I stumbled onto this post while doing some research on the subject. A potential application, in fact the one I am working on, is the generation of the sparse matrix representation of discretized differential operators arising from PDEs. It is much easier to assemble the final operator as a product of simple operations (e.g., one-sided derivatives) than to write the operator from scratch. amub (and aplb) from SPARSKIT2 (see above) work well from this point of view, but it would be nice to have an optimized version. For instance, for many PDEs the sparse matrices are almost always banded, which I think could be exploited to optimize the multiplication process...
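
As a small illustration of the "product of simple operations" idea (my own example, not from the original post): with the backward difference (D- u)(j) = (u(j) - u(j-1))/h and the forward difference (D+ v)(j) = (v(j+1) - v(j))/h, their product at interior points is (D+ D- u)(j) = (u(j+1) - 2*u(j) + u(j-1))/h^2, the standard second difference, so the discrete Laplacian can be assembled with a single sparse-sparse multiply such as SPARSKIT's amub instead of being coded entry by entry.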

Thank you
Alberto

Intel_C_Intel
Employee

Alberto,

Thanks. I will have our Sparse BLAS expert take a look at this issue.

Bruce

Dishaw__Jim
Beginner
It would be interesting to see if anything comes of this. I've looked at SPARSKITv2 because a sparse x sparse matrix multiply comes up in my code.
Sergey_K_Intel1
Employee

Dear Alberto

I'm an MKL developer. The Sparse BLAS routines you mention are under development now, because they have been requested by many other customers. We are looking for real-life tests, so we would be much obliged if you could provide such an example. It can be done by submitting a QuAD report.

Thanks in advance

Sergey

ascotti
Beginner
Sergey,
you can download (if your employer allows it) a package called ellikit that I have developed, which makes use of binary sparse-matrix operations (multiplications and additions). The code is rather tersely documented, but you should be able to see quickly how and where amub and aplb are used...
The tar ball (ellikit.tgz) can be downloaded from

ftp://ftp.unc.edu/pub/marine/ascotti

On a related note, I have a performance question. Dealing with banded matrices (symmetric and not), is it faster to use the MKL BLAS routines for banded matrices or the ones for sparse matrices? Also, to LU-factorize such a matrix (symmetric), am I better off with PARDISO or should I use the BLAS routines?


Thank you

Alberto

ascotti
Beginner
Sergey,
let me elaborate on the last issue. What I have are matrices with nonzero entries on the main diagonal and on a subset of super- and sub-diagonals; that is, the nonzero elements are a(j+m,j), with m = ..., -m3, -m2, -m1, 0, m1, m2, m3, .... So standard banded storage is not really an option, since it would waste a lot of space on zero entries, yet relative to arbitrary sparse matrices these matrices possess a regularity that I suspect can be exploited to speed up the calculation of, say, a matrix-vector product. Poking around the MKL reference manual, I see that there is a compact diagonal storage format. So my actual question is: how much faster is sparse matrix-vector multiplication (or LU decomposition) using the compact diagonal storage scheme vs. the CSR scheme used by PARDISO?
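
To make that regularity concrete, here is a small sketch (my own illustration, not an MKL routine) of a matrix-vector product that works directly on the stored diagonals, using the a(j+m,j) convention above:

#include <vector>
#include <cstddef>

// y = A*x for an n x n matrix stored by diagonals: diag[d][j] holds
// a(j + offset[d], j), matching the a(j+m, j) notation in the post.
std::vector<double> dia_matvec(int n,
                               const std::vector<int>& offset,                 // the m's
                               const std::vector<std::vector<double>>& diag,   // one array per diagonal
                               const std::vector<double>& x)
{
    std::vector<double> y(n, 0.0);
    for (std::size_t d = 0; d < offset.size(); ++d) {
        const int m = offset[d];
        const int jlo = (m < 0) ? -m : 0;       // keep row index j+m inside [0, n)
        const int jhi = (m > 0) ? n - m : n;
        for (int j = jlo; j < jhi; ++j)
            y[j + m] += diag[d][j] * x[j];      // stride-1 access on every array
    }
    return y;
}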

Thank you
Alberto

Sergey_K_Intel1
Employee

Dear Alberto,

Thank you very much for supplying the materials, and sorry for the delay in answering. I'll try to get your files on Sunday.

Yes, I agree with you that the diagonal storage scheme is more appropriate for your matrix structures. Our performance measurements show that the MKL Sparse BLAS routines for the diagonal format are 2-4 times faster than their counterparts for the compressed row format used in PARDISO. However, the performance of sparse operations also depends on the structure of the matrix, because the distribution of the nonzero elements determines the memory access patterns. So the performance advantage varies with the structure, and additional investigation is needed to find out what it is in a given case. Please don't forget that the MKL Sparse BLAS routines are threaded, so the performance advantage also depends on the number of threads.

If you don't mind, I have a couple of questions regarding the sparse matrix operations discussed earlier. I'm very interested in your opinion on what other sparse matrix operations need to be optimized and integrated into MKL. For example, do you need operations like A^T*B or A*B^T, where ^T denotes the matrix transpose? What performance numbers for this type of operation would satisfy you?

Thank you very much again

All the best

Sergey
ascotti
Beginner
Sergey,
ideally, it would be nice to have the functionality offered in SPARSKIT implemented in MKL (limited, that is, to the operations on matrices), but with the added feature of not being limited to the CRS format. CRS (or CCS) is of course the storage scheme that makes the fewest assumptions (other than sparseness) about the matrix, and as such it can accommodate a broad spectrum of matrices. Typically, matrices arising from finite element discretizations of PDEs are not very "structured", and CRS is fine. On the other hand, discretizations of PDEs using finite volumes (or finite differences) lead quite naturally to "structured" sparse matrices, and in this case routines that take the structure into account should lead to an improvement. In fact, the matrices generated in ellikit fall into the latter category, and thus I'd like to rewrite ellikit using the diagonal format instead of CRS.
There is an interesting discussion in the SPARSKIT documentation. It basically points out that the optimal strategy is to write a sparse matrix as the combination of a structured part (e.g., diagonal) plus the few remaining "odd points" stashed in CRS. The problem is that matrix products then involve mixed storage schemes (e.g., diagonal*diagonal, crs*crs, diagonal*crs and crs*diagonal), which unfortunately increases the number of subroutines needed... Personally, I'd like to see at least diagonal*crs and crs*diagonal implemented in addition to diag*diag and crs*crs.
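
A rough sketch of how that split could be used (my own illustration, not SPARSKIT or MKL code): store A = D + R, with D held by diagonals and R, the few odd points, in CRS, and accumulate both contributions into the same result vector:

#include <vector>

// CRS (CSR) remainder part: row_ptr has n+1 entries; col/val hold the odd nonzeros.
// Adds R*x into y; the diagonal part D*x would be accumulated into the same y,
// for instance with a routine like the dia_matvec sketch shown earlier.
void csr_matvec_add(int n,
                    const std::vector<int>& row_ptr,
                    const std::vector<int>& col,
                    const std::vector<double>& val,
                    const std::vector<double>& x,
                    std::vector<double>& y)
{
    for (int i = 0; i < n; ++i)
        for (int p = row_ptr[i]; p < row_ptr[i + 1]; ++p)
            y[i] += val[p] * x[col[p]];
}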

Hope this helps

Alberto

norppa75
Beginner
Hi!

As far as I know, there is no such routine available in the current version of MKL. We have also found the "sparse x sparse" operation beneficial in the field of applied physics (FEM), and therefore implemented the corresponding CSR operation ourselves, using the "pair" and "map" structures from the C++ standard library for ordering the resulting elements, while the linear algebra itself (the multiplications) follows the usual rules.
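
For what it's worth, that kind of map-based approach might look roughly like this (a sketch of the general idea, not the poster's actual code), with all three matrices in zero-based CSR:

#include <map>
#include <vector>

// C = A*B, accumulating each row of C in a std::map so that the column
// indices come out sorted, then flattening the map into the CSR arrays.
struct Csr {
    int rows = 0, cols = 0;
    std::vector<int>    ptr;   // size rows + 1
    std::vector<int>    col;
    std::vector<double> val;
};

Csr spgemm(const Csr& A, const Csr& B)
{
    Csr C;
    C.rows = A.rows;
    C.cols = B.cols;
    C.ptr.assign(A.rows + 1, 0);

    for (int i = 0; i < A.rows; ++i) {
        std::map<int, double> row;                       // column -> accumulated value
        for (int pa = A.ptr[i]; pa < A.ptr[i + 1]; ++pa) {
            const int    k   = A.col[pa];
            const double aik = A.val[pa];
            for (int pb = B.ptr[k]; pb < B.ptr[k + 1]; ++pb)
                row[B.col[pb]] += aik * B.val[pb];       // a(i,k) * b(k,j)
        }
        for (const auto& e : row) {
            C.col.push_back(e.first);
            C.val.push_back(e.second);
        }
        C.ptr[i + 1] = static_cast<int>(C.col.size());
    }
    return C;
}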

Personally, I think the "sparse = sparse x sparse" operation should definitely be included in future MKL releases, since researchers from many areas of scientific computation have encountered the same problem. Additionally, a routine tuned for Intel architectures would exploit the sparsity more efficiently than "our own implementations".

-Villesamuli

Spinnler__David
Beginner

Hi,

I'm having the same problem as the OP. I would like to multiply two sparse matrices in diagonal storage format. Those matrices arise from finite differences as part of a QP problem. In the 2018 MKL reference I haven't found a function that could do it, only mkl_ddiamm, but that one accepts only one sparse matrix as input.

Have you guys found a solution yet?

thanks

David
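
Newer MKL releases do provide a sparse-times-sparse product in the Inspector-Executor Sparse BLAS: mkl_sparse_spmm, which works on CSR handles (matrices in diagonal format would first have to be converted to CSR). A minimal sketch with tiny made-up 2x2 matrices, worth checking against the current reference:

#include <mkl.h>
#include <cstdio>

int main()
{
    // A = [1 2; 0 3], B = [4 0; 5 6], both in zero-based CSR.
    MKL_INT a_ptr[] = {0, 2, 3}, a_col[] = {0, 1, 1};
    double  a_val[] = {1.0, 2.0, 3.0};
    MKL_INT b_ptr[] = {0, 1, 3}, b_col[] = {0, 0, 1};
    double  b_val[] = {4.0, 5.0, 6.0};

    sparse_matrix_t A, B, C;
    mkl_sparse_d_create_csr(&A, SPARSE_INDEX_BASE_ZERO, 2, 2,
                            a_ptr, a_ptr + 1, a_col, a_val);
    mkl_sparse_d_create_csr(&B, SPARSE_INDEX_BASE_ZERO, 2, 2,
                            b_ptr, b_ptr + 1, b_col, b_val);

    // C = A*B, returned as a new sparse handle.
    if (mkl_sparse_spmm(SPARSE_OPERATION_NON_TRANSPOSE, A, B, &C) != SPARSE_STATUS_SUCCESS) {
        std::printf("mkl_sparse_spmm failed\n");
        return 1;
    }

    // Read the result back out of the CSR handle and print it.
    sparse_index_base_t base;
    MKL_INT rows, cols, *rs, *re, *ci;
    double *cv;
    mkl_sparse_d_export_csr(C, &base, &rows, &cols, &rs, &re, &ci, &cv);
    for (MKL_INT i = 0; i < rows; ++i)
        for (MKL_INT p = rs[i]; p < re[i]; ++p)
            std::printf("C(%d,%d) = %g\n", (int)i, (int)ci[p], cv[p]);

    mkl_sparse_destroy(A);
    mkl_sparse_destroy(B);
    mkl_sparse_destroy(C);
    return 0;
}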
