I want to compute the product of two sparse matrices, simply C = A*B where A, B, and C are all sparse. I use sparse storage because I'm working with very large data, and I cannot use dense matrices because of insufficient memory. Aren't there any MKL functions that do this operation? Maybe there's a trivial solution, but I'm a beginner at programming...
Thanks a lot,
Actually, MKL has sparse versions of all the BLAS, including matrix multiplication support. There is support for a number of different sparse formats, including compressed sparse row and column, skyline, diagonal, coordinate, and block sparse row. Check the BLAS section of the manual for more information on how to use these functions.
Thank you Bruce,
I read the manual before posting, and the Level 3 Sparse BLAS functions only deal with one sparse matrix and one dense matrix. For example, mkl_dcsrmm computes the matrix-matrix product of a sparse matrix stored in the CSR format,
C := alpha*A*B + beta*C
with A sparse, but B and C dense matrices.
Is there a routine that computes the product of two sparse matrices without converting one to dense? Or do you know an efficient way to do this product?
Sorry for assuming that you may have missed the information in the manual.
Your problem is interesting. I don't know what is available, in general. Certainly MKL does not have anything like this. Can you indicate what kind of problem requires this functionality and what the matrix characteristics are? For instance, it seems to me that the sparse row entries of A would have to have corresponding sparse column entries in B.
I'm an MKL developer. The Sparse BLAS routines you mention are under development now, because they were requested by many other customers. We are looking for real-life tests, so we would be much obliged to you if you could provide such an example. It can be done by submitting a QuAD report.
Thanks in advance
Thank you very much for supplying materials, and sorry for the delay in answering. I'll try to get your files on Sunday.
Yes, I agree with you that the diagonal storage scheme is more appropriate for your matrix structures. As our performance measurements show, the MKL sparse BLAS routines for the diagonal format are 2-4 times faster than their counterparts for the compressed row format used in PARDISO. However, the performance of sparse operations also depends on the structure of the matrix, because the distribution of the nonzero elements in a sparse matrix determines the memory access patterns. So the performance advantage varies with the structure, and additional investigation is needed to find out what it is in a given case. Please don't forget that the MKL Sparse BLAS routines are threaded, so the performance advantage also depends on the number of threads.
If you don't mind, I have a couple of questions regarding the sparse matrix operations discussed earlier. I'm very interested in your opinion about what other kinds of sparse matrix operations need to be optimized and integrated into MKL. For example, do you need operations like A^T*B or A*B^T, where the symbol ^T means the matrix transpose? What performance numbers for this type of operation would satisfy you?
Thank you very much again
All the best,
Sergey
I'm having the same problem as the OP. I would like to multiply two sparse matrices in diagonal storage format. Those matrices arise from finite differences as part of a QP problem. In the 2018 MKL reference, I haven't found a function that could do it: only mkl_ddiamm, but that one accepts only one sparse matrix as input.
Have you guys found a solution yet?