Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

mkl_sparse_s_spmmd is slower than tensorflow tf.sparse_matmul

zhou__jianqian
Beginner
513 Views

I have two sparse matrices, both with about 55% sparsity. I call the mkl_sparse_s_spmmd function, package it as a .so dynamic library, and import that .so in a Python program. I compared the .so against tf.sparse_matmul on the same dataset, but I found that tf.sparse_matmul performs better than the .so.

I compiled TensorFlow with MKL and ran the test in that build.

0 Kudos
5 Replies
Gennady_F_Intel
Moderator

55% sparsity - these are dense, not sparse, matrices. What is the problem size? It probably makes sense to try the "classical" sgemm if the RAM size allows doing that.
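A quick way to see this point is to compare the storage cost of the two representations. The sketch below uses SciPy rather than MKL (the thread's library), and it assumes "55% sparsity" means roughly half the entries are nonzero and that CSR uses 4-byte values plus 4-byte column indices; at that density the CSR form already holds more bytes than the plain dense array, so a sparse kernel traverses more memory per flop than sgemm would.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
rows, cols, density = 1507, 256, 0.55  # first matrix shape from the thread

# 0/1 matrix with ~55% nonzeros, stored dense and as CSR.
dense = (rng.random((rows, cols)) < density).astype(np.float32)
csr = sparse.csr_matrix(dense)

# Dense: 4 bytes per element. CSR: 4-byte value + 4-byte column index
# per nonzero, plus a row-pointer array.
dense_bytes = dense.nbytes
csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes

# Above ~50% density the CSR representation is the larger one.
assert csr_bytes > dense_bytes
```

The break-even point with this layout is 50% density: below it CSR saves memory, above it the dense array is both smaller and friendlier to vectorized GEMM kernels.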

zhou__jianqian
Beginner

I increased the sparsity: the first matrix is (1507, 256) with sparsity above 70%, and the other is (256, 31140), also above 70%. Here is my code. I tested on an Intel(R) Xeon(R) Silver.

zhou__jianqian
Beginner

Meanwhile, when using the MKL sparse product, CPU utilization is lower than with tf.matmul() and tf.sparse_matmul().

Gennady_F_Intel
Moderator

Nevertheless, why don't you want to call sgemm? Your typical problem sizes are not too big. You can convert from CSR to a dense representation and make the [s,d]gemm call. I believe the performance and scalability will be quite good.
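The densify-then-GEMM approach can be sketched as follows. This is a SciPy/NumPy stand-in rather than a direct MKL call (with MKL-backed NumPy, the float32 `@` below dispatches to sgemm); the shapes follow the thread's first matrix, the second dimension is scaled down from 31140 to 3114 just to keep the sketch quick, and 30% density stands in for the reported "70% sparsity".

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)

# Stand-in operands: (1507, 256) x (256, 3114), ~30% nonzeros each.
A = sparse.random(1507, 256, density=0.3, format="csr",
                  dtype=np.float32, random_state=rng)
B = sparse.random(256, 3114, density=0.3, format="csr",
                  dtype=np.float32, random_state=rng)

# Densify once, then use an ordinary GEMM: for matrices this dense,
# the regular BLAS path is usually the faster one.
C_dense = A.toarray() @ B.toarray()

# The same product through the sparse path, for a correctness check.
C_sparse = (A @ B).toarray()

assert np.allclose(C_dense, C_sparse, atol=1e-2)
```

For one-off products the conversion cost is paid once; if the same matrix is reused across many multiplications, densify it once up front and keep the dense copy.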

zhou__jianqian
Beginner

Thank you for your answer. At first I thought my matrices were sparse, so I replaced the dense matrix product with a sparse matrix product. I also enlarged the matrix sizes and sparsity: the sizes are 13000x256 and 256x31140, and the sparsity is 80%. I originally thought this would improve performance, but the running results show my idea was wrong.
