
I have two sparse matrices, each with about 55% sparsity. I call the mkl_sparse_s_spmmd function, package it as a .so dynamic library, and import the .so in a Python program. I compared the .so against tf.sparse_matmul on the same dataset, but found that tf.sparse_matmul performs better than the .so.

I compiled TensorFlow with MKL support and ran the tests in that build.



55% sparsity means these are effectively dense matrices, not sparse ones. What is the problem size? It probably makes sense to try the "classical" sgemm, if the RAM size allows it.
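One way to see why 55% sparsity is "dense" territory: in CSR, every nonzero stores a value plus a column index, so at 45% density the sparse format is already about the same size as the dense array, with none of dense GEMM's regular memory access. A rough back-of-envelope sketch (assuming float32 values and 4-byte indices; the sizes used below are illustrative):

```python
def dense_bytes(m, n, elem=4):
    # Dense storage: one element per entry (4 bytes for float32).
    return m * n * elem

def csr_bytes(m, n, density, elem=4, idx=4):
    # CSR storage: value + column index per nonzero, plus m+1 row pointers.
    nnz = round(m * n * density)
    return nnz * (elem + idx) + (m + 1) * idx

m, n = 13000, 256
print(dense_bytes(m, n))      # dense float32 footprint in bytes
print(csr_bytes(m, n, 0.45))  # CSR footprint at 45% density (55% sparsity)
# The two footprints are within ~10% of each other, so the sparse
# format saves almost nothing here while adding indirect indexing.
```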


Meanwhile, when I use the MKL sparse product, the CPU utilization is lower than with tf.matmul() and tf.sparse_matmul().


Nevertheless, why don't you want to call sgemm? Your typical problem sizes are not too big. You can convert from CSR to a dense representation and make the [s,d]gemm call. I believe the performance and scalability will be quite good.
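The densify-then-gemm route can be sketched as below. This is a pure-Python illustration of the two steps (CSR expansion, then a dense product); in a real implementation you would likely export the CSR arrays (e.g. via mkl_sparse_s_export_csr) and call cblas_sgemm, or use scipy's .toarray() followed by numpy matmul:

```python
def csr_to_dense(m, n, row_ptr, col_idx, vals):
    # Expand a CSR matrix (row_ptr, col_idx, vals) into a dense
    # row-major list of lists.
    dense = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            dense[i][col_idx[k]] = vals[k]
    return dense

def gemm(a, b):
    # Naive dense matmul, standing in for an optimized sgemm call.
    m, kdim, n = len(a), len(b), len(b[0])
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for k in range(kdim):
            aik = a[i][k]
            if aik:
                for j in range(n):
                    c[i][j] += aik * b[k][j]
    return c

# A = [[1, 0], [0, 2]] in CSR form.
a = csr_to_dense(2, 2, [0, 1, 2], [0, 1], [1.0, 2.0])
b = [[3.0, 0.0], [0.0, 4.0]]
print(gemm(a, b))  # [[3.0, 0.0], [0.0, 8.0]]
```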


Thank you for your answer. At first I thought my matrices were sparse, so I replaced the dense matrix product with a sparse matrix product. I then enlarged the matrix sizes and the sparsity: the sizes are 13000x256 and 256x31140, and the sparsity is 80%. I originally thought this would improve performance, but the running results show my assumption was wrong.
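A quick flop count for those sizes shows why even 80% sparsity may not win. In the best case a sparse product only skips the multiplies involving zeros, so the arithmetic savings are bounded by the density, while the sparse kernel pays for indirect indexing and irregular access; dense GEMM, by contrast, runs close to peak. A rough estimate (sizes from the post above; the 5x figure is a best-case bound, not a measured speedup):

```python
m, k, n = 13000, 256, 31140
density = 0.20  # 80% sparsity -> 20% of entries are nonzero

# Dense product: 2*m*k*n flops with perfectly regular memory access.
dense_flops = 2 * m * k * n

# Best case for the sparse product: skip every multiply by zero.
sparse_flops = round(dense_flops * density)

print(dense_flops)   # about 2.07e11 flops
print(sparse_flops)  # at most 5x fewer flops, but with irregular access
```

A 5x reduction in flops rarely turns into a 5x speedup for a memory-bound sparse kernel, which is also consistent with the lower CPU utilization observed with the MKL sparse product.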
