- Intel Community
- Software
- Software Development SDKs and Libraries
- Intel® oneAPI Math Kernel Library

Sid

Beginner

10-10-2018 09:53 AM

111 Views

Zeros in the result of mkl_sparse_spmm

Hello,

According to https://software.intel.com/en-us/mkl-developer-reference-c-sparse-blas-csr-matrix-storage-format, a matrix stored in the CSR storage format should contain only non-zero values.
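For concreteness, here is a minimal sketch (plain Python, not MKL code) of the 3-array CSR representation the linked page describes; only the nonzero entries are stored:

```python
# 3-array CSR representation of the 3x3 matrix
#   [[1, 0, 2],
#    [0, 0, 3],
#    [4, 5, 0]]
# Only the 5 nonzero entries are stored.
values    = [1.0, 2.0, 3.0, 4.0, 5.0]  # nonzero values, row by row
col_index = [0,   2,   2,   0,   1]    # column of each stored value
row_ptr   = [0, 2, 3, 5]               # row i spans values[row_ptr[i]:row_ptr[i+1]]

nnz = row_ptr[-1]                      # number of stored (nonzero) entries
assert nnz == len(values) == len(col_index)
```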

We've run into a case where the result of mkl_sparse_spmm contains some zero values. Is this expected? I can provide a repro if needed.

Thanks,

Sid


2 Replies

Spencer_P_Intel

Employee


10-10-2018 02:30 PM


Hi Sid,

This is an interesting question. When we talk about the sparsity pattern of a matrix, we typically mean the set of nonzero elements, but that is not always the case. For the CSR matrix format, this sparsity pattern is represented by a rowPtr array of length nrows+1 and a colIndex array of length nnz (number of nonzeros); a values array of length nnz then completes the CSR storage format. In general, when we write a matrix down in sparse format, we like to store only the nonzero elements (the sparsity pattern). However, once we start applying operations to this matrix, it is definitely possible to have "numerical cancellation": due to the specific values stored in the matrix (or matrices), a final value ends up being zero, even though if we were to perturb the initial values a little, it otherwise might not be.
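This cancellation can be reproduced with a toy CSR multiply. The following is an illustrative plain-Python sketch (not MKL's implementation) that keeps every structurally nonzero entry of the product, so a cancelled entry is stored as an explicit 0.0:

```python
def csr_spmm(a_vals, a_cols, a_ptr, b_vals, b_cols, b_ptr):
    """Multiply two CSR matrices, keeping every structurally nonzero entry
    of C = A*B even when numerical cancellation makes its value 0.0."""
    c_vals, c_cols, c_ptr = [], [], [0]
    for i in range(len(a_ptr) - 1):
        acc = {}  # column -> accumulated value for row i of C
        for k in range(a_ptr[i], a_ptr[i + 1]):
            j, av = a_cols[k], a_vals[k]
            for l in range(b_ptr[j], b_ptr[j + 1]):
                acc[b_cols[l]] = acc.get(b_cols[l], 0.0) + av * b_vals[l]
        for col in sorted(acc):  # note: 0.0 entries are NOT pruned
            c_cols.append(col)
            c_vals.append(acc[col])
        c_ptr.append(len(c_vals))
    return c_vals, c_cols, c_ptr

# A = [[1, -1]] (1x2), B = [[1], [1]] (2x1): C = A*B = [[0]].
# The (0,0) entry is structurally nonzero (both products contribute to it)
# but cancels numerically, so it is stored with value 0.0.
vals, cols, ptr = csr_spmm([1.0, -1.0], [0, 1], [0, 2],
                           [1.0, 1.0], [0, 0], [0, 1, 2])
# vals == [0.0]: an explicitly stored zero, analogous to what you observed
```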

In Intel MKL's implementation of the Sparse BLAS operations, we compute the sparsity pattern of the resulting matrix ignoring the possibility of numerical cancellation. That is, we can often break the computation into two steps:

1. Compute the resulting sparsity pattern without looking at the values, using only the rowPtr and colIndex arrays (we return all entries in that pattern). When the sparse_request_t parameter is available in the API, this is the SPARSE_STAGE_NNZ_COUNT option.
2. Go through and fill in the resulting values. When the sparse_request_t parameter is available, this is the SPARSE_STAGE_FINALIZE_MULT option.

We do not come back at the end and remove the zero elements.
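That two-stage flow can be sketched as follows. This is illustrative Python, not MKL's actual API; the function names are made up, and the comments merely mirror the SPARSE_STAGE_NNZ_COUNT / SPARSE_STAGE_FINALIZE_MULT stages described above:

```python
def spmm_symbolic(a_cols, a_ptr, b_cols, b_ptr):
    """Stage 1 (cf. SPARSE_STAGE_NNZ_COUNT): compute C's sparsity pattern
    using only the index arrays -- the values are never read."""
    c_cols, c_ptr = [], [0]
    for i in range(len(a_ptr) - 1):
        cols = set()
        for k in range(a_ptr[i], a_ptr[i + 1]):
            j = a_cols[k]
            cols.update(b_cols[b_ptr[j]:b_ptr[j + 1]])
        c_cols.extend(sorted(cols))
        c_ptr.append(len(c_cols))
    return c_cols, c_ptr

def spmm_numeric(a_vals, a_cols, a_ptr, b_vals, b_cols, b_ptr, c_cols, c_ptr):
    """Stage 2 (cf. SPARSE_STAGE_FINALIZE_MULT): fill values for the
    precomputed pattern; cancellations remain as explicit 0.0 entries."""
    c_vals = [0.0] * len(c_cols)
    for i in range(len(a_ptr) - 1):
        # position of each of row i's columns inside the output arrays
        pos = {c: c_ptr[i] + n
               for n, c in enumerate(c_cols[c_ptr[i]:c_ptr[i + 1]])}
        for k in range(a_ptr[i], a_ptr[i + 1]):
            j, av = a_cols[k], a_vals[k]
            for l in range(b_ptr[j], b_ptr[j + 1]):
                c_vals[pos[b_cols[l]]] += av * b_vals[l]
    return c_vals
```

Because stage 1 never sees the values, a pattern computed once can be reused for a second numeric fill, which is the reuse advantage mentioned in point 1 below.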

Now, there are some implementations of sparse matrix operations (for example, MATLAB used to do this, but I am not sure if they still do) which enforce that the final result only contains nonzero elements, but most implementations do not do this.

Here are some advantages of not pruning the zeros:

1. If you have two matrices with the same sparsity pattern and are doing the same operations on each (or you perturb your values a little and want to run again), you can skip computing the resulting sparsity pattern for the second run, since it will be the same as in the first. So you reuse the rowPtr and colIndex arrays from the first run and just fill in a new values array. This is possible for sp2m, sypr, and similar routines.

2. How do we determine what is truly zero and what is just really small? Floating-point operations have some margin of error based on their representation and order of operations. Do we threshold anything smaller than, say, 1e-16 for double precision or 1e-8 for single precision to be 0? There are definitely ways to deal with this, but it is an extra complication.

3. In operations like sparse Cholesky, sparse LU, sparse QR, etc., there are some nice theorems from graph theory that come into play and simplify the algorithms if you leave the "numerical cancellation" elements in the matrix representation. So in those cases it is more efficient to leave them in and, if needed, prune after the entire operation is finished.

4. Pruning the zeros is a rather expensive operation, since it might require allocating new arrays and copying values from the matrix representation to get everything back into contiguous arrays for the CSR matrix format (or whatever format you are using). Since we are focused on achieving performance, and it typically isn't a big deal to have a couple of zeros stored, we leave them in. That said, there may be cases where it makes sense to prune them. For now, we let users make that decision and implement the pruning and rearrangement themselves if desired. Note: the 4-array CSR matrix format might be more useful here, since then you only have to push the "0" values and indices to the front or back of the colIndex/values arrays within each row and update the begin and end values in the rowPtr arrays to exclude them.
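For users who do want to prune, a single pass over the 3-array CSR structure suffices. This is a hypothetical helper written in plain Python, not an MKL routine; the `tol` parameter is an assumed threshold (with `tol=0.0` only exact zeros are dropped, sidestepping the "how small is zero" question from point 2):

```python
def prune_csr(values, col_index, row_ptr, tol=0.0):
    """Drop stored entries with |value| <= tol from a 3-array CSR matrix,
    rebuilding the contiguous arrays -- the allocate-and-copy work that
    MKL leaves to the user."""
    new_vals, new_cols, new_ptr = [], [], [0]
    for i in range(len(row_ptr) - 1):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            if abs(values[k]) > tol:
                new_vals.append(values[k])
                new_cols.append(col_index[k])
        new_ptr.append(len(new_vals))  # close out row i
    return new_vals, new_cols, new_ptr

# A 2x3 matrix with an explicitly stored 0.0 at (0, 1):
vals, cols, ptr = prune_csr([1.0, 0.0, 3.0], [0, 1, 2], [0, 2, 3])
# vals == [1.0, 3.0], cols == [0, 2], ptr == [0, 1, 2]
```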

Anyway, I hope that helps a little :)

Best Regards,

Spencer

Sid

Beginner


10-10-2018 02:34 PM


Thanks for the clarification.

