I hope this is not too much of a beginner's question; I just started using sparse matrices with MKL a few days ago. This is work within a large codebase, so working examples are very time-consuming to create. I will try to ask my question without any code snippet. Here goes:
I am using mkl_d/zcsrcoo to convert from COO to CSR for square matrices of order ~10k–~100k, and everything seems to be okay.
However, I also need to convert rectangular sparse matrices with the same number of rows but easily 2–4× as many columns.
The documentation page for ?csrcoo gives the matrix size as a single variable, so I assume it is only for square sparse matrices (I actually tried using it anyway, and it hangs when I try to convert one of the rectangular ones).
My question is: am I better off creating a virtual N×N matrix, where N is the number of columns, converting it to CSR, and then manually truncating the rows to fit an M×N rectangular one? Or is there some trivial way to do this using MKL that I am overlooking?
Dr. Ariel Biller
Weizmann Institute of Science
I think that in MKL the COO, CSR and other representations of square matrices are elements of a larger task: solving N×N simultaneous linear equations. You appear to have other goals and requirements. Assuming that we can find a suitable compact representation for a sparse rectangular matrix, what do you plan to do with such a matrix, once it is formed? In particular, which MKL routines do you plan to use upon those rectangular matrices?
Thanks for replying. I think CSR can work with rectangular matrices just fine, but I am new to all of this, so I might be wrong. For now I just want to avoid writing my own COO-to-CSR converter for the rectangular ones, and I am trying to think of alternatives using MKL.
Since you think more details are needed, here is the task (it might be boring): the rectangular matrices are actually the halos in a domain-decomposition code for a high-order Laplacian (up to 72 neighbors in the stencil). Essentially, every row along the diagonal of the matrix is a grid point in 3D space, and if I include the halo elements, the matrix becomes very large. I intend to decompose the matrix into two parts: the local part, which is square and symmetric (sometimes skew-symmetric), and the part that comes from neighboring processing elements (only the lower/upper part, depending on how you look at it). The latter becomes rectangular as I break my grid into smaller and smaller pieces, since more and more elements of my Laplacian will reside on neighboring PEs. By decomposing the Laplacian in this manner I can hide the communication behind the sparse mat-vec for the square part and later perform the rectangular sparse mat-vec.
Dr. Ariel Biller,
Right; basically, the function is designed to support square matrices.
As I recall, mkl_?csrcoo works for m×n with n >> m, but cannot work with n << m. You may try it and see if it works.
Alternatively, you may assume the last rows of the COO matrix are all zero, so that the matrix can be taken as fully square; convert it to CSR, then modify the row-index array manually.
Intel MKL Support
The end result is that I wrote a small conversion routine for the rectangular part. It is not threaded, but I only do this conversion once per run, so that does not matter much.
As for the square part, I am still undecided whether I am going with CSR or some other representation, but maybe you could answer this follow-up question:
My main diagonal is all zeros, but I am not inputting them directly. I understand that the zeros are necessary in order to use the symmetric mode of ?csrmv. Am I correct in this understanding?
Computes matrix-vector product of a sparse matrix stored in the CSR format.
It should not have that limitation; explicit zeros are not necessary. It should be fine with an all-zero diagonal, as long as you follow the CSR format described in the MKL manual: https://software.intel.com/en-us/node/522243#MKL_APPA_SMSF_2