Nan_Deng
Beginner

Can PARDISO be fully parallelized? Or is there any fully parallelized matrix inversion routine in MKL?

I am using the PARDISO solver in my application to solve one large sparse matrix (very successfully) and to invert one large fully populated matrix (not as well). It appears that PARDISO runs on only a single CPU core during the factorization phase. Is there some way to tweak PARDISO so that it takes full advantage of multi-core computing in all three phases: analysis, factorization, and forward/backward substitution?

Alternatively, is there any callable routine in MKL for matrix inversion that is fully parallelized? The matrix in question is complex*16, fully populated, and symmetric.

Any suggestions are greatly appreciated.
4 Replies
Murat_G_Intel
Employee

You can compute the inverse of a dense (fully populated) complex*16 matrix by calling zgetri after computing the LU factors using zgetrf.
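A minimal sketch of that two-step inversion, using SciPy's low-level wrappers around the same LAPACK routines (`zgetrf` for the LU factorization, `zgetri` for the inverse from the factors). The matrix here is a small random symmetric complex one, standing in for the poster's complex*16 matrix; when linked against MKL, these routines are threaded.

```python
import numpy as np
from scipy.linalg import lapack

# Build a small symmetric (not Hermitian) complex128 test matrix,
# matching the poster's complex*16 symmetric case.
n = 4
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = a + a.T

# Step 1: LU factorization (zgetrf).
lu, piv, info = lapack.zgetrf(a)
assert info == 0  # info != 0 would signal a singular/failed factorization

# Step 2: inverse from the LU factors (zgetri).
inv_a, info = lapack.zgetri(lu, piv)
assert info == 0

# Sanity check: a @ inv_a should be the identity to rounding error.
assert np.allclose(a @ inv_a, np.eye(n))
```

Note that `zgetrf`/`zgetri` do not exploit symmetry; they treat the matrix as a general dense one.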

Hi,
Could you provide a test case with your matrix, along with the link line you use to compile and run your code? I ask because the PARDISO solver is parallel.
With best regards,
Alexander Kalinkin
Nan_Deng
Beginner

Thanks for your reply. (Thanks also to Mr. Guney for the previous post.) I provided more details about the problem and the link line in my other post. The difficulty with providing a test case is that the problem shows up only when the matrix becomes very large. For the case I quoted in my post, the matrix is 60200 by 60200, complex*16; half of it, together with the index arrays PARDISO requires, is more than 40 GB. I can produce a smaller one, about 6200 by 6200, whose size is about 420 MB and which takes a little more than 4 minutes to run on my machine (HP DL380, 24 cores). Would you be willing to try the small one? We have a file transfer site that can be used to accommodate large files.

If this issue reproduces on the small test (420 MB), let's try to work with that first.
With best regards,
Alexander Kalinkin