Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

Pardiso for interior point method

hgfang
Beginner

Hi all. I am writing an interior point algorithm for linear programming and trying to use Pardiso as the linear system solver. I am new to Pardiso and have some questions:

- I want to apply a Cholesky factorization to the matrix AA^T. CHOLMOD (a popular Cholesky factorization package, but one that does not support multi-core computing) lets me factorize AA^T without forming the matrix explicitly; that is, I can call something like 'cholesky(A, args)' and never compute AA^T myself. Does Pardiso also support this functionality, or must I form the matrix AA^T myself and then feed it to Pardiso?

- Does Pardiso support Cholesky-like factorization for symmetric quasi-definite matrices?

- It seems that Pardiso is widely used for interior point methods. Is there any documentation available on how to use Pardiso for an interior point method?

Thanks for your time!

Best,

Huang Fang

1 Solution
Kirill_V_Intel
Employee

Hello!

1) About A * A^T.

PARDISO does not support taking the input matrix implicitly as a symmetric product A A^T. So, to use PARDISO, you would need to form A*A^T explicitly as a CSR matrix (e.g. using mkl_sparse_syrk).
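For illustration, here is a small SciPy stand-in for that step (SciPy is used here only to sketch the idea; mkl_sparse_syrk is the actual oneMKL routine, and the matrix values are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Given a sparse A, form the symmetric product A * A^T explicitly in CSR
# format -- the shape of input a direct solver like PARDISO expects.
A = csr_matrix(np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 0.0]]))
AAT = (A @ A.T).tocsr()   # 2x2 symmetric product in CSR format

print(AAT.toarray())
```

The product is symmetric positive semi-definite by construction, and positive definite when A has full row rank.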

But, there is another approach using QR, and we have separate Sparse QR functionality in oneMKL. See, e.g., https://math.stackexchange.com/questions/2547431/cholesky-decomposition-for-aat.

You can compute the QR factorization A^T = QR; then A*A^T = R^T Q^T Q R = R^T R, which gives you the triangular decomposition, since R is upper triangular. I am not sure, but it is possible this is exactly what happens in the original CHOLMOD by Tim Davis for A * A^T.
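The identity is easy to check numerically; a minimal NumPy sketch (NumPy here only to illustrate the math, not the oneMKL Sparse QR API):

```python
import numpy as np

# Numerical check of the identity: if A^T = Q R, then
# A A^T = R^T Q^T Q R = R^T R, so R^T plays the role of the Cholesky factor.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))   # 4x6, full row rank almost surely
Q, R = np.linalg.qr(A.T)          # reduced QR: Q is 6x4, R is 4x4 upper triangular

print(np.allclose(A @ A.T, R.T @ R))   # True
```

Note that R from a generic QR routine may have negative diagonal entries, so R^T is the Cholesky factor only up to row signs, but R^T R still equals A A^T exactly.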

For Sparse QR in oneMKL, see https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/blas-and-sparse-blas-routines/sparse-qr-routines.html.

2) About the interior point method: as @MRajesh_intel mentioned, you can use matching and scaling, change the pivoting strategy, and experiment with iterative refinement (the number of steps and possibly extended-precision residual computation).

I don't think we have a dedicated document at the moment which describes specifically the knobs in PARDISO that can be used for interior point problems. It is just a question of increasing numerical stability while retaining solution accuracy, which is regularly done for general ill-conditioned problems (I guess your IP problems are of this kind as well).

Best,
Kirill


MRajesh_intel
Moderator

Hi,


>> Does Pardiso support Cholesky-like factorization for symmetric quasi-definite matrices?

Yes, Pardiso supports this functionality: a symmetric quasi-definite matrix is a special case of a symmetric indefinite matrix, so the LDL^T path applies. The solver first computes a symmetric fill-in-reducing permutation P based on either the minimum degree algorithm or the nested dissection algorithm from the METIS package (both included with Intel® oneAPI Math Kernel Library), followed by the parallel left-right looking numerical Cholesky factorization PAP^T = LL^T for symmetric positive-definite matrices, or PAP^T = LDL^T for symmetric indefinite matrices.
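As a small illustration, SciPy's dense ldl routine shows the LDL^T factorization on a symmetric quasi-definite matrix of the kind that arises in interior point KKT systems (a dense analogue only, with made-up values; PARDISO does this for sparse matrices via its symmetric indefinite path):

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric quasi-definite matrix: [[E, B], [B^T, -F]] with E, F
# positive definite, as in an interior point KKT system.
E = np.array([[4.0, 1.0], [1.0, 3.0]])
F = np.array([[2.0]])
B = np.array([[1.0], [2.0]])
K = np.block([[E, B], [B.T, -F]])

L, D, perm = ldl(K)                   # K = L D L^T with D block diagonal
print(np.allclose(L @ D @ L.T, K))    # True
```

D comes out with mixed-sign (1x1 or 2x2) diagonal blocks, which is exactly why an LDL^T rather than a plain LL^T factorization is needed here.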


Please refer to the link below for further reference:

https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/sparse-solver-routines/onemkl-pardiso-parallel-direct-sparse-solver-interface.html


>>Is there any documentation available to understand how to use Pardiso for interior point method? 

Use iparm[10] = 1 (scaling) and iparm[12] = 1 (matching) for highly indefinite symmetric matrices.
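For concreteness, a plain-Python sketch of that parameter setup, mirroring the 0-based iparm indexing of the oneMKL C interface (the refinement value is just an example; in real code iparm is the integer array passed to pardiso()):

```python
# PARDISO control settings suggested above for highly indefinite
# symmetric matrices, using 0-based indices as in the C interface
# (iparm[10] here corresponds to iparm(11) in 1-based documentation).
mtype = -2            # real symmetric indefinite matrix
iparm = [0] * 64
iparm[0] = 1          # do not use solver defaults
iparm[10] = 1         # enable scaling vectors
iparm[12] = 1         # enable weighted matching
iparm[7] = 2          # up to 2 iterative refinement steps (example value)
```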


Please refer to the link below for further reference:

https://software.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-c/top/sparse-solver-routines/onemkl-pardiso-parallel-direct-sparse-solver-interface/pardiso-iparm-parameter.html


Thanks

Rajesh.




hgfang
Beginner

Thanks for your reply, it is very helpful.

hgfang
Beginner

Thanks, your reply on handling AA^T is very useful to me. I appreciate your help!

MRajesh_intel
Moderator

Hi,


Can we go ahead and close this thread since your issue has been resolved?


Regards

Rajesh.


hgfang
Beginner
MRajesh_intel
Moderator

Hi,


Thanks for the confirmation!


As this issue has been resolved, we will no longer respond to this thread.

If you require any additional assistance from Intel, please start a new thread.

Any further interaction in this thread will be considered community only.

Have a good day.

Regards

Rajesh

