- Intel Community
- Software Development SDKs and Libraries
- Intel® oneAPI Math Kernel Library & Intel® Math Kernel Library


Tien_Hung_P_

Beginner


07-10-2017 03:46 PM

Pardiso is much slower than multifrontal solver

Hello,

I use PARDISO to solve matrices that come from the fully coupled 3D Biot equations. For small problems, PARDISO works fine. However, as the matrix size increases, a problem appears.

Currently, the matrix is around 1e6 x 1e6 (1 million x 1 million), with nnz = 77,589,160 (around 77.5 million). Phase 1 (reordering, with iparm(2)=3) does not take too much time; however, Phase 2 (factorization) takes very long to finish. On my machine, a Core i7-6800K with 64 GB RAM, Phase 2 took around 10 minutes.

I noticed that the CPU ran on only a single core. I did some research on the Intel forum and found that the reason came from the fill-in process. I then compared with the direct solver of Ansys, which I believe is a multifrontal solver. Because the finite element mesh was exported from Ansys, the matrix size is exactly the same. Ansys needed only 64 seconds for everything. Here is the log from Ansys.

Here is the log from PARDISO. PARDISO ran single-threaded, used more memory than Ansys, and was much slower. Ansys also used METIS as its reordering method. According to this article, PARDISO should be about as fast as a multifrontal solver. So what is wrong here? What did I configure incorrectly?

Thank you very much.

--------------------

Pham Hung


3 Replies

Zhen_Z_Intel

Employee


07-11-2017 01:35 AM

Hi Pham,

About your problem, I have a few questions I would like to check with you:

- Did you use OOC (out-of-core) mode for PARDISO?
- I found that the number of equations differs between Ansys (980,973) and PARDISO (1,007,570), a difference of more than 26,000 equations. Are you sure you are using exactly the same input matrix in both solvers?
- Would you mind sharing your iparm settings and input matrix with us through a private message? Thanks.

Best regards,

Fiona

Tien_Hung_P_

Beginner


07-11-2017 03:58 AM

Hi Fiona,

Thank you very much for your reply.

1. The difference in the number of equations is due to the constrained boundary conditions. I am pretty sure the matrices are the same because I use the mesh from Ansys.

2. I use in-core mode; here is my setting for iparm (the default is zero).

3. You can download my matrix from here. There are four text files (ia.txt, ja.txt, a.txt, rhs.txt) containing the row index, column index, values, and right-hand-side vector.

Thank you.

Pham Hung

Tien_Hung_P_

Beginner


07-12-2017 02:35 AM

Hello again,

I tried my problem on Linux (Ubuntu), and PARDISO used all 6 cores for the factorization step. That is really strange.
