
Hello all,

Last week I tried out Cluster Pardiso, testing various options, and I found a problem: Cluster Pardiso fails with various error messages when I enable two-level factorization (that is, iparm(24)=1).

The error can be easily reproduced in the following way:

1. Edit the cl_solver_sym_sp_0_based_c.c example and add `iparm[23] = 1;` at line 112 (or thereabouts)

2. Recompile: mpiicc -mkl -mt_mpi cl_solver_sym_sp_0_based_c.c -o cl_solver_sym_sp_0_based_c

3. Run: export OMP_NUM_THREADS=2; mpirun -np 6 ./cl_solver_sym_sp_0_based_c

Here is the output:

*****************************************************************************************

=== CPARDISO: solving a symmetric indefinite system ===

0-based array is turned ON

CPARDISO single precision computation is turned ON

METIS algorithm at reorder step is turned ON

Matching is turned ON

Summary: ( reordering phase )

================

Times:

======

Time spent in calculations of symmetric matrix portrait (fulladj): 0.001182 s

Time spent in reordering of the initial matrix (reorder) : 0.000001 s

Time spent in symbolic factorization (symbfct) : 0.002049 s

Time spent in data preparations for factorization (parlist) : 0.000005 s

Time spent in allocation of internal data structures (malloc) : 0.005455 s

Time spent in additional calculations : 0.000024 s

Total time spent : 0.008716 s

Statistics:

===========

Parallel Direct Factorization is running on 6 MPI and 2 OpenMP per MPI process

< Linear system Ax = b >

number of equations: 8

number of non-zeros in A: 18

number of non-zeros in A (%): 28.125000

number of right-hand sides: 1

< Factors L and U >

number of columns for each panel: 64

number of independent subgraphs: 0

< Preprocessing with state of the art partitioning metis>

number of supernodes: 4

size of largest supernode: 4

number of non-zeros in L: 31

number of non-zeros in U: 1

number of non-zeros in L+U: 32

Reordering completed ...

*** Error in PARDISO ( insufficient_memory) error_num= 10
*** Error in PARDISO memory allocation: SOLVING_ITERREF_WORK_DATA, allocation of 1 bytes failed
total memory wanted here: 144 kbyte

=== CPARDISO: solving a symmetric indefinite system ===

Two-level factorization algorithm is turned ON

Summary: ( factorization phase )

================

Times:

======

Time spent in copying matrix to internal data structure (A to LU): 0.000000 s

Time spent in factorization step (numfct) : 0.000000 s

Time spent in allocation of internal data structures (malloc) : 0.000029 s

Time spent in additional calculations : 0.000109 s

Total time spent : 0.000138 s

Statistics:

===========

Parallel Direct Factorization is running on 6 MPI and 2 OpenMP per MPI process

< Linear system Ax = b >

number of equations: 8

number of non-zeros in A: 18

number of non-zeros in A (%): 28.125000

number of right-hand sides: 1

< Factors L and U >

number of columns for each panel: 64

number of independent subgraphs: 0

< Preprocessing with state of the art partitioning metis>

number of supernodes: 4

size of largest supernode: 4

number of non-zeros in L: 31

number of non-zeros in U: 1

number of non-zeros in L+U: 32

gflop for the numerical factorization: 0.000000

*****************************************************************************************

With the original example everything works fine.

Also, on other matrices I got different types of errors - every time I enable the two-level factorization feature, the solver fails.

Am I doing something wrong?

Any ideas would be highly appreciated!

Thank you in advance,

Serban



hello Serban,

please look at the cluster_sparse_solver iparm parameter description in the documentation - you will see the following:

iparm(22) - iparm(26): Reserved. Set to zero.

--Gennady


Hi Gennady,

Thanks a lot for the very quick reply.

In the documentation that I found here https://software.intel.com/en-us/node/521691 I did not find that information. It says that `iparm(26)` is reserved, but I got the impression that `iparm(24)` can be used.

Am I looking in the wrong place?

Cheers,

Serban


The link that Gennady gave is for the cluster parallel direct solver. The link in #3 is not. In #1 you said that you used the cluster version of the solver.


Hi macej4,

Indeed, following your comment I've realized that I've been looking at the wrong documentation. Now it all makes sense; that option was invalid for the cluster solver.

Thank you, and thanks to Gennady as well, for your help!

Serban
