Hello,
I'm trying to use cluster_sparse_solver to solve a system in-place (iparm(6) = 1) with the distributed format (iparm(40) = 2). I adapted the example cl_solver_unsym_distr_c.c (attached), and at runtime, on two MPI processes, I get the following output:
The solution out-of-place of the system is:
on zero process x [0] = 0.263109 rhs [0] = 1.000000
on zero process x [1] = 0.305243 rhs [1] = 1.000000
on zero process x [2] = -0.347378 rhs [2] = 0.250000
The solution out-of-place of the system is:
on first process x [0] = -0.347378 rhs [0] = 0.750000
on first process x [1] = 0.205993 rhs [1] = 1.000000
on first process x [2] = 0.288390 rhs [2] = 1.000000
Solving system in-place...
The solution in-place of the system is:
on zero process x [0] = 1.000000
on zero process x [1] = 1.000000
on zero process x [2] = 0.250000
The solution in-place of the system is:
on first process x [0] = 0.750000
on first process x [1] = 1.000000
on first process x [2] = 1.000000
Can you reproduce this behavior? The in-place solution is obviously wrong. Do you see how to fix it? Thank you in advance.
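For reference, here is a minimal sketch of the configuration described above. It is not the attached file: the helper name solve_in_place_distributed, the mtype, and the auxiliary iparm values are illustrative choices modeled on cl_solver_unsym_distr_c.c, and the sketch assumes one-based CSR arrays with the local row range already computed.

```c
/* Sketch only: in-place solve (iparm(6) = 1) with the distributed input
 * format (iparm(40) = 2).  C indexing: iparm[5] is iparm(6), iparm[39]
 * is iparm(40), and so on. */
#include <mpi.h>
#include "mkl_types.h"
#include "mkl_cluster_sparse_solver.h"

MKL_INT solve_in_place_distributed(MKL_INT n, MKL_INT *ia, MKL_INT *ja,
                                   double *a, double *b, double *x_work,
                                   MKL_INT first_row, MKL_INT last_row)
{
    void   *pt[64]    = { 0 };                /* internal solver handle    */
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct = 1, mnum = 1, mtype = 11; /* real unsymmetric          */
    MKL_INT nrhs = 1, msglvl = 1, error = 0, idum = 0, phase;
    int     comm = MPI_Comm_c2f(MPI_COMM_WORLD);
    double  ddum = 0.0;

    iparm[0]  = 1;          /* do not use the solver defaults              */
    iparm[1]  = 2;          /* METIS fill-in reordering                    */
    iparm[5]  = 1;          /* iparm(6): write the solution back into b    */
    iparm[7]  = 0;          /* iparm(8): no iterative refinement           */
    iparm[9]  = 13;         /* pivot perturbation 1e-13, as in the example */
    iparm[10] = 1;          /* scaling, as in the example                  */
    iparm[12] = 1;          /* weighted matching, as in the example        */
    iparm[39] = 2;          /* iparm(40): A, b and x distributed           */
    iparm[40] = first_row;  /* iparm(41): first row of the local domain    */
    iparm[41] = last_row;   /* iparm(42): last row of the local domain     */

    /* Analysis + factorization + solve in one call; with iparm[5] = 1 the
     * solution overwrites b, but x_work is still passed as scratch.       */
    phase = 13;
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          a, ia, ja, &idum, &nrhs, iparm, &msglvl,
                          b, x_work, &comm, &error);
    if (error != 0)
        return error;

    phase = -1;             /* release internal memory                     */
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl,
                          &ddum, &ddum, &comm, &error);
    return error;
}
```

The same call with iparm[5] = 0 and a real x array is the out-of-place variant shown in the first half of the output above.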
Hello,
Thanks for your report. Our engineering owner has verified the problem, and they are planning to fix it in a near-future release.
Regards,
Chao
This problem is fixed, and the fix will be available in the upcoming Update 1 of MKL 11.2. We will announce this release at the top of the forum.
With the new update, I'm now getting errors that I can't reproduce with a simple program, such as:
*** Error in PARDISO ( insufficient_memory) error_num= 10
*** Error in PARDISO memory allocation: SOLVING_ITERREF_WORK_DATA, allocation of 1 bytes failed
total memory wanted here: 159 kbyte
The thing is that the matrix is really small (36 x 36 on 8 processes), and I'm turning iterative refinement off, so I don't understand why PARDISO is trying to allocate memory, or why it is failing (I have 6 GB of RAM on the machine where I'm getting these errors).
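For clarity, this is how refinement is switched off in my test, sketched in C (zero-based) indexing, where iparm[7] corresponds to iparm(8) in the documentation; the helper name is illustrative only and the rest of the setup is unchanged from the first post.

```c
/* Sketch only: the iterative-refinement control as set in this test. */
#include "mkl_types.h"

static void disable_iterative_refinement(MKL_INT iparm[64])
{
    iparm[0] = 1;   /* do not use the solver defaults                    */
    iparm[7] = 0;   /* iparm(8): request zero iterative refinement steps */
}
```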
Here is the exact same test case as above, except that this time the RHS and the solution are centralized while the matrix is distributed. Clearly the problem is still there. Can you reproduce it on your side?
Thank you.
$ icc -v
icc version 15.0.1 (gcc version 4.9.0 compatibility)
$ mpicc -cc=icc cl_solver_unsym_distr_c.c -lmkl_intel_thread -lmkl_core -lmkl_intel_lp64 -liomp5
$ mpirun -np 2 ./a.out
The solution out-of-place of the system is:
on zero process x [0] = 0.149579 rhs [0] = 1.000000
on zero process x [1] = 0.259831 rhs [1] = 1.000000
on zero process x [2] = -0.370084 rhs [2] = 0.250000
on zero process x [3] = 0.011236 rhs [3] = 1.000000
on zero process x [4] = 0.415730 rhs [4] = 1.000000
Solving system in-place...
The solution in-place of the system is:
on zero process x [0] = 0.149579
on zero process x [1] = 0.259831
on zero process x [2] = -0.370084
on zero process x [3] = 1.000000
on zero process x [4] = 1.000000
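To spell out the difference from the first post, here is a sketch of this variant: the matrix stays distributed while b and x are supplied only on the master rank. The helper name and the iparm(40) value of 1 used here for this mode are assumptions on my part (please check against the iparm(40) documentation), and the out-of-place/in-place repetition from the attached file is omitted.

```c
/* Sketch only: distributed A with b and x centralized on rank 0.
 * The value 1 for iparm[39] (iparm(40)) is assumed for this mode;
 * one-based CSR arrays, as in cl_solver_unsym_distr_c.c. */
#include <mpi.h>
#include "mkl_types.h"
#include "mkl_cluster_sparse_solver.h"

MKL_INT solve_centralized_rhs(MKL_INT n, MKL_INT *ia, MKL_INT *ja, double *a,
                              double *b_master, double *x_master,
                              MKL_INT first_row, MKL_INT last_row, int rank)
{
    void   *pt[64]    = { 0 };
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct = 1, mnum = 1, mtype = 11, nrhs = 1;
    MKL_INT msglvl = 1, error = 0, idum = 0, phase = 13;
    int     comm = MPI_Comm_c2f(MPI_COMM_WORLD);
    double  ddum = 0.0;

    iparm[0]  = 1;          /* do not use the solver defaults              */
    iparm[1]  = 2;          /* METIS fill-in reordering                    */
    iparm[7]  = 0;          /* iparm(8): no iterative refinement           */
    iparm[9]  = 13;         /* pivot perturbation 1e-13, as in the example */
    iparm[39] = 1;          /* iparm(40): assumed value for distributed A
                               with centralized b and x                    */
    iparm[40] = first_row;  /* iparm(41): first row of the local domain    */
    iparm[41] = last_row;   /* iparm(42): last row of the local domain     */

    /* Only the master rank provides real b and x arrays of length n;
     * the other ranks pass placeholders.                                  */
    double *b = (rank == 0) ? b_master : &ddum;
    double *x = (rank == 0) ? x_master : &ddum;

    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          a, ia, ja, &idum, &nrhs, iparm, &msglvl,
                          b, x, &comm, &error);

    phase = -1;             /* release internal memory                     */
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                          &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl,
                          &ddum, &ddum, &comm, &error);
    return error;
}
```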
Could you please confirm that you can reproduce this new issue? The original issue is indeed fixed. Thank you.
It has been three weeks since I posted this new issue, but I still don't have a clear answer. Could you please let me know whether the problem is on my side?
Thank you.