
Hello,

I'm trying to use cluster_sparse_solver to solve a system in-place (iparm(6) = 1) with a distributed format (iparm(40) = 1). I adapted the attached example cl_solver_unsym_distr_c.c; at runtime on two MPI processes I get the following output:

$ icpc -V

Intel(R) C++ Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.1.133 Build 20141023

$ mpicc -cc=icc cl_solver_unsym_distr_c.c -lmkl_intel_thread -lmkl_core -lmkl_intel_lp64 -liomp5

$ mpirun -np 2 ./a.out

The solution out-of-place of the system is:

on zero process x [0] = 0.149579 rhs [0] = 1.000000

on zero process x [1] = 0.259831 rhs [1] = 1.000000

on zero process x [2] = -0.370084 rhs [2] = 0.250000

on zero process x [3] = 0.011236 rhs [3] = 1.000000

on zero process x [4] = 0.415730 rhs [4] = 1.000000

Solving system in-place...

The solution in-place of the system is:

on zero process x [0] = 0.149579

on zero process x [1] = 0.259831

on zero process x [2] = -0.370084

on zero process x [3] = 1.000000

on zero process x [4] = 1.000000

Can you reproduce this behavior? The in-place solution is clearly wrong. Do you see how to fix it? Thank you in advance.


Hi,

Everything is working as designed: the current version of the Direct Sparse Solver for Clusters doesn't support the combination of an in-place solution (iparm(6) = 1) with a distributed rhs and a non-distributed solution vector (iparm(40) = 1).

Thanks,

Alex
