Hello,
How is the feature iparm(2) = 10 (the MPI version of the nested dissection ordering, if I read the documentation correctly) supposed to work with cluster_sparse_solver? If I adapt cl_solver_unsym_distr_c.c by changing iparm[1] to 10, the program hangs with 2 processes, both on Linux with Intel MPI and on macOS with MPICH, and it segfaults with more than 2 processes.
Moreover, there is some unwanted output:
$ mpirun -np 2 ./a.out
RANK # 0 Total mem 0
RANK # 1 Total mem 0
<deadlock>
$ mpirun -np 4 ./a.out
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 48932 RUNNING AT
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault: 11 (signal 11)
Could this be fixed please?
Thanks.
With the Fortran example and iparm(2) = 10, it also segfaults with more than 2 processes. It seems OK with 2 processes, though there is a lot of unwanted output.
$ mpirun -np 4 _results/intel_intelmpi_lp64_intel64_a/cl_solver_sym_distr_f.exe
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
cl_solver_sym_dis 000000000652FC3D Unknown Unknown Unknown
libpthread-2.26.s 00007FC1AB87F160 Unknown Unknown Unknown
$ mpirun -np 2 _results/intel_intelmpi_lp64_intel64_a/cl_solver_sym_distr_f.exe
RANK # 0 Total mem 0
RANK # 1 Total mem 0
Memory allocated on phase 11 on Rank # 0 0.0000 Gb
Memory allocated on phase 11 on Rank # 1 0.0000 Gb
Yes, we see the problem on our side. We will record it and fix it in a future release. Next time, if you hit any new issue, please feel free to submit it to the Online Service Center.
Hello,
We fixed the problem. If you have access to the Online Service Center, you may submit a ticket there and I will give you an engineering build to check whether the problem still exists on your side.
thanks,
Gennady
