I have found that cluster_sparse_solver does not release physical memory for the rank=0 process.
I run two MPI processes with two threads per process to solve an mtype=6 (complex and symmetric) matrix, using the distributed assembled matrix input format with distributed RHS elements. The full example is in the attached file.
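Since the attachment is not inlined here, the call structure looks roughly like the sketch below. This is a minimal sketch, not the attached plain.cpp: the matrix arrays, the row range, and the helper name solve_once are placeholders, and the iparm settings follow my reading of the MKL documentation for the distributed format (iparm[39] = 2 for distributed matrix plus distributed RHS/solution, iparm[40]/iparm[41] for this rank's row range).

#include <mpi.h>
#include <complex>
#include "mkl_cluster_sparse_solver.h"

// One analysis / factor-solve / release cycle (sketch; arguments are placeholders).
// mtype = 6 selects a complex symmetric matrix.
void solve_once(MKL_INT n, MKL_INT *ia, MKL_INT *ja, std::complex<double> *a,
                std::complex<double> *b, std::complex<double> *x,
                MKL_INT row_begin, MKL_INT row_end)
{
    void    *pt[64] = {};              // solver handle, must start zeroed
    MKL_INT iparm[64] = {};
    MKL_INT maxfct = 1, mnum = 1, mtype = 6, nrhs = 1, msglvl = 0;
    MKL_INT perm = 0, phase, error = 0;
    int comm = MPI_Comm_c2f(MPI_COMM_WORLD);

    iparm[0]  = 1;         // supply iparm values explicitly
    iparm[1]  = 2;         // nested-dissection reordering
    iparm[34] = 1;         // zero-based indexing
    iparm[39] = 2;         // distributed matrix, RHS, and solution
    iparm[40] = row_begin; // first row owned by this rank
    iparm[41] = row_end;   // last row owned by this rank

    phase = 11;            // analysis
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                          &perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);
    phase = 23;            // numerical factorization + solve
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                          &perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);
    phase = -1;            // release all internal solver memory
    cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                          &perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);
}

The two threads per rank would come from exporting MKL_NUM_THREADS=2 (or OMP_NUM_THREADS=2) before mpiexec. The point of the phase = -1 call is precisely that all internal memory should be freed, which is what the rank=0 numbers below contradict.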
My test uses an iteration loop to repeat the same calculation. The result of every loop is correct, and the physical memory for rank=1 stays the same across loops; the physical memory for rank=0, however, keeps climbing. My computer has 16 GB of memory, and the physical memory occupation is shown below:
loop   rank=0 phase 11 (%)   rank=0 phase 23 (%)   rank=1 (%)
0      4.7                   6.5                   4.6
1      5.7                   7.4                   4.6
2      6.6                   8.3                   4.6
3      7.5                   9.3                   4.6
4      8.4                   10.2                  4.6
5      9.4                   11.2                  4.6
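The post does not say how these percentages were collected (they look like top's %MEM column). An alternative that needs no external tool, purely as a suggestion on my part, is to have each rank report its own resident set size after every loop by parsing the VmRSS line of /proc/self/status (Linux-specific):

#include <cstdio>
#include <cstring>

// Resident set size of the calling process in kB, parsed from the
// VmRSS line of /proc/self/status.
static long rss_kb()
{
    std::FILE *f = std::fopen("/proc/self/status", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (std::fgets(line, sizeof line, f)) {
        if (std::strncmp(line, "VmRSS:", 6) == 0) {
            std::sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    std::fclose(f);
    return kb;
}

Printing the rank and rss_kb() after the phase = -1 call in each iteration would show rank=0 growing while rank=1 stays flat, matching the table above.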
I compile with: mpic++ -cxx=icpc -std=c++1y -mkl -xHost plain.cpp
and run with: mpiexec -n 2 ./a.out
I am using MPICH 3.1, MKL 11.2, and icpc 15.0.0 on 64-bit Linux.
I see the leaks on my side too, but the size of these memory leaks is not as big as what you saw on your side. Nevertheless, we will check what is going on with this test and will get back to you soon.