Chaowen_G_
Beginner

cluster_sparse_solver cannot release physical memory for the rank=0 process

Hi:

   I have found that cluster_sparse_solver does not release physical memory for the rank=0 process.

   I use two processes, with two threads per process, to solve an mtype=6 (complex and symmetric) matrix, using the distributed assembled matrix input format together with distributed RHS elements. The full example is in the attached file.

   My test runs the same calculation repeatedly in a loop. The result of every loop is correct, and the physical memory of rank=1 stays the same across loops. However, the physical memory of rank=0 keeps climbing. My computer has 16 GB of memory, and the physical-memory occupation is shown below:

loop    rank=0, phase 11 (%)    rank=0, phase 23 (%)    rank=1 (%)
0       4.7                     6.5                     4.6
1       5.7                     7.4
2       6.6                     8.3
3       7.5                     9.3
4       8.4                     10.2
5       9.4                     11.2

I use the following command to compile: mpic++ -cxx=icpc -std=c++1y -mkl -xHost plain.cpp

and to run: mpiexec -n 2 ./a.out

I am using MPICH 3.1, MKL 11.2, and icpc 15.0.0 on 64-bit Linux.
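For reference, the loop described above can be sketched as follows. This is only a minimal outline of the reported scenario, not the attached plain.cpp: the matrix data, sizes, and domain bounds are placeholders, and only the call structure (phase 11, phase 23, then phase -1 release inside each iteration) reflects the report.

```cpp
// Sketch only: matrix data, n, and the per-rank domain bounds are
// placeholders; the attached plain.cpp holds the real reproducer.
#include <mpi.h>
#include "mkl_cluster_sparse_solver.h"

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int comm = MPI_Comm_c2f(MPI_COMM_WORLD);   // Fortran handle for MKL

    void   *pt[64]   = {};   // solver handle, must start zero-initialized
    MKL_INT iparm[64] = {};
    MKL_INT maxfct = 1, mnum = 1, msglvl = 0, error = 0;
    MKL_INT mtype = 6;       // complex and symmetric matrix
    MKL_INT nrhs = 1, n = 0; // placeholder global size
    MKL_INT phase;

    iparm[0]  = 1;           // do not use solver defaults
    iparm[39] = 2;           // distributed assembled A, distributed RHS/solution
    // iparm[40]/iparm[41]: first/last row of this rank's domain (1-based)

    // Local CSR piece and local RHS/solution buffers (placeholders here).
    MKL_INT *ia = nullptr, *ja = nullptr, *perm = nullptr;
    void *a = nullptr, *b = nullptr, *x = nullptr;

    for (int loop = 0; loop < 6; ++loop) {
        phase = 11;          // reordering and symbolic factorization
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                              a, ia, ja, perm, &nrhs, iparm, &msglvl,
                              b, x, &comm, &error);
        phase = 23;          // numerical factorization and solve
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                              a, ia, ja, perm, &nrhs, iparm, &msglvl,
                              b, x, &comm, &error);
        phase = -1;          // release all internal solver memory
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                              a, ia, ja, perm, &nrhs, iparm, &msglvl,
                              b, x, &comm, &error);
        // Despite phase -1, resident memory on rank 0 grows every loop.
    }
    MPI_Finalize();
    return 0;
}
```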

4 Replies
Gennady_F_Intel
Moderator

Hi!

I see the leaks on my side too, although they are not as large as the ones you observed on yours. Nevertheless, we will check what is going on with this test and get back to you soon.

--Gennady

Chaowen_G_
Beginner

I tried MKL 11.2 update 1, but the problem remains the same. Is there a plan to fix this in MKL?

Gennady_F_Intel
Moderator

Yes, the preliminary plan is to include this fix in the next update. You will be notified when the fix is released.

Gennady_F_Intel
Moderator

Hi! The problem has been fixed in MKL 11.2 update 2, which has been officially released. Please check the problem on your side and let us know the results. Thanks.
