Hi Pardiso experts,
I have a couple of questions about PARDISO.
First, a simple one. I am using out-of-core PARDISO and set PARDISO_OOC_KEEP_FILE=0 so that the files are kept.
Later, when I want to delete them, I change PARDISO_OOC_KEEP_FILE to 1 and call pardiso with the finalize flag.
However, the environment variable seems to be read only during factorization, so when I call pardiso with the finalize flag the files are not deleted, presumably because the variable is not re-read. How can I delete the OOC files without redoing a factorization (apart from deleting them manually, of course)?
Another question concerns the accuracy of the solution of highly indefinite systems with a high condition number, especially when a very small right-hand side is used. In the context of a nonlinear static solution I have noticed that, as the Newton-Raphson residual becomes smaller, PARDISO fails to give an accurate result, especially if scaling and weighted matching are turned off; often the iterative refinement process in the solution phase even fails to converge. If iterative refinement is turned off, the results usually cause the NR iteration to fail to converge. Is there a way to get really accurate results, perhaps by sacrificing some speed? Additionally, I have noticed that even using multiple threads may degrade accuracy and cause an NR loop to fail. By the way, when the system is positive definite the aforementioned problems almost never occur. I have also tried MUMPS and get consistently successful results, so my systems do seem solvable.
Last, I have a couple of questions about cluster PARDISO. I am trying to do multiple factorizations after one analysis, but only the first factorization succeeds and the rest fail, so I have to do an analysis (symbolic factorization) before each numerical factorization. Is it possible to overcome this problem? Finally, cluster PARDISO lacks a few useful features which PARDISO has, such as OOC capability, reuse capability, and the option to solve with the transpose, to mention a few. Is there a schedule for when cluster PARDISO will fully support the PARDISO features?
1) ... How is it possible to delete the OOC files without redoing a factorization? (apart from doing it manually, of course)
That's not possible right now. We would need to add an additional option to the pardiso_setenv(...) routine.
Do you really think such an option would be useful? In your case, is this a real scenario?
2) You may try to play with extended precision (iparm).
3) I am not sure I understand -- do you want to apply symbolic factorization once and then compute the numerical factorization for many matrices?
4) Yes, we have plans to add some of these options to the Cluster version of PARDISO. Could you prioritize the list of features we should add, from your point of view?
1) Actually, in my case it is a very real scenario, encountered in optimization analysis.
Briefly, the process is as follows. In the first iteration one calculates the solution for many boundary conditions (i.e., matrices). The factorization of each is stored. From all the solutions the objective function and constraints are evaluated, and then the sensitivities are calculated, which involves solving many right-hand sides with the previously saved factorization for each boundary condition. The cycle is then repeated until convergence. So after the calculation of the sensitivities I need to be able to delete the files.
2) Actually I have, but it seems that the solution never finishes (or takes too long to complete). I will try again and see if it works.
3) Yes, but I don't want to do it all at once (with one call to the pardiso routine, passing many matrices); rather, one matrix after the other. I can do this with pardiso, but with cpardiso it seems not to be possible for some reason: I get a segmentation fault if I start a second factorization after the first without redoing the analysis phase.
4) This is great. I would say that first we need 64-bit integer wrappers to be available in the 32-bit integer MKL, just like pardiso and pardiso_64, so that we can solve really big models without switching MKL libraries. Secondly, we surely need the OOC functionality, for the same reason. Thirdly, it would be nice if the one-symbolic/multiple-numeric-factorizations problem were fixed, because the symbolic phase does not scale in DMP, and in the context of a parallel run its runtime becomes significant. Fourth, the solution with the transpose, iparm(12) (Fortran numbering), which is missing, is necessary in many cases.
5) A couple more questions:
a) What would be a good way to distribute a matrix for a DMP run? Currently I just split the rows evenly among the processes (so in the symmetric case the last process gets only a few nonzeros) and trust that PARDISO will do the load balancing internally. The reason for this is that initially the matrix is split element-wise (the MUMPS way): each process holds its part of the whole array (all rows), populated by a fraction of the elements. Then, in order to bring the matrix into PARDISO format while avoiding gathering (centralizing) it, the processes exchange the relevant rows among themselves. Since there is no way to know in advance what distribution of rows would balance, say, the nonzeros per process, I chose a fixed number of rows. Should I try another way? Do you think the element-wise distribution format could be supported?
b) As we discussed previously, for big models I am getting crashes in the analysis phase or at the beginning of the factorization phase when I use the 32-bit integer PARDISO library. The matrices have far fewer than the 500 million nonzeros suggested in the documentation as the threshold for switching to 64-bit integers. Fortunately, switching to 64-bit integer PARDISO allows the analysis to complete, so I am able to get solutions, but I was forced to lower my switch threshold to 100 million. I was wondering whether you have encountered such a case, or what would be a good way to debug it. For the cases where the crash occurs in analysis I get the same crash with cpardiso, but I don't know whether 64-bit cpardiso would work, since it is not available in the 32-bit integer MKL.
Sorry for the long list. I appreciate your help.