<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Re: Some questions about using Cluster PARDISO OOC mode (iparm[59]=2) in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1694298#M37186</link>
    <description>Forum thread from the Intel® oneAPI Math Kernel Library community: questions about running the Parallel Direct Sparse Solver for Clusters (Cluster PARDISO) in out-of-core (OOC) mode with iparm[59]=2, including a small test program, its crash log, and the suggested fix. The full discussion appears in the items below.</description>
    <pubDate>Tue, 03 Jun 2025 08:34:15 GMT</pubDate>
    <dc:creator>Liwufan</dc:creator>
    <dc:date>2025-06-03T08:34:15Z</dc:date>
    <item>
      <title>Some questions about using Cluster PARDISO OOC mode (iparm[59]=2)</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1692481#M37177</link>
      <description>&lt;P&gt;Hi, all&lt;/P&gt;&lt;P&gt;I recently tried to use the Parallel Direct Sparse Solver for Clusters interface to solve a linear system Ax=b. Because of limited RAM, the size of the matrices I can factorize is restricted, so I want to use out-of-core (OOC) mode to store the L and U factors on disk and reduce RAM usage. I set iparm[59]=2, and set iparm[10] and iparm[12] to 0 as the documentation advises. However, when I run with multiple MPI processes and one OpenMP thread per process, the program exits during the LU factorization.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I would like to know what causes this and how to fix it; I look forward to your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My environment variables are set to:&lt;/P&gt;&lt;P&gt;MKL_PARDISO_OOC_PATH=D:\msvc_project\code_zp\GeoAdaptiveRefine\demo1\ooctemp&lt;BR /&gt;MKL_PARDISO_OOC_MAX_CORE_SIZE=10240&lt;BR /&gt;MKL_PARDISO_OOC_MAX_SWAP_SIZE=10240&lt;BR /&gt;MKL_PARDISO_OOC_KEEP_FILE=0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is my error message:&lt;/P&gt;&lt;P&gt;Memory allocated on phase 22 on Rank # 0 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 1 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 2 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 3 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 4 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 5 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 6 10242.0785 MB&lt;BR /&gt;Memory allocated on phase 22 on Rank # 7 10242.0785 MB&lt;/P&gt;&lt;P&gt;Percentage of computed non-zeros for LL^T factorization&lt;BR /&gt;1 %&lt;BR /&gt;2 %&lt;BR /&gt;4 %&lt;BR /&gt;11 %&lt;BR /&gt;24 %&lt;BR /&gt;25 %&lt;BR /&gt;27 %&lt;BR /&gt;28 %&lt;BR /&gt;29 %&lt;BR /&gt;32 %&lt;BR /&gt;46 %&lt;BR /&gt;66 %&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 0 PID 20524 RUNNING AT WIN-BB34P4BOBS1&lt;BR /&gt;= EXIT STATUS: -1 (ffffffff)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you again&lt;/P&gt;</description>
      <pubDate>Mon, 26 May 2025 12:42:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1692481#M37177</guid>
      <dc:creator>Liwufan</dc:creator>
      <dc:date>2025-05-26T12:42:40Z</dc:date>
    </item>
    <item>
      <title>Re: Some questions about using Cluster PARDISO OOC mode (iparm[59]=2)</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1692802#M37179</link>
      <description>&lt;P&gt;The MKL PARDISO solver is a&amp;nbsp;&lt;SPAN&gt;shared-memory multiprocessing parallel direct sparse solver; it should work within any single MPI rank. Would you please provide your test code (source code and build command/script/instructions)? Please also provide your hardware and software environment details so that we can reproduce the issue. Does your code work in another setup, such as a single MPI rank, or with OpenMP turned off?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 27 May 2025 19:04:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1692802#M37179</guid>
      <dc:creator>Shiquan_Su</dc:creator>
      <dc:date>2025-05-27T19:04:34Z</dc:date>
    </item>
    <item>
      <title>Re: Some questions about using Cluster PARDISO OOC mode (iparm[59]=2)</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1694297#M37185</link>
      <description>&lt;P&gt;First of all, thank you very much for your reply, and I'm sorry for my late response.&lt;/P&gt;&lt;P&gt;I used a simple test program to solve a 4x4 linear system in OOC mode. The code is as follows:&lt;/P&gt;&lt;P&gt;I built it with Visual Studio 2022 and ran it with Intel mpiexec:&lt;/P&gt;&lt;P&gt;#include &amp;lt;iostream&amp;gt;&lt;BR /&gt;#include &amp;lt;mpi.h&amp;gt;&lt;BR /&gt;#include "mkl_cluster_sparse_solver.h"&lt;BR /&gt;#include "mkl_types.h"&lt;BR /&gt;#include &amp;lt;vector&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;int main(int argc, char* argv[]) {&lt;BR /&gt;MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;BR /&gt;int myrank;&lt;BR /&gt;MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;myrank);&lt;/P&gt;&lt;P&gt;// Problem size&lt;BR /&gt;MKL_INT n = 4; // Small example; for true OOC test, use a much larger system.&lt;BR /&gt;// SPD CSR&lt;BR /&gt;MKL_INT ia[5] = { 1, 3, 6, 9, 11 };&lt;BR /&gt;MKL_INT ja[10] = { 1, 2, 1, 2, 3, 2, 3, 4, 3, 4 };&lt;BR /&gt;double a[10] = { 4, -1, -1, 4, -1, -1, 4, -1, -1, 3 };&lt;BR /&gt;double b[4] = { 1.0, 2.0, 3.0, 4.0 };&lt;BR /&gt;double x[4] = { 0.0 };&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;// Pardiso internal data&lt;BR /&gt;void* pt[64] = { 0 };&lt;BR /&gt;MKL_INT iparm[64] = { 0 };&lt;BR /&gt;MKL_INT maxfct = 1, mnum = 1, phase, error = 0, msglvl = 1;&lt;BR /&gt;MKL_INT mtype = 2; // Real symmetric positive definite&lt;BR /&gt;MKL_INT nrhs = 1;&lt;/P&gt;&lt;P&gt;// Set iparm values&lt;BR /&gt;for (int i = 0; i &amp;lt; 64; i++) iparm[i] = 0;&lt;BR /&gt;iparm[0] = 1; // No solver default&lt;BR /&gt;iparm[1] = 2; // Fill-in reordering from METIS&lt;BR /&gt;iparm[7] = 0; // Max number of iterative refinement steps&lt;BR /&gt;iparm[59] = 2; // Enable OOC mode&lt;BR /&gt;iparm[10] = 0;&lt;BR /&gt;iparm[12] = 0;&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "2" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;MPI_Comm comm = MPI_COMM_WORLD;&lt;BR /&gt;// Phase 11: Reordering and
Symbolic Factorization&lt;BR /&gt;phase = 11;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "3" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase 22: Numerical factorization&lt;BR /&gt;phase = 22;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "4" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase 33: Back substitution&lt;BR /&gt;phase = 33;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;if (myrank == 0) {&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "Solution x:\n";&lt;BR /&gt;for (int i = 0; i &amp;lt; n; i++) std::cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; " ";&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "\n";&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "5" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase -1: Release internal memory&lt;BR /&gt;phase = -1;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "6" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;MPI_Finalize();&lt;BR /&gt;return 0;&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried running with one MPI process and with OpenMP disabled, but it didn't work. The error is still as follows, and the _lnz_0_0.bin file generated in the folder is 0 KB.&lt;/P&gt;&lt;P&gt;2&lt;BR /&gt;Memory allocated on phase 11 0.0014 MB&lt;BR /&gt;3&lt;BR /&gt;Memory allocated on phase 22 3072.0024 MB&lt;/P&gt;&lt;P&gt;Percentage of computed non-zeros for LL^T factorization&lt;BR /&gt;25 %&lt;BR /&gt;50 %&lt;BR /&gt;100 %&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 0 PID 28436 RUNNING AT DESKTOP-4UVO226&lt;BR /&gt;= EXIT STATUS: -1073740791 (c0000409)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Looking forward to your next reply!&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jun 2025 08:25:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1694297#M37185</guid>
      <dc:creator>Liwufan</dc:creator>
      <dc:date>2025-06-03T08:25:47Z</dc:date>
    </item>
    <item>
      <title>Re: Some questions about using Cluster PARDISO OOC mode (iparm[59]=2)</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1694298#M37186</link>
      <description>&lt;P&gt;First of all, I'm glad you responded, and I'm sorry for my late response.&lt;BR /&gt;I used a simple test program to solve a 4x4 linear system in OOC mode. The code is as follows:&lt;BR /&gt;I built it with Visual Studio 2022 and ran it with Intel mpiexec:&lt;/P&gt;&lt;P&gt;#include &amp;lt;iostream&amp;gt;&lt;BR /&gt;#include &amp;lt;mpi.h&amp;gt;&lt;BR /&gt;#include "mkl_cluster_sparse_solver.h"&lt;BR /&gt;#include "mkl_types.h"&lt;BR /&gt;#include &amp;lt;vector&amp;gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;int main(int argc, char* argv[]) {&lt;BR /&gt;MPI_Init(&amp;amp;argc, &amp;amp;argv);&lt;BR /&gt;int myrank;&lt;BR /&gt;MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;myrank);&lt;/P&gt;&lt;P&gt;// Problem size&lt;BR /&gt;MKL_INT n = 4; // Small example; for true OOC test, use a much larger system.&lt;BR /&gt;// SPD CSR&lt;BR /&gt;MKL_INT ia[5] = { 1, 3, 6, 9, 11 };&lt;BR /&gt;MKL_INT ja[10] = { 1, 2, 1, 2, 3, 2, 3, 4, 3, 4 };&lt;BR /&gt;double a[10] = { 4, -1, -1, 4, -1, -1, 4, -1, -1, 3 };&lt;BR /&gt;double b[4] = { 1.0, 2.0, 3.0, 4.0 };&lt;BR /&gt;double x[4] = { 0.0,0.0,0.0,0.0 };&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;// Pardiso internal data&lt;BR /&gt;void* pt[64] = { 0 };&lt;BR /&gt;MKL_INT iparm[64] = { 0 };&lt;BR /&gt;MKL_INT maxfct = 1, mnum = 1, phase, error = 0, msglvl = 1;&lt;BR /&gt;MKL_INT mtype = 2; // Real symmetric positive definite&lt;BR /&gt;MKL_INT nrhs = 1;&lt;/P&gt;&lt;P&gt;// Set iparm values&lt;BR /&gt;for (int i = 0; i &amp;lt; 64; i++) iparm[i] = 0;&lt;BR /&gt;iparm[0] = 1; // No solver default&lt;BR /&gt;iparm[1] = 2; // Fill-in reordering from METIS&lt;BR /&gt;iparm[7] = 0; // Max number of iterative refinement steps&lt;BR /&gt;iparm[59] = 2; // Enable OOC mode&lt;BR /&gt;iparm[10] = 0;&lt;BR /&gt;iparm[12] = 0;&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "2" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;MPI_Comm comm = MPI_COMM_WORLD;&lt;BR /&gt;// Phase 11: Reordering and Symbolic Factorization&lt;BR /&gt;phase
= 11;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "3" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase 22: Numerical factorization&lt;BR /&gt;phase = 22;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "4" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase 33: Back substitution&lt;BR /&gt;phase = 33;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;if (myrank == 0) {&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "Solution x:\n";&lt;BR /&gt;for (int i = 0; i &amp;lt; n; i++) std::cout &amp;lt;&amp;lt; x[i] &amp;lt;&amp;lt; " ";&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "\n";&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;std::cout &amp;lt;&amp;lt; "5" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;// Phase -1: Release internal memory&lt;BR /&gt;phase = -1;&lt;BR /&gt;cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;&amp;amp;n, a, ia, ja, NULL, &amp;amp;nrhs,&lt;BR /&gt;iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error);&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;std::cout &amp;lt;&amp;lt; "6" &amp;lt;&amp;lt; std::endl;&lt;BR /&gt;MPI_Finalize();&lt;BR /&gt;return 0;&lt;BR /&gt;}&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I tried using a single MPI process and running with OpenMP turned off, but it didn't work.&lt;BR /&gt;In addition, the _lnz_0_0.bin file generated in my folder is 0 KB, and the current error is still as follows:&lt;/P&gt;&lt;P&gt;2&lt;BR /&gt;Memory allocated on phase 11 0.0014 MB&lt;BR /&gt;3&lt;BR /&gt;Memory allocated on phase 22 3072.0024 MB&lt;/P&gt;&lt;P&gt;Percentage of computed non-zeros for LL^T factorization&lt;BR /&gt;25 %&lt;BR /&gt;50 %&lt;BR /&gt;100 %&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 0 PID 21112 RUNNING AT DESKTOP-4UVO226&lt;BR /&gt;= EXIT STATUS: -1073740791 (c0000409)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Looking forward to your next reply!&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jun 2025 08:34:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1694298#M37186</guid>
      <dc:creator>Liwufan</dc:creator>
      <dc:date>2025-06-03T08:34:15Z</dc:date>
    </item>
    <item>
      <title>Re: Some questions about using Cluster PARDISO OOC mode (iparm[59]=2)</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1696133#M37199</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you for submitting your query. The main issue with the code is that your matrix type is SPD (mtype=2), but you have specified the full matrix as input. For symmetric matrices, both the Cluster Sparse Solver and PARDISO expect only the upper triangular part of the matrix to be specified.&lt;/P&gt;&lt;P&gt;You can find more details in the description of ja in &lt;A href="https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2025-1/cluster-sparse-solver.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/docs/onemkl/developer-reference-c/2025-1/cluster-sparse-solver.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Therefore, your CSR matrix input should be:&lt;/P&gt;&lt;P&gt;MKL_INT ia[5] = { 1, 3, 5, 7, 8 };&lt;BR /&gt;MKL_INT ja[7] = { 1, 2, 2, 3, 3, 4, 4 };&lt;BR /&gt;double a[7] = { 4, -1, 4, -1, 4, -1, 3 };&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Hope it helps.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Kind Regards,&lt;/P&gt;&lt;P&gt;Chris&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jun 2025 13:55:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Some-questions-in-use-cluster-pardiso-OOC-mode-iparm-59-2/m-p/1696133#M37199</guid>
      <dc:creator>c_sim</dc:creator>
      <dc:date>2025-06-10T13:55:07Z</dc:date>
    </item>
  </channel>
</rss>

