<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic I will try to join up with in Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135715#M25979</link>
    <description>&lt;P&gt;I will try to join up with the beta program. This would save me a large headache.&lt;/P&gt;</description>
    <pubDate>Fri, 01 Sep 2017 20:05:15 GMT</pubDate>
    <dc:creator>William_D_2</dc:creator>
    <dc:date>2017-09-01T20:05:15Z</dc:date>
    <item>
      <title>Direct Sparse Solver for Clusters - Pardiso Memory Allocation Error</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135708#M25972</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;Once again I am having some trouble with the Direct Sparse Solver for Clusters. I am getting the following error when running on a single process:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;entering matrix solver
*** Error in PARDISO  (     insufficient_memory) error_num= 1
*** Error in PARDISO memory allocation: MATCHING_REORDERING_DATA, allocation of 1 bytes failed
total memory wanted here: 142 kbyte

=== PARDISO: solving a real structurally symmetric system ===
1-based array indexing is turned ON
PARDISO double precision computation is turned ON
METIS algorithm at reorder step is turned ON


Summary: ( reordering phase )
================

Times:
======
Time spent in calculations of symmetric matrix portrait (fulladj): 0.000005 s
Time spent in reordering of the initial matrix (reorder)         : 0.000000 s
Time spent in symbolic factorization (symbfct)                   : 0.000000 s
Time spent in allocation of internal data structures (malloc)    : 0.000465 s
Time spent in additional calculations                            : 0.000080 s
Total time spent                                                 : 0.000550 s

Statistics:
===========
Parallel Direct Factorization is running on 1 OpenMP

&amp;lt; Linear system Ax = b &amp;gt;
             number of equations:           6
             number of non-zeros in A:      8
             number of non-zeros in A (%): 22.222222

             number of right-hand sides:    1

&amp;lt; Factors L and U &amp;gt;
             number of columns for each panel: 128
             number of independent subgraphs:  0
&amp;lt; Preprocessing with state of the art partitioning metis&amp;gt;
             number of supernodes:                    0
             size of largest supernode:               0
             number of non-zeros in L:                0
             number of non-zeros in U:                0
             number of non-zeros in L+U:              0

ERROR during solution: 4294967294
&lt;/PRE&gt;

&lt;P&gt;It just hangs when running on more than one process. Below is the CSR format of my matrix and the provided RHS to solve for:&lt;/P&gt;

&lt;P&gt;CSR row values&lt;BR /&gt;
	0&lt;BR /&gt;
	2&lt;BR /&gt;
	6&lt;BR /&gt;
	9&lt;BR /&gt;
	12&lt;BR /&gt;
	16&lt;BR /&gt;
	18&lt;/P&gt;

&lt;P&gt;CSR col values&lt;BR /&gt;
	0&lt;BR /&gt;
	1&lt;BR /&gt;
	0&lt;BR /&gt;
	1&lt;BR /&gt;
	2&lt;BR /&gt;
	3&lt;BR /&gt;
	1&lt;BR /&gt;
	2&lt;BR /&gt;
	4&lt;BR /&gt;
	1&lt;BR /&gt;
	3&lt;BR /&gt;
	4&lt;BR /&gt;
	2&lt;BR /&gt;
	3&lt;BR /&gt;
	4&lt;BR /&gt;
	5&lt;BR /&gt;
	4&lt;BR /&gt;
	5&lt;/P&gt;

&lt;P&gt;Rank 0 rhs vector :&lt;BR /&gt;
	1&lt;BR /&gt;
	0&lt;BR /&gt;
	0&lt;BR /&gt;
	0&lt;BR /&gt;
	0&lt;BR /&gt;
	1&lt;/P&gt;

&lt;P&gt;Now my calling file looks like:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;void SolveMatrixEquations(MKL_INT numRows, MatrixPointerStruct &amp;amp;cArrayStruct, const std::pair&amp;lt;MKL_INT,MKL_INT&amp;gt;&amp;amp; rowExtents)
{
	
	double pressureSolveTime = -omp_get_wtime();

	MKL_INT mtype = 1;  /* set matrix type to "real structurally symmetric" */
	MKL_INT nrhs = 1;  /* number of right hand sides. */

	void *pt[64] = { 0 }; //internal memory Pointer

						  /* Cluster Sparse Solver control parameters. */
	MKL_INT iparm[64] = { 0 };
	MKL_INT maxfct, mnum, phase=13, msglvl, error;

	/* Auxiliary variables. */
	float   ddum; /* float dummy   */
	MKL_INT idum; /* Integer dummy. */
	MKL_INT i, j;

	/* -------------------------------------------------------------------- */
	/* .. Init MPI.                                                         */
	/* -------------------------------------------------------------------- */
	
	int     mpi_stat = 0;
	int     comm, rank;
	mpi_stat = MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);
	comm = MPI_Comm_c2f(MPI_COMM_WORLD);

	/* -------------------------------------------------------------------- */
	/* .. Setup Cluster Sparse Solver control parameters.                                 */
	/* -------------------------------------------------------------------- */
	iparm[0] = 0; /* Solver default parameters overridden with provided by iparm */
	iparm[1] = 3; /* Use METIS for fill-in reordering */
	//iparm[1] = 10; /* Use parMETIS for fill-in reordering */
	iparm[5] = 0; /* Write solution into x */
	iparm[7] = 2; /* Max number of iterative refinement steps */
	iparm[9] = 8; /* Perturb the pivot elements with 1E-8 */
	iparm[10] = 0; /* Don't use non-symmetric permutation and scaling MPS */
	iparm[12] = 0; /* Maximum Weighted Matching algorithm switched off */
	iparm[17] = 0; /* Output: Number of non-zeros in the factor LU */
	iparm[18] = 0; /* Output: Mflops for LU factorization */
	iparm[20] = 0; /*change pivoting for use in symmetric indefinite matrices*/
	iparm[26] = 1;
	iparm[27] = 0; /* Double precision mode of Cluster Sparse Solver */
	iparm[34] = 1; /* Cluster Sparse Solver use C-style indexing for ia and ja arrays */

	iparm[39] = 2; /* Input: matrix/rhs/solution stored on master */
	iparm[40] = rowExtents.first+1;
	iparm[41] = rowExtents.second+1; 
	maxfct = 3; /* Maximum number of numerical factorizations. */
	mnum = 1; /* Which factorization to use. */
	msglvl = 1; /* Print statistical information in file */
	error = 0; /* Initialize error flag */
	//cout &amp;lt;&amp;lt; "Rank " &amp;lt;&amp;lt; rank &amp;lt;&amp;lt; ": " &amp;lt;&amp;lt; iparm[40] &amp;lt;&amp;lt; " " &amp;lt;&amp;lt; iparm[41] &amp;lt;&amp;lt; endl;
#ifdef UNIT_TESTS
	//msglvl = 0;
#endif




	phase = 11;
	#ifndef UNIT_TESTS
	if (rank == 0)printf("Restructuring system...\n");
	cout &amp;lt;&amp;lt; "Restructuring system...\n" &amp;lt;&amp;lt; endl;
	#endif

	cluster_sparse_solver(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
		&amp;amp;numRows, &amp;amp;ddum, cArrayStruct.rowIndexArray, cArrayStruct.colIndexArray, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl,
		&amp;amp;ddum, &amp;amp;ddum, &amp;amp;comm, &amp;amp;error);
	if (error != 0)
	{
		cout &amp;lt;&amp;lt; "\nERROR during solution: " &amp;lt;&amp;lt; error &amp;lt;&amp;lt; endl;
		exit(error);
	}


	phase = 23;

#ifndef UNIT_TESTS
//	if (rank == 0) printf("\nSolving system...\n");
	printf("\nSolving system...\n");
#endif

	cluster_sparse_solver_64(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
		&amp;amp;numRows, cArrayStruct.valArray, cArrayStruct.rowIndexArray, cArrayStruct.colIndexArray, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl,
		cArrayStruct.rhsVector, cArrayStruct.pressureSolutionVector, &amp;amp;comm, &amp;amp;error);
	if (error != 0)
	{
		cout &amp;lt;&amp;lt; "\nERROR during solution: " &amp;lt;&amp;lt; error &amp;lt;&amp;lt; endl;
		exit(error);
	}

	phase = -1; /* Release internal memory. */
	cluster_sparse_solver_64(pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
		&amp;amp;numRows, &amp;amp;ddum, cArrayStruct.rowIndexArray, cArrayStruct.colIndexArray, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl, &amp;amp;ddum, &amp;amp;ddum, &amp;amp;comm, &amp;amp;error);
	if (error != 0)
	{
		cout &amp;lt;&amp;lt; "\nERROR during release memory: " &amp;lt;&amp;lt; error &amp;lt;&amp;lt; endl;
		exit(error);
	}
	/* Check residual */

	pressureSolveTime += omp_get_wtime();


#ifndef UNIT_TESTS
	//cout &amp;lt;&amp;lt; "Pressure Solve Time: " &amp;lt;&amp;lt; pressureSolveTime &amp;lt;&amp;lt; endl;
#endif
	
	//TestPrintCsrMatrix(cArrayStruct,rowExtents.second-rowExtents.first +1);
}&lt;/PRE&gt;

&lt;P&gt;This is based on the format of one of the examples. Now I am trying to use the ILP64 interface because my example system is very large (16 billion non-zeros). I am using the Intel C++ Compiler 2017, part of the Intel Composer XE Cluster Edition Update 1, with the following link lines in my CMake files:&lt;/P&gt;

&lt;PRE class="brush:plain;"&gt;TARGET_COMPILE_OPTIONS(${MY_TARGET_NAME} PUBLIC "-mkl:cluster"  "-DMKL_ILP64" "-I$ENV{MKLROOT}/include")
TARGET_LINK_LIBRARIES(${MY_TARGET_NAME} "-Wl,--start-group $ENV{MKLROOT}/lib/intel64/libmkl_intel_ilp64.a $ENV{MKLROOT}/lib/intel64/libmkl_intel_thread.a $ENV{MKLROOT}/lib/intel64/libmkl_core.a $ENV{MKLROOT}/lib/intel64/libmkl_blacs_intelmpi_ilp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl")&lt;/PRE&gt;

&lt;P&gt;What is interesting is that this same code runs perfectly fine on my Windows development machine. Porting it to my Linux cluster is causing issues. Any ideas?&lt;/P&gt;

&lt;P&gt;I am currently awaiting the terribly long download of the Update 4 Composer XE package, but I don't have much hope of that fixing it, because this code used to run fine on this system.&lt;/P&gt;</description>
      <pubDate>Sun, 25 Jun 2017 19:16:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135708#M25972</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-06-25T19:16:19Z</dc:date>
    </item>
    <item>
      <title>Having a similar problem with</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135709#M25973</link>
      <description>&lt;P&gt;Having a similar problem with the function mkl_dcsrcoo()&lt;/P&gt;

&lt;P&gt;Input COO matrix (row, column, value):&lt;/P&gt;

&lt;P&gt;0 0 1&lt;BR /&gt;
	0 1 0&lt;BR /&gt;
	1 0 52745.6&lt;BR /&gt;
	1 1 -135815&lt;BR /&gt;
	1 2 41534.7&lt;BR /&gt;
	1 3 41534.7&lt;BR /&gt;
	2 1 41534.7&lt;BR /&gt;
	2 2 -83069.4&lt;BR /&gt;
	2 4 41534.7&lt;BR /&gt;
	3 1 41534.7&lt;BR /&gt;
	3 3 -83069.4&lt;BR /&gt;
	3 4 41534.7&lt;BR /&gt;
	4 2 41534.7&lt;BR /&gt;
	4 3 41534.7&lt;BR /&gt;
	4 4 -135815&lt;BR /&gt;
	4 5 52745.6&lt;BR /&gt;
	5 4 52745.6&lt;BR /&gt;
	5 5 -52745.6&lt;/P&gt;

&lt;P&gt;Output CSR row indexes: 17179869184 30064771079 30064771079 7 0 0 0&lt;/P&gt;

</description>
      <pubDate>Sun, 25 Jun 2017 21:48:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135709#M25973</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-06-25T21:48:50Z</dc:date>
    </item>
    <item>
      <title>Hi William,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135710#M25974</link>
      <description>&lt;P&gt;Hi William,&lt;/P&gt;

&lt;P&gt;I checked the main program with your input parameters. I haven't seen the exact error you see, but there seem to be some issues with these parameters.&lt;/P&gt;

&lt;P&gt;For example, iparm[34] = 1 means 0-based indexing: /* Cluster Sparse Solver uses C-style indexing for ia and ja arrays */&lt;/P&gt;

&lt;P&gt;But the solver's output reports 1-based indexing:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;=== PARDISO: solving a real structurally symmetric system ===
1-based array indexing is turned ON&lt;/PRE&gt;

&lt;P&gt;I attached the main code. Please have a check and let me know if it works on your environment.&lt;/P&gt;

&lt;P&gt;Best Regards,&lt;/P&gt;

&lt;P&gt;Ying&lt;/P&gt;

&lt;P&gt;Build command:&lt;/P&gt;

&lt;PRE class="brush:plain;"&gt;yhu5@kbl01-ub:~/Cluster_pardiso/cluster_sparse_solverc$ mpiicc -Wall -DMKL_ILP64 -I/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/include w_solver.cpp -Wl,--start-group "/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/lib/intel64"/libmkl_blacs_intelmpi_ilp64.a "/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/lib/intel64"/libmkl_intel_ilp64.a "/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/lib/intel64"/libmkl_core.a "/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/lib/intel64"/libmkl_intel_thread.a -Wl,--end-group -L "/opt/intel/compilers_and_libraries_2018.0.098/linux/mkl/../compiler/lib/intel64" -liomp5 -mt_mpi -lm&lt;/PRE&gt;


&lt;P&gt;And run command:&lt;/P&gt;

&lt;PRE class="brush:plain;"&gt;yhu5@kbl01-ub:~/Cluster_pardiso/cluster_sparse_solverc$ mpirun -n 1 ./a.out
Restructuring system...&lt;/PRE&gt;

&lt;P&gt;=== PARDISO: solving a real structurally symmetric system ===&lt;BR /&gt;
	Matrix checker is turned ON&lt;BR /&gt;
	0-based array is turned ON&lt;BR /&gt;
	PARDISO double precision computation is turned ON&lt;BR /&gt;
	Parallel METIS algorithm at reorder step is turned ON&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Summary: ( reordering phase )&lt;BR /&gt;
	================&lt;/P&gt;

&lt;P&gt;Times:&lt;BR /&gt;
	======&lt;BR /&gt;
	Time spent in calculations of symmetric matrix portrait (fulladj): 0.000023 s&lt;BR /&gt;
	Time spent in reordering of the initial matrix (reorder)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000041 s&lt;BR /&gt;
	Time spent in symbolic factorization (symbfct)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000270 s&lt;BR /&gt;
	Time spent in data preparations for factorization (parlist)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000000 s&lt;BR /&gt;
	Time spent in allocation of internal data structures (malloc)&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000052 s&lt;BR /&gt;
	Time spent in additional calculations&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000015 s&lt;BR /&gt;
	Total time spent&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000401 s&lt;/P&gt;

&lt;P&gt;Statistics:&lt;BR /&gt;
	===========&lt;BR /&gt;
	Parallel Direct Factorization is running on 4 OpenMP&lt;/P&gt;

&lt;P&gt;&amp;lt; Linear system Ax = b &amp;gt;&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of equations:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in A:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 18&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in A (%): 50.000000&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of right-hand sides:&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&lt;/P&gt;

&lt;P&gt;&amp;lt; Factors L and U &amp;gt;&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of columns for each panel: 128&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of independent subgraphs:&amp;nbsp; 0&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of supernodes:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 3&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size of largest supernode:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 3&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in L:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 19&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in U:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in L+U:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 24&lt;/P&gt;

&lt;P&gt;Reordering completed ...&lt;BR /&gt;
	Solving system...&lt;BR /&gt;
	=== PARDISO is running in In-Core mode, because iparam(60)=0 ===&lt;/P&gt;

&lt;P&gt;Percentage of computed non-zeros for LL^T factorization&lt;BR /&gt;
	&amp;nbsp;42 %&amp;nbsp; 52 %&amp;nbsp; 100 %&lt;/P&gt;

&lt;P&gt;=== PARDISO: solving a real structurally symmetric system ===&lt;BR /&gt;
	Single-level factorization algorithm is turned ON&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Summary: ( starting phase is factorization, ending phase is solution )&lt;BR /&gt;
	================&lt;/P&gt;

&lt;P&gt;Times:&lt;BR /&gt;
	======&lt;BR /&gt;
	Time spent in copying matrix to internal data structure (A to LU): 0.000000 s&lt;BR /&gt;
	Time spent in factorization step (numfct)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000078 s&lt;BR /&gt;
	Time spent in direct solver at solve step (solve)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000021 s&lt;BR /&gt;
	Time spent in allocation of internal data structures (malloc)&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000036 s&lt;BR /&gt;
	Time spent in additional calculations&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000001 s&lt;BR /&gt;
	Total time spent&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; : 0.000136 s&lt;/P&gt;

&lt;P&gt;Statistics:&lt;BR /&gt;
	===========&lt;BR /&gt;
	Parallel Direct Factorization is running on 4 OpenMP&lt;/P&gt;

&lt;P&gt;&amp;lt; Linear system Ax = b &amp;gt;&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of equations:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in A:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 18&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in A (%): 50.000000&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of right-hand sides:&amp;nbsp;&amp;nbsp;&amp;nbsp; 1&lt;/P&gt;

&lt;P&gt;&amp;lt; Factors L and U &amp;gt;&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of columns for each panel: 128&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of independent subgraphs:&amp;nbsp; 0&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of supernodes:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 3&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size of largest supernode:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 3&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in L:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 19&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in U:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; number of non-zeros in L+U:&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 24&lt;BR /&gt;
	&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; gflop&amp;nbsp;&amp;nbsp; for the numerical factorization: 0.000000&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; gflop/s for the numerical factorization: 0.000718&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	The solution of the system is:&lt;BR /&gt;
	&amp;nbsp;x [0] =&amp;nbsp; 0.000000&lt;BR /&gt;
	&amp;nbsp;x [1] =&amp;nbsp; 0.000000&lt;BR /&gt;
	&amp;nbsp;x [2] =&amp;nbsp; 0.000000&lt;BR /&gt;
	&amp;nbsp;x [3] =&amp;nbsp; 0.000000&lt;BR /&gt;
	&amp;nbsp;x [4] =&amp;nbsp; 0.000000&lt;BR /&gt;
	&amp;nbsp;x [5] =&amp;nbsp; 0.000000&lt;BR /&gt;
	Relative residual = -nan&lt;/P&gt;

&lt;P&gt;&amp;nbsp;TEST PASSED&lt;/P&gt;

</description>
      <pubDate>Mon, 10 Jul 2017 04:07:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135710#M25974</guid>
      <dc:creator>Ying_H_Intel</dc:creator>
      <dc:date>2017-07-10T04:07:00Z</dc:date>
    </item>
    <item>
      <title>OK so I tried running your</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135711#M25975</link>
      <description>&lt;P&gt;OK so I tried running your solver. And it works in its current state. But i have tried&amp;nbsp;&lt;SPAN style="font-size: 1em;"&gt;setting iparm[1]=10 causes a failure in the msg system. although I am still getting a 0 residual. This is strange to me.&lt;/SPAN&gt;&lt;/P&gt;

&lt;P&gt;Also, the code seems to work fine on smaller matrices of my data. It fails on larger matrices; this was my motivation for sending test data.&lt;/P&gt;

&lt;P&gt;I also have some more sample codes, from the Intel MKL examples, that fail when iparm[1] = 10. I am using the 2017 Update 4 compilers, not the 2018 edition.&lt;/P&gt;

&lt;P&gt;The compile line is similar to yours; I just changed it to use the environment variable MKLROOT:&lt;/P&gt;

&lt;P&gt;Serial Version&lt;/P&gt;

&lt;P&gt;Compile line:&lt;/P&gt;

&lt;PRE class="brush:plain;"&gt;mpiicpc -Wall -DMKL_ILP64 -I$MKLROOT/include cl_solver_unsym_c.c -Wl,--start-group "$MKLROOT/lib/intel64"/libmkl_blacs_intelmpi_ilp64.a "$MKLROOT/lib/intel64"/libmkl_intel_ilp64.a "$MKLROOT/lib/intel64"/libmkl_core.a "$MKLROOT/lib/intel64"/libmkl_intel_thread.a -Wl,--end-group -L "$MKLROOT/../compiler/lib/intel64" -liomp5 -mt_mpi -lm&lt;/PRE&gt;

&lt;P&gt;Output -&lt;/P&gt;

&lt;P&gt;ERROR during symbolic factorization: -2&lt;BR /&gt;
	&amp;nbsp;TEST FAILED&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;Distributed Version&lt;/P&gt;

&lt;P&gt;Compile line:&lt;/P&gt;

&lt;PRE class="brush:plain;"&gt;mpiicpc -Wall -DMKL_ILP64 -I$MKLROOT/include cl_solver_unsym_distr_c.c -Wl,--start-group "$MKLROOT/lib/intel64"/libmkl_blacs_intelmpi_ilp64.a "$MKLROOT/lib/intel64"/libmkl_intel_ilp64.a "$MKLROOT/lib/intel64"/libmkl_core.a "$MKLROOT/lib/intel64"/libmkl_intel_thread.a -Wl,--end-group -L "$MKLROOT/../compiler/lib/intel64" -liomp5 -mt_mpi -lm&lt;/PRE&gt;

&lt;P&gt;Output -&amp;nbsp;&lt;/P&gt;

&lt;P&gt;===================================================================================&lt;BR /&gt;
	= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;
	= &amp;nbsp; PID 43904 RUNNING AT smic1&lt;BR /&gt;
	= &amp;nbsp; EXIT CODE: 11&lt;BR /&gt;
	= &amp;nbsp; CLEANING UP REMAINING PROCESSES&lt;BR /&gt;
	= &amp;nbsp; YOU CAN IGNORE THE BELOW CLEANUP MESSAGES&lt;BR /&gt;
	===================================================================================&lt;/P&gt;

&lt;P&gt;===================================================================================&lt;BR /&gt;
	&amp;nbsp; &amp;nbsp;Intel(R) MPI Library troubleshooting guide:&lt;BR /&gt;
	&amp;nbsp; &amp;nbsp; &amp;nbsp; &lt;A href="https://software.intel.com/node/561764" target="_blank"&gt;https://software.intel.com/node/561764&lt;/A&gt;&lt;BR /&gt;
	===================================================================================&lt;/P&gt;

&lt;P&gt;If this works on your machine then there might be a problem with the underlying system, which I have minimal control over. If that is the case, does the Intel compiler use system-installed libraries for C++/C/Fortran? If so, what versions do you have? Also, what version of Linux? We are on Red Hat Enterprise Linux.&lt;/P&gt;

</description>
      <pubDate>Mon, 10 Jul 2017 13:47:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135711#M25975</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-07-10T13:47:43Z</dc:date>
    </item>
    <item>
      <title>Hi William,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135712#M25976</link>
      <description>&lt;P&gt;Hi William,&lt;/P&gt;

&lt;P&gt;Just to let you know, I can reproduce the problems you reported.&lt;/P&gt;

&lt;P&gt;For example:&lt;/P&gt;

&lt;OL&gt;
	&lt;LI&gt;
		&lt;P&gt;Building cl_solver_unsym_distr_c.c gives: EXIT CODE: 11&lt;/P&gt;
	&lt;/LI&gt;
	&lt;LI&gt;
		&lt;P&gt;Building cl_solver_unsym_c.c gives: ERROR during symbolic factorization: -2&lt;BR /&gt;
			TEST FAILED&lt;/P&gt;
	&lt;/LI&gt;
&lt;/OL&gt;

&lt;P&gt;The issue has been escalated to our developers; I will keep you updated if there is any news.&lt;/P&gt;

&lt;P&gt;Best Regards,&lt;/P&gt;

&lt;P&gt;Ying&lt;/P&gt;</description>
      <pubDate>Wed, 26 Jul 2017 03:24:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135712#M25976</guid>
      <dc:creator>Ying_H_Intel</dc:creator>
      <dc:date>2017-07-26T03:24:16Z</dc:date>
    </item>
    <item>
      <title>Ying,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135713#M25977</link>
      <description>&lt;P&gt;Ying,&lt;/P&gt;

&lt;P&gt;Sorry to bump this, but I am getting to the point of no return on my project. Do you have any updates on the progress of this fix? If it is not going to be repaired soon, I will need to switch solvers (and rework a significant portion of my program).&lt;/P&gt;

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Will&lt;/P&gt;</description>
      <pubDate>Tue, 29 Aug 2017 02:21:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135713#M25977</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-08-29T02:21:27Z</dc:date>
    </item>
    <item>
      <title>Hi Will,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135714#M25978</link>
      <description>&lt;P&gt;Hi Will,&lt;/P&gt;

&lt;P&gt;The issue is fixed. It is targeted to be released in MKL 2018 Update 1. Watch for the release announcement in this forum.&lt;/P&gt;

&lt;P&gt;If you have any questions, please go to the Online Service Center at &lt;A href="http://www.intel.com/supporttickets"&gt;http://www.intel.com/supporttickets&lt;/A&gt; for more information.&lt;/P&gt;

&lt;P&gt;Thanks&lt;/P&gt;

&lt;P&gt;Ying&lt;/P&gt;</description>
      <pubDate>Fri, 01 Sep 2017 00:54:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135714#M25978</guid>
      <dc:creator>Ying_H_Intel</dc:creator>
      <dc:date>2017-09-01T00:54:38Z</dc:date>
    </item>
    <item>
      <title>I will try to join up with</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135715#M25979</link>
      <description>&lt;P&gt;I will try to join up with the beta program. This would save me a large headache.&lt;/P&gt;</description>
      <pubDate>Fri, 01 Sep 2017 20:05:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135715#M25979</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-09-01T20:05:15Z</dc:date>
    </item>
    <item>
      <title>Ok last question, related to</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135716#M25980</link>
      <description>&lt;P&gt;Ok last question, related to the beta program. How long before this fix will appear in the beta tests. I just installed the beta compilers and am still getting the same error with your example.&lt;/P&gt;</description>
      <pubDate>Sun, 03 Sep 2017 20:10:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135716#M25980</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-09-03T20:10:28Z</dc:date>
    </item>
    <item>
      <title>William,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135717#M25981</link>
      <description>&lt;P&gt;William,&lt;/P&gt;

&lt;P&gt;The beta program finished a month ago. Right now we are in the final stage of preparing the newest version of MKL 2018, which we plan to release within a few weeks. We will post an announcement at the top of this forum when that happens.&lt;/P&gt;

&lt;P&gt;wbr, Gennady&lt;/P&gt;</description>
      <pubDate>Mon, 04 Sep 2017 16:06:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135717#M25981</guid>
      <dc:creator>Gennady_F_Intel</dc:creator>
      <dc:date>2017-09-04T16:06:53Z</dc:date>
    </item>
    <item>
      <title>Ok last bump I promise.</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135718#M25982</link>
      <description>&lt;P&gt;Ok last bump I promise.&lt;/P&gt;

&lt;P&gt;Was there any chance the patch was released as part of 2017 Update 5? I am kind of under the gun to complete my project, and I think the Intel solver is the only way to do it.&lt;/P&gt;
      <pubDate>Sun, 08 Oct 2017 17:26:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135718#M25982</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-10-08T17:26:10Z</dc:date>
    </item>
    <item>
      <title>Hi William,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135719#M25983</link>
      <description>&lt;P&gt;Hi William,&lt;/P&gt;

&lt;P&gt;I checked again. The issue is supposed to be fixed in 2018 Update 1, which should be ready in November. Will that work for your project?&lt;/P&gt;

&lt;P&gt;If it is very urgent, please create a ticket at the official Online Service Center: &lt;A href="https://software.intel.com/en-us/support/online-service-center"&gt;https://software.intel.com/en-us/support/online-service-center&lt;/A&gt;.&lt;/P&gt;

&lt;P&gt;Best Regards,&lt;/P&gt;

&lt;P&gt;Ying&lt;/P&gt;</description>
      <pubDate>Wed, 11 Oct 2017 06:00:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135719#M25983</guid>
      <dc:creator>Ying_H_Intel</dc:creator>
      <dc:date>2017-10-11T06:00:09Z</dc:date>
    </item>
    <item>
      <title>OK ran the posted examples</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135720#M25984</link>
      <description>&lt;P&gt;OK ran the posted examples that are broken, again using the 2018 update 1 Parallel Studio XE Cluster Edition. It failed on both Windows and Linux. Are we sure the patch got released?&lt;/P&gt;</description>
      <pubDate>Fri, 17 Nov 2017 16:03:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Direct-Sparse-Solver-for-Clusters-Pardiso-Memory-Allocation/m-p/1135720#M25984</guid>
      <dc:creator>William_D_2</dc:creator>
      <dc:date>2017-11-17T16:03:24Z</dc:date>
    </item>
  </channel>
</rss>