<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Guidance on integrating cluster_sparse_solver into my application - Intel® oneAPI Math Kernel Library</title>
    <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106624#M24176</link>
    <description>&lt;P&gt;I made some minor modifications to the Intel-provided example,&amp;nbsp;cl_solver_sym_sp_0_based_c. I added an if statement at lines 50 and 209. It runs fine with np 1, but with np 2 it gives errors.&lt;/P&gt;

&lt;P&gt;Please see the attached file.&lt;/P&gt;</description>
    <pubDate>Wed, 24 Aug 2016 16:30:51 GMT</pubDate>
    <dc:creator>Ferris_H_</dc:creator>
    <dc:date>2016-08-24T16:30:51Z</dc:date>
    <item>
      <title>Guidance on integrating cluster_sparse_solver into my application</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106620#M24172</link>
      <description>&lt;P&gt;I am trying to integrate &lt;SPAN class="option"&gt;cluster_sparse_solver&lt;/SPAN&gt; into my application; however, I am confused by this in the documentation:&lt;/P&gt;

&lt;H3 class="NoteTipHead"&gt;Note&lt;/H3&gt;

&lt;P&gt;Most of the input parameters (except for the &lt;VAR class="varname"&gt;pt&lt;/VAR&gt;, &lt;VAR class="varname"&gt;phase&lt;/VAR&gt;, and &lt;VAR class="varname"&gt;comm&lt;/VAR&gt; parameters and, for the distributed format, the &lt;VAR class="varname"&gt;a&lt;/VAR&gt;, &lt;VAR class="varname"&gt;ia&lt;/VAR&gt;, and &lt;VAR class="varname"&gt;ja&lt;/VAR&gt; arrays) must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, &lt;VAR class="varname"&gt;comm&lt;/VAR&gt;.&lt;/P&gt;

&lt;P&gt;I interpret this as saying that if rank = 0, all input parameters need to be defined, but if rank &amp;gt; 0, you can pass NULL values. I tried doing that, as shown in the pseudocode below, but I keep getting "ERROR during symbolic factorization: -1" when I run with np &amp;gt; 1. With np 1 it runs correctly, but only on one host.&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;int main()
{
&amp;nbsp;&amp;nbsp;&amp;nbsp; mpi_stat = MPI_Init( &amp;amp;argc, &amp;amp;argv );
&amp;nbsp;&amp;nbsp;&amp;nbsp; mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &amp;amp;rank );
&amp;nbsp;&amp;nbsp;&amp;nbsp; comm =&amp;nbsp; MPI_Comm_c2f( MPI_COMM_WORLD );

if ( rank &amp;lt; 1 ) {
read_input_file();
assemble_i_ia_ja();
call_cluster_sparse_solver();
}

else {

int i;
long long pt[64];
for(i=0;i&amp;lt;64;i++){pt&lt;I&gt;=0;}

double *aupardiso=NULL;
ITG *icolpardiso=NULL,*pointers=NULL,iparm[64];
ITG&amp;nbsp; maxfct=1,mnum=1,phase=12,nrhs=1,*perm=NULL,mtype,
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; msglvl=0,error=0,*irowpardiso=NULL, neq;

double *b=NULL,*x=NULL;

FORTRAN ( cluster_sparse_solver, ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; neq, aupardiso , pointers , icolpardiso, perm, &amp;amp;nrhs, iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error ));

}

&lt;/I&gt;&lt;/PRE&gt;

&lt;P&gt;The function call_cluster_sparse_solver contains this code:&lt;/P&gt;

&lt;PRE class="brush:;"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; mpi_stat = 0;
&amp;nbsp;&amp;nbsp;&amp;nbsp; int&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; comm, rank;
&amp;nbsp;&amp;nbsp;&amp;nbsp; mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &amp;amp;rank );
&amp;nbsp;&amp;nbsp;&amp;nbsp; comm =&amp;nbsp; MPI_Comm_c2f( MPI_COMM_WORLD );

&amp;nbsp;&amp;nbsp;&amp;nbsp; FORTRAN ( cluster_sparse_solver, ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; neq, aupardiso , pointers , icolpardiso, perm, &amp;amp;nrhs, iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error ));
&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 22 Aug 2016 18:13:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106620#M24172</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-08-22T18:13:18Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106621#M24173</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;You are correct: if you don't use the distributed format, then the pointers can be set to NULL. Can you provide the iparm data that you use on the master process?&lt;/P&gt;

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Alex&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2016 04:49:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106621#M24173</guid>
      <dc:creator>Alexander_K_Intel2</dc:creator>
      <dc:date>2016-08-23T04:49:38Z</dc:date>
    </item>
    <item>
      <title>On master process ( rank = 0</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106622#M24174</link>
      <description>&lt;P&gt;On master process ( rank = 0 ), iparm is set only as follows:&lt;/P&gt;

&lt;P&gt;ITG iparm[64];&lt;BR /&gt;
iparm[0]=0;&lt;/P&gt;

&lt;P&gt;then cluster_sparse_solver is called as follows:&lt;/P&gt;

&lt;P&gt;FORTRAN ( cluster_sparse_solver, ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,&lt;BR /&gt;
neq, aupardiso, pointers, icolpardiso, perm, &amp;amp;nrhs, iparm, &amp;amp;msglvl, b, x, &amp;amp;comm, &amp;amp;error ));&lt;/P&gt;

&lt;P&gt;Is this correct, or should iparm be different? I can try to create a simple example code that reproduces this issue.&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 23 Aug 2016 17:17:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106622#M24174</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-08-23T17:17:03Z</dc:date>
    </item>
    <item>
      <title>Hi,</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106623#M24175</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;Yes, can you provide the example?&lt;/P&gt;

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Alex&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 24 Aug 2016 01:25:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106623#M24175</guid>
      <dc:creator>Alexander_K_Intel2</dc:creator>
      <dc:date>2016-08-24T01:25:04Z</dc:date>
    </item>
    <item>
      <title>I made some minor</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106624#M24176</link>
      <description>&lt;P&gt;I made some minor modifications to the Intel-provided example,&amp;nbsp;cl_solver_sym_sp_0_based_c. I added an if statement at lines 50 and 209. It runs fine with np 1, but with np 2 it gives errors.&lt;/P&gt;

&lt;P&gt;Please see the attached file.&lt;/P&gt;</description>
      <pubDate>Wed, 24 Aug 2016 16:30:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106624#M24176</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-08-24T16:30:51Z</dc:date>
    </item>
    <item>
      <title>Just following up if someone</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106625#M24177</link>
      <description>&lt;P&gt;Just following up: has anyone had a chance to run my example code that reproduces the issue I am facing?&lt;/P&gt;</description>
      <pubDate>Tue, 06 Sep 2016 20:07:22 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106625#M24177</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-09-06T20:07:22Z</dc:date>
    </item>
    <item>
      <title>Unfortunately, I am still</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106626#M24178</link>
      <description>&lt;P&gt;Unfortunately, I am still having difficulty calling cluster_sparse_solver when rank &amp;gt; 0. Can someone share a simple example so I can compare it with my code? The documentation states:&lt;/P&gt;

&lt;P&gt;Most of the input parameters (except for the &lt;VAR class="varname"&gt;pt&lt;/VAR&gt;, &lt;VAR class="varname"&gt;phase&lt;/VAR&gt;, and &lt;VAR class="varname"&gt;comm&lt;/VAR&gt; parameters and, for the distributed format, the &lt;VAR class="varname"&gt;a&lt;/VAR&gt;, &lt;VAR class="varname"&gt;ia&lt;/VAR&gt;, and &lt;VAR class="varname"&gt;ja&lt;/VAR&gt; arrays) must be set on the master MPI process only, and ignored on other processes.&lt;/P&gt;

&lt;P&gt;How exactly do all the input parameters need to be initialized on the non-master processes? Here is what I tried, which keeps giving a segmentation fault:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;if ( rank &amp;gt; 0 ) {

    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

    /* Matrix data. */
    MKL_INT n=NULL;
    MKL_INT *ia=NULL;
    MKL_INT *ja=NULL;
    float *a=NULL;
    MKL_INT mtype=NULL;
    MKL_INT nrhs=NULL;
    float *b=NULL, *x=NULL, *bs=NULL, res=0.0, res0=0.0; /* RHS and solution vectors. */
    void *pt[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT *iparm = NULL;

    MKL_INT maxfct=NULL, mnum=NULL, phase=NULL, msglvl=NULL, error=NULL;

    /* Auxiliary variables. */
    float ddum=0.0; /* float dummy   */

    MKL_INT idum=NULL; /* Integer dummy. */

phase = 11;

printf ("got here 11");

cluster_sparse_solver ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
                &amp;amp;n, &amp;amp;a, ia, ja, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl, &amp;amp;ddum, &amp;amp;ddum,
        &amp;amp;comm, &amp;amp;error );
phase = 22;
printf ("got here 22");

cluster_sparse_solver ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
                &amp;amp;n, a, ia, ja, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl, &amp;amp;ddum, &amp;amp;ddum,
        &amp;amp;comm, &amp;amp;error );


phase = 33;
cluster_sparse_solver ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
                &amp;amp;n, a, ia, ja, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl, &amp;amp;ddum, &amp;amp;ddum,
        &amp;amp;comm, &amp;amp;error );

phase = -1;

cluster_sparse_solver ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype, &amp;amp;phase,
                &amp;amp;n, a, ia, ja, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl, &amp;amp;ddum, &amp;amp;ddum,
        &amp;amp;comm, &amp;amp;error );
}&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 24 Sep 2016 05:07:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106626#M24178</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-09-24T05:07:00Z</dc:date>
    </item>
    <item>
      <title>I finally figured it out</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106627#M24179</link>
      <description>&lt;P&gt;I finally figured it out after isolating and testing each input individually. These have to be set to 1, not NULL:&lt;/P&gt;

&lt;P&gt;&lt;CODE class="plain"&gt;MKL_INT maxfct=NULL, mnum=NULL&lt;/CODE&gt;&lt;/P&gt;

&lt;P&gt;I would suggest updating the documentation, as this is not accurate:&lt;/P&gt;

&lt;P&gt;Most of the input parameters (except for the pt, phase, and comm parameters and, for the distributed format, the a, ia, and ja arrays) must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm.&lt;/P&gt;</description>
      <pubDate>Sat, 24 Sep 2016 19:26:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1106627#M24179</guid>
      <dc:creator>Ferris_H_</dc:creator>
      <dc:date>2016-09-24T19:26:17Z</dc:date>
    </item>
    <item>
      <title>Re: Guidance on integrating cluster_sparse_solver into my application</title>
      <link>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1338636#M32335</link>
      <description>&lt;P&gt;It took me many weeks to figure out, but here is what worked for me for ranks &amp;gt; 0:&lt;/P&gt;
&lt;P&gt;Hopefully Intel can update their provided examples to include something like this. The examples they have are great for tiny matrices, but modifying them for a real application is quite a confusing process. An example for the dummy_cluster_sparse_solver would save people many weeks of work &lt;LI-EMOJI id="lia_slightly-smiling-face" title=":slightly_smiling_face:"&gt;&lt;/LI-EMOJI&gt;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;////////////////////////////////////

void dummy_cluster_sparse_solver() {

    int     mpi_stat = 0;
    int     comm, rank;

    mpi_stat = MPI_Init( NULL, NULL );
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &amp;amp;rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

    if ( rank == 0 ) { return; }

   /* Matrix data. */
    MKL_INT n;
    MKL_INT *ia;
    MKL_INT *ja;
    MKL_INT mtype;
    MKL_INT nrhs;

    double *a, *b, *x;
    long int *pt;
    long int pt_real_sym_indefinite[64] = { 0 };
    long int pt_real_symmetric[64] = { 0 };
    long int pt_real_non_symmetric[64] = { 0 };

    pt = pt_real_symmetric;

    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct = 1, mnum = 1, msglvl = 0, error = 0; /* maxfct and mnum must be 1 on every rank */
    double ddum; /* float dummy   */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT phase;
    MKL_INT matrix[2] = { 0 };

    MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );
    phase = matrix[1];

    while(( phase != 1 )){

        printf ( "\nEntering phase %lli in while loop for matrix %lli\n", (long long)phase, (long long)matrix[0] );

        FORTRAN ( cluster_sparse_solver , ( pt, &amp;amp;maxfct, &amp;amp;mnum, &amp;amp;mtype,
        &amp;amp;phase, &amp;amp;n, a, ia, ja, &amp;amp;idum, &amp;amp;nrhs, iparm, &amp;amp;msglvl,
        &amp;amp;ddum,&amp;amp;ddum, &amp;amp;comm, &amp;amp;error ));

        if ( matrix[0] == -2 ) memcpy( pt_real_sym_indefinite, pt, sizeof(pt_real_sym_indefinite));
        if ( matrix[0] == 1 ) memcpy( pt_real_symmetric, pt, sizeof(pt_real_symmetric));
        if ( matrix[0] == 11 )  memcpy( pt_real_non_symmetric, pt, sizeof(pt_real_non_symmetric));

        MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD  );
        phase = matrix[1];

        if ( matrix[0] == -2 ) pt = pt_real_sym_indefinite;
        if ( matrix[0] == 1 ) pt = pt_real_symmetric ;
        if ( matrix[0] == 11 ) pt = pt_real_non_symmetric;

    } // end while

mpi_stat = MPI_Finalize();
exit(0);

} // end function

&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 23 Nov 2021 16:51:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Guidance-on-integrating-cluster-sparse-solver-into-my/m-p/1338636#M32335</guid>
      <dc:creator>segmentation_fault</dc:creator>
      <dc:date>2021-11-23T16:51:46Z</dc:date>
    </item>
  </channel>
</rss>

