Intel® oneAPI Math Kernel Library

Guidance on integrating cluster_sparse_solver into my application

Ferris_H_
Beginner

I am trying to integrate cluster_sparse_solver into my application; however, I am confused by this note in the documentation:

Note

Most of the input parameters (except for the pt, phase, and comm parameters and, for the distributed format, the a, ia, and ja arrays) must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm.

I interpret this as saying that if rank = 0, then all input parameters need to be defined, but if rank > 0, you can pass NULL values. I tried doing that, as shown in the pseudocode below, but I keep getting "ERROR during symbolic factorization: -1" when I run with np > 1. With np = 1 it runs correctly, but only on one host.

int main()
{
    mpi_stat = MPI_Init( &argc, &argv );
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm = MPI_Comm_c2f( MPI_COMM_WORLD );

    if ( rank < 1 ) {
        /* Master process: read and assemble the matrix, then call the solver. */
        read_input_file();
        assemble_i_ia_ja();
        call_cluster_sparse_solver();
    }
    else {
        /* Non-master processes: pass NULL for the data arrays. */
        int i;
        long long pt[64];
        for ( i = 0; i < 64; i++ ) { pt[i] = 0; }

        double *aupardiso = NULL;
        ITG *icolpardiso = NULL, *pointers = NULL, iparm[64];
        ITG maxfct = 1, mnum = 1, phase = 12, nrhs = 1, *perm = NULL, mtype,
            msglvl = 0, error = 0, *irowpardiso = NULL, neq;

        double *b = NULL, *x = NULL;

        FORTRAN ( cluster_sparse_solver, ( pt, &maxfct, &mnum, &mtype, &phase,
                  &neq, aupardiso, pointers, icolpardiso, perm, &nrhs, iparm, &msglvl, b, x, &comm, &error ));
    }
}

The function call_cluster_sparse_solver contains this code:

    int     mpi_stat = 0;
    int     comm, rank;
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

    FORTRAN ( cluster_sparse_solver, ( pt, &maxfct, &mnum, &mtype, &phase,
              neq, aupardiso, pointers, icolpardiso, perm, &nrhs, iparm, &msglvl, b, x, &comm, &error ));

 

Alexander_K_Intel2

Hi,

You are correct: if you don't use the distributed format, then the pointers can be passed as NULL. Can you provide the iparm data that you use on the master process?

Thanks,

Alex

Ferris_H_
Beginner

On the master process (rank = 0), iparm is set only as follows:

ITG iparm[64];
iparm[0] = 0;

then cluster_sparse_solver is called as follows:

    FORTRAN ( cluster_sparse_solver, ( pt, &maxfct, &mnum, &mtype, &phase,
                neq, aupardiso , pointers , icolpardiso, perm, &nrhs, iparm, &msglvl, b, x, &comm, &error ));

Is this correct, or should iparm be different? I can try to create a simple example that reproduces this issue.
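
For comparison, the cl_solver examples shipped with MKL initialize iparm explicitly rather than relying on defaults. The following is only a sketch of that pattern (the exact values differ between the shipped examples):

/* Typical iparm setup, adapted from the MKL cl_solver examples
   (a sketch; individual examples differ). */
MKL_INT i, iparm[64];
for ( i = 0; i < 64; i++ ) iparm[i] = 0;
iparm[0]  = 1;   /* Do not use solver default values */
iparm[1]  = 2;   /* Fill-in reducing ordering: nested dissection */
iparm[7]  = 2;   /* Max number of iterative refinement steps */
iparm[9]  = 13;  /* Pivot perturbation threshold 1E-13 */
iparm[34] = 1;   /* Zero-based indexing of ia and ja */

That said, iparm[0] = 0 as I have it should make the solver fill the rest of iparm with its defaults, so the iparm setup alone may not be the problem.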

 

 

 

Alexander_K_Intel2

Hi,

Yes, can you provide the example?

Thanks,

Alex 

Ferris_H_
Beginner

I made some minor modifications to the Intel-provided example, cl_solver_sym_sp_0_based_c: I added an if statement at lines 50 and 209 of that file. It runs fine with np = 1, but with np = 2 it gives errors.

Please see the attached file.

Ferris_H_
Beginner

Just following up: has anyone had a chance to run my example code that reproduces the issue I am facing?

Ferris_H_
Beginner

Unfortunately, I am still having difficulty working out how to call cluster_sparse_solver when rank > 0. Can someone share a simple example so I can compare it with my code? The documentation states:

Most of the input parameters (except for the pt, phase, and comm parameters and, for the distributed format, the a, ia, and ja arrays) must be set on the master MPI process only, and ignored on other processes.

How exactly do all the input parameters need to be initialized on the non-master processes? Here is what I tried, which keeps giving a segmentation fault:

 

if ( rank > 0 ) {

    comm = MPI_Comm_c2f( MPI_COMM_WORLD );

    /* Matrix data. */
    MKL_INT n = NULL;
    MKL_INT *ia = NULL;
    MKL_INT *ja = NULL;
    float *a = NULL;
    MKL_INT mtype = NULL;
    MKL_INT nrhs = NULL;
    float *b = NULL, *x = NULL, *bs = NULL, res = 0.0, res0 = 0.0; /* RHS and solution vectors. */
    void *pt[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT *iparm = NULL;

    MKL_INT maxfct = NULL, mnum = NULL, phase = NULL, msglvl = NULL, error = NULL;

    /* Auxiliary variables. */
    float ddum = 0.0;    /* Float dummy.   */
    MKL_INT idum = NULL; /* Integer dummy. */

    phase = 11;
    printf ("got here 11");
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
            &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum,
            &comm, &error );

    phase = 22;
    printf ("got here 22");
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
            &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum,
            &comm, &error );

    phase = 33;
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
            &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum,
            &comm, &error );

    phase = -1;
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
            &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum,
            &comm, &error );
}

 

 

 

Ferris_H_
Beginner

I finally figured it out after isolating and testing each input individually. These two have to be set to 1, not NULL:

MKL_INT maxfct = 1, mnum = 1;   /* maxfct = NULL, mnum = NULL causes the segfault */

I would suggest updating the documentation, as this statement is not accurate:

Most of the input parameters (except for the pt, phase, and comm parameters and, for the distributed format, the a, ia, and ja arrays) must be set on the master MPI process only, and ignored on other processes. Other MPI processes get all required data from the master MPI process using the MPI communicator, comm.
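
For anyone else who hits this, a minimal non-master (rank > 0) call consistent with the above would look something like the sketch below. The variable names are mine; everything except pt, phase, comm, maxfct, and mnum is a NULL pointer or a dummy:

/* Sketch of one non-master call. phase must mirror whatever the master
   rank calls (11, 22, 33, -1, ...), and pt must keep its contents
   across the phase calls. */
void *pt[64] = { 0 };
MKL_INT maxfct = 1, mnum = 1;   /* must be 1, not NULL, on every rank */
MKL_INT phase = 11;             /* same phase as the master rank */
MKL_INT n = 0, mtype = 0, nrhs = 0, msglvl = 0, error = 0;
MKL_INT idum = 0;               /* integer dummy for perm */
float ddum = 0.0;               /* float dummy for b and x */
int comm = MPI_Comm_c2f( MPI_COMM_WORLD );

cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
        &n, NULL, NULL, NULL, &idum, &nrhs, NULL, &msglvl,
        &ddum, &ddum, &comm, &error );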

segmentation_fault
New Contributor I

It took me many weeks to figure out, but here is what worked for me for the ranks > 0.

Hopefully Intel can update their provided examples to include something like this. The examples they ship are great for tiny matrices, but modifying them for a real application is quite a confusing process. An example along the lines of the dummy_cluster_sparse_solver below would save people many weeks of work.

////////////////////////////////////

void dummy_cluster_sparse_solver() {

    int mpi_stat = 0;
    int comm, rank;

    mpi_stat = MPI_Init( NULL, NULL );  /* was MPI_Init( '', 1 ), which does not compile */
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm = MPI_Comm_c2f( MPI_COMM_WORLD );

    /* The master rank drives the real solver elsewhere; only ranks > 0 stay here. */
    if ( rank == 0 ) { return; }

    /* Matrix data: ignored on non-master ranks, so dummies suffice. */
    MKL_INT n = 0;
    MKL_INT *ia = NULL;
    MKL_INT *ja = NULL;
    MKL_INT mtype = 0;
    MKL_INT nrhs = 1;
    double *a = NULL;

    /* One handle per matrix the master factorizes; the documented type of
       the handle is void *pt[64], and it must persist across phases. */
    long int *pt;
    long int pt_real_sym_indefinite[64] = { 0 };
    long int pt_real_symmetric[64] = { 0 };
    long int pt_real_non_symmetric[64] = { 0 };

    pt = pt_real_symmetric;

    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct = 1, mnum = 1, msglvl = 0, error = 0; /* maxfct and mnum must be 1 */
    double ddum;   /* double dummy   */
    MKL_INT idum;  /* integer dummy. */
    MKL_INT phase;

    /* The master broadcasts { matrix id, phase } before each solver call;
       the matrix ids (-2, 1, 11) mirror the mtype codes, and phase == 1
       is the sentinel that releases the workers from the loop. */
    MKL_INT matrix[2] = { 0 };

    MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD ); /* assumes ILP64 MKL_INT */
    phase = matrix[1];

    while ( phase != 1 ) {

        printf ( "\nEntering phase %lli in while loop for matrix %lli\n",
                 (long long)phase, (long long)matrix[0] );

        FORTRAN ( cluster_sparse_solver , ( pt, &maxfct, &mnum, &mtype,
                  &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl,
                  &ddum, &ddum, &comm, &error ));

        /* Save this matrix's handle... */
        if ( matrix[0] == -2 ) memcpy( pt_real_sym_indefinite, pt, sizeof(pt_real_sym_indefinite));
        if ( matrix[0] ==  1 ) memcpy( pt_real_symmetric, pt, sizeof(pt_real_symmetric));
        if ( matrix[0] == 11 ) memcpy( pt_real_non_symmetric, pt, sizeof(pt_real_non_symmetric));

        /* ...then wait for the next { matrix id, phase } from the master. */
        MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );
        phase = matrix[1];

        if ( matrix[0] == -2 ) pt = pt_real_sym_indefinite;
        if ( matrix[0] ==  1 ) pt = pt_real_symmetric;
        if ( matrix[0] == 11 ) pt = pt_real_non_symmetric;

    } /* end while */

    mpi_stat = MPI_Finalize();
    exit(0);

} /* end function */
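
For completeness, the master-rank side of this protocol has to broadcast the { matrix id, phase } pair before each of its own solver calls so the workers enter the matching call. A hypothetical sketch of that driver (master_call is my own name; pt, a, ia, ja, b, x, and iparm are the master's real solver data, set up as in the Intel examples):

/* Hypothetical master-side (rank == 0) driver matching the worker loop above. */
void master_call( void **pt, MKL_INT mtype, MKL_INT matrix_id, MKL_INT phase,
                  MKL_INT n, double *a, MKL_INT *ia, MKL_INT *ja,
                  MKL_INT *iparm, double *b, double *x, int comm )
{
    MKL_INT maxfct = 1, mnum = 1, nrhs = 1, msglvl = 0, error = 0, idum = 0;
    MKL_INT matrix[2];

    /* Tell the workers which handle to use and which phase to enter... */
    matrix[0] = matrix_id;   /* -2, 1 or 11, as in the worker loop */
    matrix[1] = phase;
    MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );

    /* ...then make the matching call on the master with the real data. */
    FORTRAN ( cluster_sparse_solver , ( pt, &maxfct, &mnum, &mtype, &phase,
              &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl,
              b, x, &comm, &error ));
}

When everything is done, the master broadcasts matrix[1] = 1 once more so the workers drop out of their while loop and call MPI_Finalize.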

 
