Intel® oneAPI Math Kernel Library

ERROR: GLOBAL:COLLECTIVE:OPERATION_MISMATCH: error (cluster_sparse_solver)

segmentation_fault
New Contributor I

I've successfully run cluster_sparse_solver on many compute nodes and get good speedup. My application usually deals only with symmetric matrices (mtype=-2). However, sometimes it has to handle both symmetric and unsymmetric matrices (mtype=11) in the same run.

 

I am receiving the error below when hitting phase 33 for the second time on my unsymmetric matrix. It's a bit confusing, so here is the pseudo-code first:

 

Unsymmetric matrix

Phase 12 - Success

Phase 33 - Success

Symmetric matrix

Phase 12 - Success

Phase 33 - Success

Phase -1 - Success

Back to Unsymmetric matrix

Phase 33 -> Error below:

[0] ERROR: GLOBAL:COLLECTIVE:OPERATION_MISMATCH: error
[0] ERROR:    Different processes entered different collective operations on the same communicator.
[0] ERROR:    Collective call by local rank [0] (same as global rank):
[0] ERROR:       MPI_Barrier(comm=MPI_COMM_WORLD)
[0] ERROR:       ilp64_Cztrbs2d (/opt/intel/oneapi/mkl/2021.4.0/lib/intel64/libmkl_blacs_intelmpi_ilp64.so.1)
[0] ERROR:       mkl_pds_lp64_sp_pds_slv_fwd_sym_bk_c_single_cmplx (/opt/intel/oneapi/mkl/2021.4.0/lib/intel64/libmkl_core.so.1)
[0] ERROR:       mkl_pds_lp64_sp_pds_slv_bwd_unsym_c_single_vbsr_cmplx (/opt/intel/oneapi/mkl/2021.4.0/lib/intel64/libmkl_core.so.1)
[0] ERROR:       LAPACKE_dgbrfs_work (/opt/intel/oneapi/mkl/2021.4.0/lib/intel64/libmkl_intel_ilp64.so.1)
[0] ERROR:       pardiso_solve_as (/home/feacluster/cluster/CalculiX/ccx_2.18/src/pardiso_as.c:239)
[0] ERROR:       radflowload (/home/feacluster/cluster/CalculiX/ccx_2.18/src/radflowload.c:633)
[0] ERROR:       nonlingeo (/home/feacluster/cluster/CalculiX/ccx_2.18/src/nonlingeo.c:2123)
[0] ERROR:       main (/home/feacluster/cluster/CalculiX/ccx_2.18/src/ccx_2.18.c:1240)
[0] ERROR:       __libc_start_main (/usr/lib64/libc-2.28.so)
[0] ERROR:       _start (/home/feacluster/cluster/CalculiX/ccx_2.18/src/ccx_2.18_MPI)
[0] ERROR:    Collective call by local rank [1] (same as global rank):
[0] ERROR:       MPI_Bcast(*buffer=0x7fffdde77398, count=1, datatype=MPI_LONG_LONG, root=0, comm=MPI_COMM_WORLD)
[0] ERROR:       mpi_calculix (/home/feacluster/cluster/CalculiX/ccx_2.18/src/ccx_2.18.c:1933)
[0] ERROR:       main (/home/feacluster/cluster/CalculiX/ccx_2.18/src/ccx_2.18.c:49)
[0] ERROR:       __libc_start_main (/usr/lib64/libc-2.28.so)
[0] ERROR:       _start (/home/feacluster/cluster/CalculiX/ccx_2.18/src/ccx_2.18_MPI)
[0] INFO: 1 error, limit CHECK-MAX-ERRORS reached => aborting

On ranks > 0 I have the following while loop for the "dummy" cluster_sparse_solver, which receives the phases from rank 0:

while ( phase != 1 ) {

    printf ( "Entering phase %lli while loop\n", (long long int)phase );

    FORTRAN ( cluster_sparse_solver, ( pt, &maxfct, &mnum, &mtype,
        &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl,
        &ddum, &ddum, &comm, &error ));

    MPI_Bcast(&phase, 1, MPI_LONG_LONG, 0, MPI_COMM_WORLD);

} // end while

My hunch is that calling phase -1 on the symmetric matrix somehow destroys the unsymmetric matrix. I can work on creating an example if nothing comes to mind. It could very well be something wrong with the logic of my while loop on the ranks > 0. However, that while loop works perfectly as long as the same type of matrix is being used/re-used.

 

Edit: I have also tried duplicating MPI_COMM_WORLD, thinking it might be some global name mismatch. After doing this below, the program just hangs at the same phase 33, with a similar error from -check-mpi:

//MPI_Comm dup_comm_world;
//MPI_Comm_dup( MPI_COMM_WORLD, &dup_comm_world );
//MPI_Bcast(&phase, 1, MPI_LONG_LONG, 0 , dup_comm_world);
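
If I went the duplicate-communicator route all the way, I believe both the phase broadcast and the comm handle passed to cluster_sparse_solver would have to come from the duplicate. A rough, untested sketch of that idea (make_solver_comm is just an illustrative name, not something from my actual code):

#include <mpi.h>

/* Untested sketch: duplicate MPI_COMM_WORLD once and return the Fortran
   handle that cluster_sparse_solver takes as its comm argument, so the
   application-level Bcast of "phase" and the solver's internal collectives
   all run on the same duplicated communicator. */
static int make_solver_comm( MPI_Comm *dup_comm_world )
{
    MPI_Comm_dup( MPI_COMM_WORLD, dup_comm_world );
    return (int) MPI_Comm_c2f( *dup_comm_world );
}

The returned handle would replace the existing comm variable in every cluster_sparse_solver call, and MPI_Bcast(&phase, ...) would then use *dup_comm_world instead of MPI_COMM_WORLD.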

VidyalathaB_Intel
Moderator

Hi,


Thanks for reaching out to us.


>> my application has to also deal with both symmetric and unsymmetric matrices


Could you please provide us with a sample reproducer (and the steps to reproduce, if any) so that we can get more insight into your issue?


>> Phase -1 on the symmetric matrix somehow destroys the unsymmetric matrix


As per the documentation, when the Phase value is -1 it releases all internal memory for all matrices.


Regards,

Vidya.


segmentation_fault
New Contributor I

I will work on a simple example. I would expect phase -1 to release only the memory for the specified matrix, not all matrices in the application. Otherwise, how does one work with different matrices concurrently in the same application?

Kirill_V_Intel
Employee

Hi!

When the documentation says "all internal memory for all matrices" it means "all" only in the context of a particular handle. If you have separate handles ("pt" in the docs, the first argument of the cluster_sparse_solver() call) for different matrices, it should not be an issue.
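
Schematically, something like this (not your code, just to illustrate what I mean by a per-handle release; all other arguments are assumed to be set up as in the shipped examples):

#include "mkl.h"
#include "mkl_cluster_sparse_solver.h"

/* Illustrative only: each matrix keeps its own pt handle, so calling
   phase = -1 with one handle releases that handle's internal data and
   leaves any other handle untouched. */
static void release_handle( void *pt[64], MKL_INT *maxfct, MKL_INT *mnum,
                            MKL_INT *mtype, MKL_INT *n, MKL_INT *ia, MKL_INT *ja,
                            MKL_INT *nrhs, MKL_INT iparm[64], MKL_INT *msglvl,
                            int *comm, MKL_INT *error )
{
    MKL_INT phase = -1;   /* release internal memory for this handle only */
    MKL_INT idum  = 0;
    double  ddum  = 0.0;

    cluster_sparse_solver( pt, maxfct, mnum, mtype, &phase, n, &ddum, ia, ja,
                           &idum, nrhs, iparm, msglvl, &ddum, &ddum, comm, error );
}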

I'd like to see your small reproducer.

If there is a real issue, I'd speculate that maybe there is some mismatch in which MPI processes enter which MPI collective, either in your example code or on the MKL side. Of course, we need to fix it if it is on our side.

Please let us know if you can reproduce it with some example.

Best,
Kirill

segmentation_fault
New Contributor I

Thanks, I was able to reproduce the issue by modifying the cl_solver_sym_sp_0_based_c.c example. The good news is that it is not related to mixing symmetric and unsymmetric matrices: I was able to get it to error out using just the 8x8 symmetric matrix from the example. First, the pseudo-code:

 

Matrix 1

Phase 12 - Success

Matrix 2  ( same as matrix 1 , but with different pt ( pt_2 ) )

Phase 12 - Success

Matrix 1

Phase 33 - Success

Phase -1 - Success

Matrix 2

Phase 33 -> Error ( segmentation fault )

 

The following example code reproduces the issue (or you can copy the code from here). The workaround is to uncomment lines 166-171, which effectively runs phase 12 again before calling phase 33 on matrix 2. Run with:

mpirun -check-mpi -np 2 ./a.out

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "mpi.h"
#include "mkl.h"
#include "mkl_cluster_sparse_solver.h"

// mpiicc -g -DMKL_ILP64 -L${MKLROOT}/lib/intel64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_ilp64 -liomp5 -lpthread -lm -ldl  cluster_solver_calculix_simple.c

// mpirun -check-mpi -np 2 ./a.out

void  dummy_cluster_sparse_solver();

int main (void)
{

/* -------------------------------------------------------------------- */
/* .. Init MPI.                                                         */
/* -------------------------------------------------------------------- */

    /* Auxiliary variables. */
    int     mpi_stat = 0;
    int     argc = 0;
    int     comm, rank;
    char**  argv;

    mpi_stat = MPI_Init( &argc, &argv );
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

    if ( rank > 0 ) { dummy_cluster_sparse_solver();  }

    /* Matrix data. */

    MKL_INT n = 8;
    MKL_INT ia[9] = { 0, 4, 7, 9, 11, 14, 16, 17, 18 };
    MKL_INT ja[18] = { 0,   2,       5, 6,      /* index of non-zeros in 0 row*/
                         1, 2,    4,            /* index of non-zeros in 1 row*/
                            2,             7,   /* index of non-zeros in 2 row*/
                               3,       6,      /* index of non-zeros in 3 row*/
                                  4, 5, 6,      /* index of non-zeros in 4 row*/
                                     5,    7,   /* index of non-zeros in 5 row*/
                                        6,      /* index of non-zeros in 6 row*/
                                           7    /* index of non-zeros in 7 row*/
    };
   float a[18] = { 7.0, /*0*/ 1.0, /*0*/ /*0*/  2.0,  7.0, /*0*/
                         -4.0, 8.0, /*0*/ 2.0,  /*0*/ /*0*/ /*0*/
                               1.0, /*0*/ /*0*/ /*0*/ /*0*/ 5.0,
                                    7.0,  /*0*/ /*0*/ 9.0,  /*0*/
                                          5.0,  1.0,  5.0,  /*0*/
                                                -1.0, /*0*/ 5.0,
                                                      11.0, /*0*/
                                                            5.0
    };

    MKL_INT mtype = -2;  /* set matrix type to "real symmetric indefinite matrix" */
    MKL_INT nrhs  =  1;  /* number of right hand sides. */
    float b[8], x[8], bs[8], res, res0; /* RHS and solution vectors. */

    /* Internal solver memory pointer pt
     *       32-bit:      int pt[64] or void *pt[64];
     *       64-bit: long int pt[64] or void *pt[64]; */
    void *pt[64] = { 0 };
    void *pt_2[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, phase, msglvl, error;

    /* Auxiliary variables. */
    float   ddum; /* float dummy   */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT i, j;

/* -------------------------------------------------------------------- */
/* .. Setup Cluster Sparse Solver control parameters.                                 */
/* -------------------------------------------------------------------- */
    iparm[ 0] =  1; /* Solver default parameters overridden with those provided in iparm */
    iparm[ 1] =  2; /* Use METIS for fill-in reordering */
    iparm[ 5] =  0; /* Write solution into x */
    iparm[ 7] =  2; /* Max number of iterative refinement steps */
    iparm[ 9] = 13; /* Perturb the pivot elements with 1E-13 */
    iparm[10] =  0; /* Don't use nonsymmetric permutation and scaling MPS */
    iparm[12] =  1; /* Switch on Maximum Weighted Matching algorithm (default for non-symmetric) */
    iparm[17] = -1; /* Output: Number of nonzeros in the factor LU */
    iparm[18] = -1; /* Output: Mflops for LU factorization */
    iparm[27] =  1; /* Single precision mode of Cluster Sparse Solver */
    iparm[34] =  1; /* Cluster Sparse Solver use C-style indexing for ia and ja arrays */
    iparm[39] =  0; /* Input: matrix/rhs/solution stored on master */
    maxfct = 1; /* Maximum number of numerical factorizations. */
    mnum   = 1; /* Which factorization to use. */
    msglvl = 1; /* Print statistical information in file */
    error  = 0; /* Initialize error flag */

/* -------------------------------------------------------------------- */
/* .. Reordering and Symbolic Factorization. This step also allocates   */
/* all memory that is necessary for the factorization.                  */
/* -------------------------------------------------------------------- */
    phase = 12;

    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error );

    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt_2, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error );

    if ( error != 0 )
    {
        printf ("\nERROR during symbolic factorization: %lli", (long long int)error);
        mpi_stat = MPI_Finalize();
        return 1;
    }
    printf ("\nReordering completed ... ");

/* -------------------------------------------------------------------- */
/* .. Back substitution and iterative refinement.                       */
/* -------------------------------------------------------------------- */

   /* Set right hand side to one. */
    for ( i = 0; i < n; i++ )
    {
        b[i] = 1.0;
                x[i] = 0.0;
    }
    printf ("\nSolving system...");

    phase = 33;
    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, b, x, &comm, &error );
    if ( error != 0 )
    {
        printf ("\nERROR during solution: %lli", (long long int)error);
        mpi_stat = MPI_Finalize();
        return 4;
    }
    printf ("\nThe solution of the system is: ");
        for ( j = 0; j < n ; j++ )
        {
            printf ( "\n x [%lli] = % f", (long long int)j, x[j] );
        }

/* -------------------------------------------------------------------- */
/* .. Termination and release of memory. */
/* -------------------------------------------------------------------- */

    phase = -1; /* Release internal memory. */
    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);

    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error );
    if ( error != 0 )
    {
        printf ("\nERROR during release memory: %lli", (long long int)error);
        goto final;
    }

/* -------------------------------------------------------------------- */
/* .. Repeat phase 33 for second matrix */
/* -------------------------------------------------------------------- */


/*    phase = 12;

    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt_2, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error );
*/

    phase=33;

    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt_2, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, b, x, &comm, &error );

    printf ("\nThe solution of the system is: ");
        for ( j = 0; j < n ; j++ )
        {
            printf ( "\n x [%lli] = % f", (long long int)j, x[j] );
        }

    phase = 1; /* Sentinel: tell ranks > 0 to exit their while loop */
    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);

/* -------------------------------------------------------------------- */

final:
        if ( error != 0 )
        {
            printf("\n TEST FAILED\n");
        } else {
            printf("\n TEST PASSED\n");
        }
    mpi_stat = MPI_Finalize();
    return error;
}

////////////////////////////////////

void dummy_cluster_sparse_solver() {

    int     mpi_stat = 0;
    int     argc = 0;
    int     comm, rank;
    char**  argv;

    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

   /* Matrix data. */
    MKL_INT n;
    MKL_INT *ia;
    MKL_INT *ja;
    MKL_INT mtype;
    MKL_INT nrhs;

    double *a, *b, *x;
    void *pt[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, msglvl, error;
    double ddum; /* double dummy */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT phase;

    MPI_Recv(&phase, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );

    while ( phase != 1 ) {

        printf ( "\nEntering phase %lli while loop\n", (long long int)phase );

        cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error );

        MPI_Recv(&phase, 1, MPI_LONG_LONG, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );

    } // end while

mpi_stat = MPI_Finalize();
exit(0);

} // end function

VidyalathaB_Intel
Moderator

Hi,


Thanks for sharing the reproducer.

We are looking into this issue. We will get back to you soon.


Regards,

Vidya.


Gennady_F_Intel
Moderator

I slightly modified your code (see test_cpardiso_2.cpp attached) by removing the dummy_cluster_sparse_solver call and linked the code against the LP64 API as follows:

mpiicc -I${MKL_INCL} test_cpardiso_2.cpp -o 2.x -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_intel_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -liomp5 -lpthread -lm -ldl


Launching 2 MPI processes (mpirun -n 2 ./2.x), I see the example passes.

The log of this run is attached to this thread as well; see mpirun_n2_mkl2021u4.log.


Probably, we have some issue when linking against the ILP64 libraries in this case. We will investigate it later.


We noticed you use working arrays (matrices) in the float datatype. We recommend using double instead.

--Gennady





segmentation_fault
New Contributor I

Thanks, but I think you may have forgotten to attach the test_cpardiso_2 and the mpirun_n2_mkl2021u4.log files.

segmentation_fault
New Contributor I

Thanks, I was able to run your modified example and it worked correctly. I also compiled it against the ILP64 libraries and it worked as well, so my issue was not related to LP64 vs. ILP64.

I did more investigation and found the problem was in my dummy_cluster_sparse_solver() function. If the application works with different matrix types (mtype), then the internal pointer (pt) needs to be "remembered" on all the ranks. It cannot be reset each time a new phase is called. I am sharing my code below, which should help anyone facing the same issue:

Perhaps the documentation can be updated to make this clearer:

https://www.intel.com/content/www/us/en/develop/documentation/onemkl-developer-reference-fortran/top/sparse-solver-routines/parallel-direct-sp-solver-for-clusters-iface/cluster-sparse-solver.html

////////////////////////////////////

void dummy_cluster_sparse_solver() {

    int     mpi_stat = 0;
    int     comm, rank;

    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

   /* Matrix data. */
    MKL_INT n;
    MKL_INT *ia;
    MKL_INT *ja;
    MKL_INT mtype;
    MKL_INT nrhs;
    double *a, *b, *x;

    long int *pt;
    long int pt1[64] = { 0 };
    long int pt2[64] = { 0 };

    pt = pt1;

    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, msglvl, error;
    double ddum; /* double dummy */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT phase;
    MKL_INT matrix[2] = { 0 };

    MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );
    phase = matrix[1];

    while(( phase != 1 )){

        printf ( "\nEntering  phase %i in while loop for matrix %i\n", phase, matrix[0] );

        cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum,&ddum, &comm, &error );

        MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD  );
        phase = matrix[1];

        if ( matrix[0] == 0 ) pt = pt1;
           else pt = pt2;

} // end while

mpi_stat = MPI_Finalize();
exit(0);

} // end function
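
For reference, the matching rank-0 side broadcasts the matrix index together with the phase before every solver call. A simplified, hypothetical sketch of that helper (not my exact code; MPI_LONG_LONG matches MKL_INT here because everything is built with -DMKL_ILP64):

#include <mpi.h>
#include "mkl.h"

/* Hypothetical rank-0 helper: tell the worker ranks which pt handle to use
   (matrix[0] = 0 or 1) and which phase to run (matrix[1] = 12, 33, -1, or
   the sentinel 1 that makes the worker while loop exit). */
static void broadcast_matrix_and_phase( MKL_INT matrix_id, MKL_INT phase )
{
    MKL_INT matrix[2];
    matrix[0] = matrix_id;
    matrix[1] = phase;
    MPI_Bcast( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );
}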

segmentation_fault
New Contributor I

I am attaching a more robust version of dummy_cluster_sparse_solver() which should be able to handle up to three matrix types concurrently in the same run:

 

/////////////////////////////////////

/* Note: this version uses memcpy/memset, so <string.h> is needed in addition
   to the headers from the original example. */
void dummy_cluster_sparse_solver() {

    int     mpi_stat = 0;
    int     comm, rank;

    mpi_stat = MPI_Init( NULL, NULL );
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

    if ( rank < 1 ) { return; }

   /* Matrix data. */
    MKL_INT n;
    MKL_INT *ia;
    MKL_INT *ja;
    MKL_INT mtype, new_mtype;
    MKL_INT nrhs;

    double *a, *b, *x;

//    long int pt[64] = { 0 };
    long int pt_real_sym_indefinite[64] = { 0 };
    long int pt_real_symmetric[64] = { 0 };
    long int pt_real_non_symmetric[64] = { 0 };

    long int *pt;
    pt = pt_real_symmetric;

    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, msglvl, error;
    double ddum;
    MKL_INT idum;
    MKL_INT phase;
    MKL_INT matrix[2] = { 0 };

    MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD );
    phase = matrix[1];
    mtype = matrix[0];

    while(( phase != 1 )){

        printf ( "\nEntering phase %i in while loop for matrix %i\n", phase, matrix[0] );

        cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype,
        &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl,
        &ddum,&ddum, &comm, &error );

        if ( mtype == -2 ) memcpy( pt_real_sym_indefinite, pt, sizeof(pt_real_sym_indefinite));
        else if ( mtype == 1 ) memcpy( pt_real_symmetric, pt, sizeof(pt_real_symmetric));
        else if ( mtype == 11 ) memcpy( pt_real_non_symmetric, pt, sizeof(pt_real_non_symmetric));
        else { printf ( "Invalid matrix type %lli found\n", (long long int)mtype ); exit(0); }

        MPI_Bcast ( matrix, 2, MPI_LONG_LONG, 0, MPI_COMM_WORLD  );
        phase = matrix[1];  new_mtype =  matrix[0];

        if ( new_mtype != mtype && phase == 12  ) {
            if ( new_mtype == -2 ) memset ( pt_real_sym_indefinite, 0, sizeof(pt_real_sym_indefinite) );
            if ( new_mtype == 1 ) memset ( pt_real_symmetric, 0, sizeof(pt_real_symmetric) );
            if ( new_mtype == 11 ) memset ( pt_real_non_symmetric, 0, sizeof(pt_real_non_symmetric) );
        } // end if

        mtype = new_mtype;

        if ( mtype == -2 ) pt = pt_real_sym_indefinite;
        if ( mtype == 1 ) pt = pt_real_symmetric ;
        if ( mtype == 11 ) pt = pt_real_non_symmetric;

    } // end while

mpi_stat = MPI_Finalize();
exit(0);

} // end function
Gennady_F_Intel
Moderator

Yes, you are right. The files are attached now. The file test_cpardiso_2.cpp has been renamed to test_cpardiso_2.c, as there is some problem with attaching files with that extension.

Gennady_F_Intel
Moderator

This issue has been resolved and we will no longer respond to this thread. If you require additional assistance from Intel, please start a new thread. Any further interaction in this thread will be considered community only. 


