Intel® oneAPI Math Kernel Library

cluster_sparse_solver() gives different results in MKL 2021 and MKL 2018?

miao0109
Beginner
When calling cluster_sparse_solver() to solve a complex sparse linear system, I found that the results computed by MKL 2018 and MKL 2021 are different.
My link line is:

-lmkl_scalapack_lp64 -Wl,--no-as-needed -lmkl_cdft_core -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_lp64 -liomp5 -lpthread -lm -ldl
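For reference, a full build command along these lines should work (a sketch only: it assumes the Intel MPI compiler wrapper mpiicpc and that the Eigen headers are found under $EIGEN_ROOT):

mpiicpc -std=c++11 -I$EIGEN_ROOT -I$MKLROOT/include mpi_test.cpp -o mpi_test \
    -L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -Wl,--no-as-needed -lmkl_cdft_core \
    -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_lp64 \
    -liomp5 -lpthread -lm -ldl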
My run command is:

mpiexec -np 2 ./mpi_test

Attached are my code and data.

A0.txt and b0.txt are the data for rank 0; A1.txt and b1.txt are the data for rank 1.

x2018*.txt are the results computed with MKL 2018, and x2021*.txt are the results computed with MKL 2021.

My code:

#include <cstdio>   // std::fopen, std::fscanf, std::fclose
#include <cstdlib>  // std::exit
#include <iostream>
#include <string>
#include <fstream>
#include <complex>
#include <vector>

#include "Eigen/Sparse"
#include "Eigen/SparseCore"

#include "mpi.h"
#include "mkl_pardiso.h"
#include "mkl_cluster_sparse_solver.h"

int main() {
    MPI_Init(nullptr, nullptr);

    std::string file_A("A");
    std::string file_b("b");
    std::string file_x("x2021");

    using Complex = std::complex<double>;

    std::vector<Complex> b;

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    file_A = file_A+std::to_string(rank)+".txt";
    file_b = file_b+std::to_string(rank)+".txt";
    file_x = file_x+std::to_string(rank)+".txt";

    std::vector<Eigen::Triplet<Complex>> tripletList;
    std::FILE *file = std::fopen(file_A.c_str(), "r");
    if (!file)
    {
        printf("can't open file %s\n", file_A.c_str());
        exit(1);
    }
    int glbRows, glbCols, lclRows, rowstart, rowEnd, nonzero;
    std::fscanf(file, " %d, %d , %d , %d , %d , %d\n", &glbRows, &glbCols, &lclRows, &rowstart, &rowEnd, &nonzero);

    Eigen::SparseMatrix<Complex, Eigen::RowMajor> matrix;
    matrix.resize(lclRows, glbCols);
    tripletList.reserve(nonzero);
    int row, col;
    double real, imag;
    // Check the match count (4 fields) rather than EOF, so a malformed
    // line cannot spin the loop forever.
    while (std::fscanf(file, "[ %d ] , [ %d ] = (%lf, %lf)\n", &row, &col, &real, &imag) == 4)
    {
        tripletList.push_back(Eigen::Triplet<Complex>(row, col, Complex(real, imag)));
    }
    matrix.setFromTriplets(tripletList.begin(), tripletList.end());
    matrix.makeCompressed();
    std::fclose(file);

    file = std::fopen(file_b.c_str(), "r");
    if (!file)
    {
        printf("can't open file %s\n", file_b.c_str());
        exit(1);
    }
    std::fscanf(file, " %d , %d , %d , %d\n", &glbRows, &lclRows, &rowstart, &rowEnd);
    b.reserve(lclRows);


    while (std::fscanf(file, "[%d] (%lf, %lf)\n", &row, &real, &imag) == 3)
    {
        b.push_back(Complex(real, imag));
    }
    std::fclose(file);

    void *pt_[64] = {nullptr};  // Internal solver memory pointer
    MKL_INT iparm_[64] = {0};   // PARDISO control parameters
    int mtype = 6;              // Complex and symmetric matrix
    pardisoinit(pt_, &mtype, iparm_);

    iparm_[ 1] =  3;        // Parallel (OpenMP) nested dissection reordering
    iparm_[ 7] =  3;        // Max number of iterative refinement steps
    iparm_[ 9] =  8;        // Perturb the pivot elements with 1E-8
    iparm_[34] =  1;        // C-style (zero-based) indexing for ia and ja arrays
    iparm_[39] =  2;        // Matrix, RHS, and solution are distributed between MPI processes
    iparm_[40] = rowstart;  // Index of the first local row
    iparm_[41] = rowEnd;    // Index of the last local row

    Eigen::VectorXcd x(lclRows);

    int one = 1;
    int phase = 13;  // Analysis, numerical factorization, solve
    int error = 0;
    int comm = MPI_Comm_c2f(MPI_COMM_WORLD);  // cluster_sparse_solver expects a Fortran MPI communicator handle
    cluster_sparse_solver(pt_, &one, &one, &mtype, &phase, &glbRows, matrix.valuePtr(), matrix.outerIndexPtr(), matrix.innerIndexPtr(),
                          nullptr, &one, iparm_, &one, b.data(), x.data(), &comm, &error);

    if (rank == 0) {
        std::cout << "# of iterative refinement steps: " << iparm_[6] << '\n';
        std::cout << "# of perturbed pivots: " << iparm_[13] << '\n';
        std::cout << "peak memory on symbolic factorization: " << iparm_[14] << '\n';
        std::cout << "permanent memory on symbolic factorization: " << iparm_[15] << '\n';
        std::cout << "size of memory on numerical factorization and solution: " << iparm_[16] << '\n';
        std::cout << "# of non-zeros elements in the factors: " << iparm_[17] << '\n';
        std::cout << "# of floating point operations: " << iparm_[18] << '\n';
        std::cout << "error = " << error << std::endl;
    }

    std::ofstream xout(file_x);
    xout << x;
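
    // Release the solver's internal memory (phase -1) before MPI_Finalize,
    // mirroring the termination step in the MKL cluster_sparse_solver examples.
    phase = -1;
    cluster_sparse_solver(pt_, &one, &one, &mtype, &phase, &glbRows, matrix.valuePtr(),
                          matrix.outerIndexPtr(), matrix.innerIndexPtr(),
                          nullptr, &one, iparm_, &one, b.data(), x.data(), &comm, &error);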

    MPI_Finalize();
}
Gennady_F_Intel
Moderator

I didn't check, but do you expect to see identical results?

miao0109
Beginner

The results should be roughly the same, but the MKL 2021 result is completely different. I think this is a bug.
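One way to decide which version's answer is trustworthy is to compute the relative residual ||b - A*x|| / ||b|| over both ranks; the run whose residual is near machine precision is the correct one. Below is a minimal sketch, meant to be pasted right after the cluster_sparse_solver() call in the program above. It reuses that program's variables (matrix, b, x, lclRows, glbRows, rank) and assumes an MPI-3 implementation (for MPI_CXX_DOUBLE_COMPLEX) and that ranks hold consecutive row blocks in rank order:

    // Gather the distributed solution so each rank can form its local rows of A*x.
    int nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    std::vector<int> counts(nprocs), displs(nprocs, 0);
    MPI_Allgather(&lclRows, 1, MPI_INT, counts.data(), 1, MPI_INT, MPI_COMM_WORLD);
    for (int i = 1; i < nprocs; ++i) displs[i] = displs[i - 1] + counts[i - 1];

    Eigen::VectorXcd x_glb(glbRows);
    MPI_Allgatherv(x.data(), lclRows, MPI_CXX_DOUBLE_COMPLEX,
                   x_glb.data(), counts.data(), displs.data(),
                   MPI_CXX_DOUBLE_COMPLEX, MPI_COMM_WORLD);

    // Local rows of the residual r = b - A*x; sum the squared norms over all ranks.
    Eigen::Map<Eigen::VectorXcd> b_loc(b.data(), lclRows);
    Eigen::VectorXcd r = b_loc - matrix * x_glb;
    double loc[2] = {r.squaredNorm(), b_loc.squaredNorm()};
    double glb[2];
    MPI_Allreduce(loc, glb, 2, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        std::cout << "relative residual = " << std::sqrt(glb[0] / glb[1]) << '\n';  // needs <cmath>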

Gennady_F_Intel
Moderator

Yes, I confirmed the problem you reported. We will investigate the cause of the issue.



Gennady_F_Intel
Moderator

As a temporary workaround, please set iparm_[12] = 1; the results will then be as expected.
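Applied to the code above, that is one extra line in the iparm_ block after pardisoinit(). (Note: iparm_[12] in zero-based C indexing corresponds to iparm(13) in the one-based documentation, the weighted-matching accuracy option.)

    iparm_[12] = 1;  // Improved accuracy using weighted matching (workaround)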

