Intel® oneAPI Math Kernel Library
Ask questions and share information with other developers who use Intel® Math Kernel Library.

error with blacs_get and blacs_gridinit

BrianZHANG
Beginner

I am trying to call ScaLAPACK functions from Fortran using Visual Studio 2019 under Windows, but I have a problem initializing the BLACS environment: the program always reports an error at blacs_gridinit.

Below is my test code; the launch command is "C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\bin\mpiexec.exe" -n 2 $(TargetPath)

    program ScaLAPACKExample
    use mpi
    implicit none

    integer :: myrank, nprocs, ierr
    integer :: ictxt, myrow, mycol, nprow, npcol
    integer, parameter :: n = 1000
    integer :: nb = 100
    integer :: info, i, j
    integer :: desca(9), descA_local(9)
    double precision, allocatable :: A(:,:), A_local(:,:)
    integer, allocatable :: ipiv(:)

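    ! Find this process's rank and the total number of started processes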
    call blacs_pinfo(myrank, nprocs)

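    ! Get the default BLACS system context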
    call blacs_get(-1,0,ictxt)

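    ! Requested process grid shape (nprow x npcol)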
    nprow = 5
    npcol = 5

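    ! Map the processes onto an nprow x npcol grid in row-major order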
    call blacs_gridinit(ictxt, 'R', nprow, npcol)

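    ! Query this process's row and column coordinates in the grid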
    call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)

    ! Allocate global matrix A at root and local matrix A_local at each process
    if (myrank == 0) then
        allocate(A(n, n))
    endif

    allocate(A_local(nb, nb))
    allocate(ipiv(n + mod(n, nprow*npcol)))

    ! Step 1: Initialize matrix A at root
    if (myrank == 0) then
        call random_seed()
        do i = 1, n
            do j = 1, n
                call random_number(A(i, j))
            end do
        end do
    endif

    ! Step 2: Set up descriptor for A and A_local
    if (myrank == 0) then
        call descinit(desca, n, n, nb, nb, 0, 0, ictxt, n, info)
    endif
    call descinit(descA_local, n, n, nb, nb, 0, 0, ictxt, nb, info)

    ! Step 3: Distribute the matrix A
    call pdgemr2d(n, n, A, 1, 1, desca, A_local, 1, 1, descA_local, ictxt)

    ! Step 4: Perform LU decomposition
    call pdgetrf(n, n, A_local, 1, 1, descA_local, ipiv, info)

    if (myrank == 0) then
        if (info == 0) then
            print *, "LU decomposition successful."
        else
            print *, "LU decomposition failed with info =", info
        endif
    endif

    ! Clean up
    call blacs_gridexit(ictxt)
    call MPI_Finalize(ierr)
    if (allocated(A)) deallocate(A)
    deallocate(A_local, ipiv)

    end program ScaLAPACKExample

When I run the program with the following settings, linking the LP64 libraries, I get this error:

Fortran-Additional Include Directories:
C:\Program Files (x86)\Intel\oneAPI\mkl\2024.1\include\mkl\intel64\lp64;
C:\Program Files (x86)\Intel\oneAPI\mkl\2024.1\include;
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\include;
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\include\mpi\;

Linker-Additional Library Directories:
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\lib;

Linker-Input:
mkl_blas95_lp64.lib mkl_lapack95_lp64.lib mkl_scalapack_lp64.lib mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib mkl_blacs_intelmpi_lp64.lib libiomp5md.lib impi.lib

Error Information:
Abort(739350022) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Group_incl: Unknown error class, error stack:
PMPI_Group_incl(166).............: MPI_Group_incl(group=0x88000001, n=25, ranks=000002A1FF104980, new_group=000000D74DCFF680) failed
MPIR_Group_check_valid_ranks(240): Invalid rank in rank array at index 2; value is 2 but must be in the range 0 to 1

When linking against the ILP64 libraries instead, I get a different error:

Fortran-Additional Include Directories:
C:\Program Files (x86)\Intel\oneAPI\mkl\2024.1\include\mkl\intel64\ilp64;
C:\Program Files (x86)\Intel\oneAPI\mkl\2024.1\include;
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\include;
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\include\mpi\;

Linker-Additional Library Directories:
C:\Program Files (x86)\Intel\oneAPI\mpi\2021.12\lib;

Linker-Input:
mkl_blas95_ilp64.lib mkl_lapack95_ilp64.lib mkl_scalapack_ilp64.lib mkl_intel_ilp64.lib mkl_intel_thread.lib mkl_core.lib mkl_blacs_intelmpi_ilp64.lib libiomp5md.lib impi.lib

Error Information:
Abort(739331589) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Comm_group: Unknown error class, error stack:
PMPI_Comm_group(160): MPI_Comm_group(comm=0xcccccccc, group=000000DC2FF4F910) failed
PMPI_Comm_group(116): Invalid communicator

I also compiled this example on Linux and got the same error.

mpiifort -g -O0 -I${MKLROOT}/include/intel64/ilp64 -i8 -I"${MKLROOT}/include" *.f90 ${MKLROOT}/lib/intel64/libmkl_blas95_ilp64.a ${MKLROOT}/lib/intel64/libmkl_lapack95_ilp64.a -L${MKLROOT}/lib/intel64 -lmkl_scalapack_ilp64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_ilp64 -liomp5 -lpthread -lm -ldl -o a.out

These settings were written based on the oneMKL Link Line Advisor. I have no idea what the problem is; if anyone can identify it, I would be very grateful.

Gennady_F_Intel
Moderator

You might look at the pdgetrfx.f ScaLAPACK example shipped with the official oneMKL distribution (e.g., $MKLROOT/share/doc/mkl/examples/example_cluster_f.zip) and see how it works there.
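For reference, the first error stack above already hints at the cause: MPI_Group_incl is called with n=25 ranks while only 2 were launched, which is what blacs_gridinit does when asked for a 5 x 5 grid. The cluster examples instead derive the grid shape from the number of processes actually started, so that nprow*npcol never exceeds nprocs. A minimal sketch of that pattern (illustrative only, not the exact example code):

    program grid_setup_sketch
    implicit none
    integer :: myrank, nprocs, ictxt, nprow, npcol, myrow, mycol

    ! Find this process's rank and the total number of processes
    call blacs_pinfo(myrank, nprocs)

    ! Pick the most nearly square grid that exactly fits nprocs,
    ! so blacs_gridinit never asks for more processes than exist
    nprow = int(sqrt(real(nprocs)))
    do while (mod(nprocs, nprow) /= 0)
        nprow = nprow - 1
    end do
    npcol = nprocs / nprow

    call blacs_get(-1, 0, ictxt)
    call blacs_gridinit(ictxt, 'R', nprow, npcol)
    call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)

    ! ... set up descriptors and call ScaLAPACK routines here ...

    call blacs_gridexit(ictxt)
    call blacs_exit(0)
    end program grid_setup_sketch

With mpiexec -n 2 this yields a 1 x 2 grid. For the ILP64 build, also double-check that the compiler promotes default integers to 8 bytes (-i8 on Linux, /integer-size:64 on Windows); with the ILP64 libraries every integer argument to ScaLAPACK/BLACS must be 64-bit, and a 4-byte mismatch can produce "Invalid communicator"-style failures like the one above.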


Gennady_F_Intel
Moderator

We were unable to hear back from you.

If you have any further queries, please post a new question, as this thread will no longer be monitored by the MKL team.

