Hi everyone,
I wrote the attached function, which is called by some (but not all) MPI processes, grouped into disjoint MPI communicators (COMM). The function simply multiplies the input matrices, A^t * B = C. The SubMatrix type is the following struct:
struct SubMatrix
{
    Matrix *M;
    integer beg_row;
    integer beg_col;
    integer num_rows;
    integer num_cols;
};
typedef struct SubMatrix SubMatrix;
whereas a Matrix is:
struct Matrix
{
    real *A;
    integer num_rows;
    integer num_cols;
};
typedef struct Matrix Matrix;
Here, real is either double or float (double in this case), and integer is either unsigned int or unsigned long.
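To make the types concrete, here is a small self-contained sketch; the make_submatrix helper is only for illustration and is not part of my code, and the structs are repeated from above so the snippet compiles on its own:

/* Illustration only: how the types fit together. */
typedef double real;              /* real resolves to double in this build     */
typedef unsigned long integer;    /* integer is unsigned int or unsigned long  */

struct Matrix    { real *A; integer num_rows; integer num_cols; };
typedef struct Matrix Matrix;

struct SubMatrix { Matrix *M; integer beg_row; integer beg_col;
                   integer num_rows; integer num_cols; };
typedef struct SubMatrix SubMatrix;

/* A SubMatrix is meant as a view on the num_rows x num_cols block of *M
 * starting at (beg_row, beg_col); it does not copy any data.            */
static SubMatrix make_submatrix(Matrix *M, integer beg_row, integer beg_col,
                                integer num_rows, integer num_cols)
{
    SubMatrix S = { M, beg_row, beg_col, num_rows, num_cols };
    return S;
}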
The main function correctly creates the communicators COMM. int *IDs is an array containing the MPI_COMM_WORLD ranks of the processes that need to perform the matrix multiplication; these are needed to initialize the imap array passed to blacs_gridmap.
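To make the call sequence concrete, here is a stripped-down sketch of the steps I am describing. It is not the attached function: map_grid, IDs and nIDs are placeholder names, the BLACS context ictxt is assumed to have been obtained earlier, and the blacs_gridmap_ prototype is what I believe the ILP64 MKL BLACS expects.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <mkl_types.h>   /* MKL_INT is 64-bit here because of -DMKL_ILP64 */

/* BLACS grid-mapping routine; prototype assumed, every integer is MKL_INT. */
void blacs_gridmap_(MKL_INT *ictxt, MKL_INT *usermap, MKL_INT *ldumap,
                    MKL_INT *nprow, MKL_INT *npcol);

/* ictxt : BLACS context obtained earlier in the attached function
   IDs   : MPI_COMM_WORLD ranks of the processes that form the grid
   nIDs  : number of such processes                                        */
void map_grid(MKL_INT *ictxt, const int *IDs, int nIDs)
{
    int dims[2] = {0, 0};
    MPI_Dims_create(nIDs, 2, dims);          /* nprow x npcol, as in main() */
    MKL_INT nprow = dims[0], npcol = dims[1], ldumap = nprow;

    /* usermap(i,j) = world rank placed at grid position (i,j), column-major */
    MKL_INT *imap = malloc((size_t)(nprow * npcol) * sizeof *imap);
    for (MKL_INT j = 0; j < npcol; ++j)
        for (MKL_INT i = 0; i < nprow; ++i)
            imap[i + j * ldumap] = (MKL_INT)IDs[i + j * nprow];

    printf("before gridmap\n");
    /* on return *ictxt should hold the new grid context; this is the call
       where my real code gets stuck                                        */
    blacs_gridmap_(ictxt, imap, &ldumap, &nprow, &npcol);
    printf("after gridmap\n");

    free(imap);
}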
I am running my code on a cluster and I compile as follows:
mpiicc -std=c99 -O3 -qopenmp -DMKL_ILP64 -I${MKLROOT}/include main.c matrices.c -o mainDis -L${MKLROOT}/lib/intel64 -mkl -lmkl_scalapack_ilp64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_ilp64 -liomp5 -lpthread -lm -ldl
The attached function is in matrices.c. I run main.c with 8 processes, and I expect processes 0 and 1 to create one communicator and call the attached function, and likewise for processes 2 and 3. They do create their communicators correctly and assign values to nprow and npcol by calling MPI_Dims_create, and imap holds the values I expect (0 and 1 in one case, 2 and 3 in the other). However, the program gets stuck after printing "before gridmap", i.e. while it should be executing blacs_gridmap().
Can you please help me see what I am doing wrong?