Intel® oneAPI Math Kernel Library

BLACS and multiple MPI communicators

Pilkyung_Moon

Dear all,

I am trying to use the parallel version of ?heevr (i.e., p?heevr from ScaLAPACK) to compute eigenvalues. To do so, it seems that I need to initialize BLACS.

There are N cores in the system, which I have divided into n subgroups of m cores each (N = n x m) using MPI_Comm_split(). Each subgroup is assigned a different matrix.

My question is: is it possible to initialize BLACS in each subgroup independently? I ask because, while MPI can manage the cores through communicators (MPI_COMM_WORLD for all N cores plus one extra communicator per subgroup), I cannot find an equivalent 'tag' in BLACS.
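For reference, here is a minimal sketch of what I am doing on the MPI side (the subgroup size m = 4 is only an example value; in the real code N, n, and m come from the job configuration):

/* Split MPI_COMM_WORLD's N ranks into n subgroups of m ranks each. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    const int m = 4;                  /* cores per subgroup (example value) */
    const int color = world_rank / m; /* subgroup index: 0 .. n-1           */

    MPI_Comm subcomm;                 /* communicator for this subgroup     */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

    /* ... each subgroup diagonalizes its own matrix here ... */

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}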

Thank you in advance.

Pilkyung

Konstantin_A_Intel

Hi Pilkyung,

There is a way in BLACS to create multiple grids. Please take a look at the blacs_gridmap(...) routine.

https://software.intel.com/en-us/node/522199#E8F8F80B-B8BC-44F3-8253-00C5D011CEAE

Here's the part related to your question:

If you are an experienced user, blacs_gridmap allows you to take advantage of your system's actual layout. That is, you can map nodes that are physically connected to be neighbors in the BLACS grid, etc. The blacs_gridmap routine also opens the way for multigridding: you can separate your nodes into arbitrary grids, join them together at some later date, and then re-split them into new grids. blacs_gridmap also provides the ability to make arbitrary grids or subgrids (for example, a "nearest neighbor" grid), which can greatly facilitate operations among processes that do not fall on a row or column of the main process grid.
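For illustration, here is an untested sketch in C of how your N = n x m processes could be mapped onto n independent BLACS grids with blacs_gridmap. The subgroup size m = 4 and the 1 x m grid shape are only example choices, and the sketch assumes that all processes call blacs_gridmap together, each passing the MPI_COMM_WORLD ranks of its own subgroup (the usual multigridding pattern); please check this against the documentation for your BLACS version.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* C interface to the BLACS routines (linked from the MKL BLACS library). */
extern void Cblacs_get(int icontxt, int what, int *val);
extern void Cblacs_gridmap(int *icontxt, int *usermap, int ldumap,
                           int nprow, int npcol);
extern void Cblacs_gridinfo(int icontxt, int *nprow, int *npcol,
                            int *myrow, int *mycol);
extern void Cblacs_gridexit(int icontxt);
extern void Cblacs_exit(int notdone);

int main(int argc, char **argv)
{
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    const int m = 4;                  /* processes per subgroup (example)   */
    const int group = world_rank / m; /* subgroup this process belongs to   */

    /* Default (system) context spanning all N processes. */
    int sys_ctxt;
    Cblacs_get(0, 0, &sys_ctxt);

    /* usermap holds the MPI_COMM_WORLD ranks forming THIS process's grid,
     * laid out column-major as a 1 x m grid (ldumap = nprow = 1). */
    int *usermap = malloc(m * sizeof(int));
    for (int j = 0; j < m; ++j)
        usermap[j] = group * m + j;

    /* On input the parent context, on output the new subgroup context.
     * Assumption: all processes call this together, with disjoint maps. */
    int grid_ctxt = sys_ctxt;
    Cblacs_gridmap(&grid_ctxt, usermap, 1, 1, m);

    /* grid_ctxt can now be passed to descinit and p?heevr just like a
     * context created by blacs_gridinit; each subgroup works independently. */
    int nprow, npcol, myrow, mycol;
    Cblacs_gridinfo(grid_ctxt, &nprow, &npcol, &myrow, &mycol);
    printf("world rank %d -> grid %d, position (%d,%d) of %d x %d\n",
           world_rank, group, myrow, mycol, nprow, npcol);

    free(usermap);
    Cblacs_gridexit(grid_ctxt);
    Cblacs_exit(1);                   /* 1: do not let BLACS finalize MPI   */
    MPI_Finalize();
    return 0;
}

You can of course choose a different nprow x npcol shape per subgroup; p?heevr only requires that the matrix be distributed block-cyclically over the grid associated with the context you pass in.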

Regards,

Konstantin

Pilkyung_Moon

Thank you, Konstantin!

I will try it.

Best,

Pilkyung
