Intel® MPI Library

Intel MPI 2019 with Mellanox ConnectX-5 / provider=ofi_rxm

Eric_W_3

Hi, we are currently standing up a new cluster with Mellanox ConnectX-5 adapters. I have found that with Open MPI, MVAPICH2, and Intel MPI 2018 we can run MPI jobs on all 960 cores in the cluster; however, with Intel MPI 2019 we cannot get beyond ~300 MPI ranks. If we do, we get the following error for every rank:

Abort(273768207) on node 650 (rank 650 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack: 
PMPI_Comm_split(507)...................: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=650, new_comm=0x7911e8) failed 
PMPI_Comm_split(489)...................: 
MPIR_Comm_split_impl(167)..............: 
MPIR_Allgather_intra_auto(145).........: Failure during collective 
MPIR_Allgather_intra_auto(141).........: 
MPIR_Allgather_intra_brucks(115).......: 
MPIC_Sendrecv(344).....................: 
MPID_Isend(662)........................: 
MPID_isend_unsafe(282).................: 
MPIDI_OFI_send_lightweight_request(106): 
(unknown)(): Other MPI error 
---------------------------------------------------------------------------------------------------------- 
This is with the default FI_PROVIDER, ofi_rxm. If we switch to "verbs", we can run on all 960 cores, but tests show an order-of-magnitude increase in latency and much longer run times.

We have tried installing our own libfabric (from the git repo; we verified with verbose debugging that we are using this libfabric), and this behavior does not change.

Is there anything I can change to allow all 960 cores with the default ofi_rxm provider? Or is there a way to improve performance with the verbs provider?

For completeness: 
OFED: MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.6-x86_64
OS: CentOS 7.6.1810 (kernel 3.10.0-957.21.3.el7.x86_64)
Intel Parallel Studio: 19.0.4.243
InfiniBand controller: Mellanox Technologies MT27800 Family [ConnectX-5]
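
In case it helps with reproducing this, below is a minimal sketch of the kind of call that fails for us. I am assuming here that a plain MPI_Comm_split across all ranks, with the same color=0 / key=rank pattern shown in the stack above, is enough to exercise the same OFI send path; the test program and its file/binary names are made up for illustration and are not our production code.

/* Hypothetical minimal reproducer for the MPI_Comm_split failure.
 * Build with the Intel wrappers, e.g.:  mpiicc -o split_test split_test.c
 * Launch across all ranks, e.g.:        mpirun -n 960 ./split_test   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm split_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Same pattern as the failing call in the error stack above:
     * color=0 puts every rank in a single group, key=rank keeps rank order. */
    MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &split_comm);

    if (rank == 0)
        printf("MPI_Comm_split completed on %d ranks\n", size);

    MPI_Comm_free(&split_comm);
    MPI_Finalize();
    return 0;
}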


Thanks! 

Eric
 

ferrao

Eric, have you ever figured this out? Intel MPI 2020.1 still has the same issue when running with more than 640 cores.

DrAmarpal_K_Intel

Hi Vinicius,

Could you please share the full command line with which you invoke your application, along with any environment variables that you set separately? I am assuming you are using the default mlx provider; please confirm by running with I_MPI_DEBUG=5. Please also refer to the following article as a general guideline for running the latest versions of the Intel MPI Library on InfiniBand:

https://software.intel.com/en-us/articles/improve-performance-and-stability-with-intel-mpi-library-on-infiniband
