Intel® MPI Library

Default switch-over limits for Intel MPI Library 2019

Afzal__Ayesha
Beginner

Hi,
The Intel MPI Library 2019 uses the PSM 2.x interface for the Omni-Path fabric and the PSM 1.x interface for the QDR fabric. For very small message sizes, there appears to be a protocol switch-over in the MPI implementation for the Omni-Path network, but not for the InfiniBand interconnect. Is that correct? Furthermore, what are the default eager limits for intra-node and inter-node communication in Intel MPI 2019, and which control variables tune these values? Thanks

Dmitry_D_Intel
Employee

Hi,

Starting with Intel MPI 2019, we rely on the OFI/libfabric infrastructure for the low-level transport: https://github.com/ofiwg/libfabric/

Several knobs are available for devices with a verbs interface.

We use the fi_rxm + fi_verbs provider combination for the verbs RC path, so you can check all of the available knobs on the corresponding man pages (a short usage sketch follows the links):

- https://ofiwg.github.io/libfabric/v1.7.1/man/fi_rxm.7.html

- https://ofiwg.github.io/libfabric/v1.7.1/man/fi_verbs.7.html
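
As a minimal sketch of how one might pin a run to this path and confirm which provider is actually selected (FI_PROVIDER and FI_LOG_LEVEL are standard libfabric variables, I_MPI_DEBUG is an Intel MPI variable; the application name is a placeholder):

export FI_PROVIDER=verbs    # restrict libfabric to the verbs core provider; the RxM utility provider is typically layered on top for RDM endpoints
export I_MPI_DEBUG=5        # Intel MPI prints the selected libfabric provider during startup
export FI_LOG_LEVEL=info    # optional: enable libfabric's own diagnostic logging
mpirun -n 2 ./your_app      # your_app stands in for the actual binary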

 

The most relevant knobs to your question are:

FI_VERBS_INLINE_SIZE, FI_OFI_RXM_BUFFER_SIZE, FI_OFI_RXM_SAR_LIMIT, FI_VERBS_MR_CACHE_ENABLE
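
For example, a minimal sketch of setting them per run (the variable names are the ones above, documented in the linked man pages; the numeric values are purely illustrative, not tuned recommendations):

export FI_OFI_RXM_BUFFER_SIZE=32768    # messages below this size take the eager/bounce-buffer path (default is about 16 KB per the fi_rxm man page)
export FI_OFI_RXM_SAR_LIMIT=262144     # messages above this size use the rendezvous protocol; sizes in between use SAR (default 256 KB)
export FI_VERBS_INLINE_SIZE=64         # maximum inline data size for the verbs provider (illustrative value)
export FI_VERBS_MR_CACHE_ENABLE=1      # enable the memory registration cache in the verbs provider
mpirun -n 4 ./your_app                 # your_app stands in for the actual binary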

 

It would be great if you could clarify what kind of problem you would like to address. What kind of scenario would you like to tune?

BR,

Dmitry

 

Afzal__Ayesha
Beginner

Hi Dmitry,

Thanks for your quick reply. To be more precise, I have two questions.

1. Why are there two different default values, 16 KB and 256 KB, for messages transmitted via the rendezvous protocol (FI_OFI_RXM_BUFFER_SIZE and FI_OFI_RXM_SAR_LIMIT)?

2. For message sizes greater than 4096 bytes, there seems to be an implementation difference only for the Omni-Path networking system, and I cannot find 4096 as the default value of any knob, so I am unable to work out the real reason. Where does this threshold come from?
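
(For reference, assuming the fi_info utility that ships with libfabric is available, the knobs and their documented descriptions and defaults can be listed locally like this:)

fi_info -e | grep -i -A 2 FI_OFI_RXM   # RxM knobs, including the buffer size and SAR limit
fi_info -e | grep -i -A 2 FI_VERBS     # verbs provider knobs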

Regards,

 
