Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Specify a shared memory transport

Jervie
Novice

Hello,

 

I would like to know which transport I_MPI_SHM uses when it is set to auto (the default) while running a parallel application with oneAPI, and whether I can specify a transport such as cma, as used in UCX.

 

When I look into I_MPI_SHM using impi_info -v I_MPI_SHM, it says:

---

I_MPI_SHM
MPI Datatype:
MPI_CHAR
Description:
Select a shared memory transport to be used.
Syntax
I_MPI_SHM=<transport>
Arguments
<transport> - Define a shared memory transport solution.
-----------------------------------------------------------------------
> disable | no | off | 0 - Do not use shared memory transport.
> auto - Select a shared memory transport solution automatically.
> bdw_sse - The shared memory transport solution tuned for Intel(R)
microarchitecture code name Broadwell. The SSE/SSE2/SSE3 instruction
set is used.
> bdw_avx2 - The shared memory transport solution tuned for Intel(R)
microarchitecture code name Broadwell. The AVX2 instruction set is used.
> skx_sse - The shared memory transport solution tuned for Intel(R)
Xeon(R) processors based on Intel(R) microarchitecture code name Skylake.
The SSE/SSE2/SSE3 instruction set is used.
> skx_avx2 - The shared memory transport solution tuned for Intel(R)
Xeon(R) processors based on Intel(R) microarchitecture code name Skylake.
The AVX2 instruction set is used.
> skx_avx512 - The shared memory transport solution tuned for Intel(R)
Xeon(R) processors based on Intel(R) microarchitecture code name Skylake.
The AVX512 instruction set is used.
> knl_ddr - The shared memory transport solution tuned for Intel(R)
microarchitecture code name Knights Landing.
> knl_mcdram - The shared memory transport solution tuned for Intel(R)
microarchitecture code name Knights Landing. Shared memory buffers
may be partially located in the MultiChannel DRAM (MCDRAM).

---

 

I wonder, when I_MPI_SHM is set to auto by default, does that mean it uses one of the transports above? For example, on Intel(R) Xeon(R) processors based on the Intel(R) microarchitecture code name Skylake, would it use skx_sse, skx_avx2, or skx_avx512, and would it prefer skx_avx512 when the AVX512 instruction set is available?

 

And could I use cma, as used in UCX, by setting export I_MPI_SHM=cma?

 

Thank you!

SantoshY_Intel
Moderator

Hi,


Thanks for reaching out to us.


The value of I_MPI_SHM depends on the value of I_MPI_FABRICS as follows:

  1. If I_MPI_FABRICS is ofi, I_MPI_SHM is disabled.
  2. If I_MPI_FABRICS is shm:ofi, I_MPI_SHM defaults to "auto" or takes the specified value (see the example below).
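
For example, a minimal sketch of the second case (./my_app is just a placeholder for your MPI application):

export I_MPI_FABRICS=shm:ofi   # shared memory for intra-node traffic, OFI for inter-node traffic
export I_MPI_SHM=auto          # the default; the library picks a transport suited to your CPU
mpirun -n 4 ./my_app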


>>"I wonder when I_MPI_SHM is specified with auto by default, does it mean I_MPI_SHM use one of the transport above?"

Yes.

When I_MPI_SHM is set to auto (the default), it selects one of the transports listed above based on the Intel microarchitecture code name of the processor. For more information, please refer to the link below:

https://www.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/environment-variables-for-fabrics-control/shared-memory-control.html
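
As a quick way to see which of the instruction-set variants could apply on your node, you can check the CPU flags. This is not an Intel MPI command, just a standard Linux sketch for confirming which instruction sets the processor reports:

lscpu | grep -o -E 'avx512[a-z]*|avx2' | sort -u   # lists AVX2/AVX512 extensions if the CPU supports them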


>>"And could I use cma used in UCX by setting export I_MPI_SHM=cma?"

No, you cannot set I_MPI_SHM to cma.

Only the documented values (disable | no | off | 0, bdw_sse, bdw_avx2, skx_sse, skx_avx2, skx_avx512, knl_ddr, knl_mcdram, clx_sse, clx_avx2, clx_avx512, clx-ap, icx) can be explicitly set with I_MPI_SHM.
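
If you want to override the automatic choice, you can set one of those values explicitly. A minimal sketch (skx_avx512 is only appropriate on Skylake-generation Xeon processors with AVX512 support, and ./my_app is a placeholder for your application):

export I_MPI_FABRICS=shm:ofi   # shared memory transport is only used when shm is in the fabric list
export I_MPI_SHM=skx_avx512    # one of the values listed by impi_info -v I_MPI_SHM
mpirun -n 4 ./my_app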


Thanks & Regards,

Santosh


SantoshY_Intel
Moderator

Hi,


We haven't heard back from you. Is there anything else we can help you with? If not, could you please confirm that we can close this thread from our end?


Thanks & Regards,

Santosh


SantoshY_Intel
Moderator

Hi,


We have not heard back from you. This thread will no longer be monitored by Intel. If you need further assistance, please post a new question.


Thanks & Regards,

Santosh


