Intel® MPI Library

I_MPI_WAIT_MODE replacement in Intel MPI?

B__C
Beginner

It looks like I_MPI_WAIT_MODE has been removed in Intel MPI 2019 (https://software.intel.com/en-us/articles/intel-mpi-library-release-notes-linux). We previously used it to avoid busy polling in our code; are there any suggestions on what to replace it with?
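
For context, the kind of application-level workaround we are considering if there is no library-side option is to replace a blocking MPI_Wait with an MPI_Test loop that yields the core between probes. This is a minimal sketch for illustration only (not our production code), and the choice of sched_yield() is just an assumption:

```
/* Sketch: avoid hard busy-waiting on a request by testing and yielding.
 * Application-level workaround, not an Intel MPI feature; the choice of
 * sched_yield() here is only an example. */
#include <mpi.h>
#include <sched.h>   /* sched_yield */

/* Wait for req to complete, yielding the core between polls so that
 * other oversubscribed ranks can run. */
void wait_with_yield(MPI_Request *req, MPI_Status *status)
{
    int done = 0;
    while (!done) {
        MPI_Test(req, &done, status);   /* drives progress and checks completion */
        if (!done)
            sched_yield();              /* give up the core instead of spinning */
    }
}
```

The trade-off is some extra completion latency compared to letting the library spin.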

Yury_K_Intel
Employee

Hello,

In Intel MPI 2019 Update 3 there is a new, currently undocumented feature that may help you: I_MPI_THREAD_YIELD={2|3}. Here is a short description of the possible values; a small sketch of the corresponding back-off calls follows the list:

I_MPI_THREAD_YIELD=0: no back-off [default]
I_MPI_THREAD_YIELD=1: use the PAUSE instruction for back-off
I_MPI_THREAD_YIELD=2: use sched_yield() for back-off
I_MPI_THREAD_YIELD=3: use usleep(0) for back-off
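
Conceptually, this setting controls what the wait loop does between polls. Roughly, the options correspond to calls like the following (an illustration only, not the actual Intel MPI implementation):

```
/* Illustration only: what the back-off options roughly correspond to
 * inside a polling loop. This is not Intel MPI source code. */
#include <immintrin.h>  /* _mm_pause */
#include <sched.h>      /* sched_yield */
#include <unistd.h>     /* usleep */

void backoff(int yield_mode)
{
    switch (yield_mode) {
    case 0: break;                  /* no back-off: keep spinning */
    case 1: _mm_pause(); break;     /* PAUSE instruction, stays on the core */
    case 2: sched_yield(); break;   /* let the OS scheduler run another rank/thread */
    case 3: usleep(0); break;       /* briefly take the kernel sleep path */
    }
}
```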

--

Best regards, Yury

B__C
Beginner

Thank you for your response.

I tried setting I_MPI_THREAD_YIELD to 0, 1, 2, and 3 and didn't see a timing difference between any of them; all are ~1.6x slower than runs with I_MPI_WAIT_MODE in the case where we oversubscribe the cores.

However, I don't think I have Update 3 yet, so maybe these weren't implemented until Update 3? The latest version I can find on our system is in "intel-2019/compilers_and_libraries_2019.0.117/", and I'm not sure how "2019.0.117" maps to the different Updates.

Yury_K_Intel
Employee

You are using the 2019 Gold (initial) release. If you download and install 2019 Update 3, it will appear as compilers_and_libraries_2019.3.<#pack_num> in your path.
I_MPI_THREAD_YIELD is available starting with 2019 Update 3.
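
If you want to confirm at run time which library build the application actually picked up, the standard MPI_Get_library_version call returns the full version string of the loaded library (with Intel MPI the string should identify the release and update). A minimal sketch:

```
/* Print the version string reported by the MPI library that was loaded. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_library_version(version, &len);
    if (rank == 0)
        printf("%s\n", version);
    MPI_Finalize();
    return 0;
}
```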

B__C
Beginner

Thank you for the information! I'll try again when 2019 Update 3 is on the system I run on.

B__C
Beginner

I am also curious: why is I_MPI_THREAD_YIELD undocumented? Do you think it will be documented in the future? (I ask because I am wondering if this means it might be removed like I_MPI_WAIT_MODE.)

Yury_K_Intel
Employee

We would like to see how this feature helps on real applications, and then we will definitely document it. There are no plans to remove it, but we need positive feedback from customers before we document it.

B__C
Beginner

Thank you, the explanation is very much appreciated. I'll post an update here once 2019 Update 3 is installed on the system I use.

B__C
Beginner

The admins installed Update 3 (so I now link with "compilers_and_libraries_2019.3.199"), but unfortunately trying I_MPI_THREAD_YIELD=2 or 3 did not make a difference. To be sure, after running the tests with Intel MPI 2019, I ran again with Intel MPI 2017; the timings are below.

These runs use 112 MPI ranks on a 56-core dual-socket Skylake node (oversubscribing the cores, since that is our standard way of running).

The results are:

Intel MPI 2017, I_MPI_WAIT_MODE enabled: ~80 s
Intel MPI 2017, I_MPI_WAIT_MODE unset: ~125 s
Intel MPI 2019, I_MPI_THREAD_YIELD unset: ~128 s
Intel MPI 2019, I_MPI_THREAD_YIELD=1: ~127 s
Intel MPI 2019, I_MPI_THREAD_YIELD=2: ~128 s
Intel MPI 2019, I_MPI_THREAD_YIELD=3: ~127 s

To be sure of linking, for the 2019 version, the linked libraries are:

```
The dynamically linked libraries for this binary are
skylake/test_2019/test/test.00.2019.x :

        linux-vdso.so.1 =>  (0x00007f564311f000)
        libmpi.so.12 => /soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/lib/release/libmpi.so.12 (0x00007f5641d72000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f5641b6a000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f564194e000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f564164c000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f564127f000)
        libgcc_s.so.1 => /soft/compilers/gcc/5.5.0/linux-rhel7-x86_64/lib64/libgcc_s.so.1 (0x00007f5641068000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f5640e64000)
        libfabric.so.1 => /soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/libfabric/lib/libfabric.so.1 (0x00007f5640c2c000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f5642f00000)

```

and my LD_LIBRARY_PATH started with:

```
/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/lib:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/libfabric/lib:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/lib/release:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/lib:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/compiler/lib/intel64_lin:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/libfabric/lib:/soft/compilers/intel-2019/compilers_and_libraries_2019.3.199/linux/mpi/intel64/lib/release:...
```

For the 2017 linked code, the dynamically linked libraries are:

```

The dynamically linked libraries for this binary are
skylake/test_2019/test/test.00.2017.x :

        linux-vdso.so.1 =>  (0x00007fff43d9b000)
        libmpi.so.12 => /soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/lib/libmpi.so.12 (0x00007f1a9a17a000)
        libmpifort.so.12 => /soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/lib/libmpifort.so.12 (0x00007f1a99dd1000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f1a99bc9000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1a999ad000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f1a996ab000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f1a992de000)
        libgcc_s.so.1 => /soft/compilers/gcc/5.5.0/linux-rhel7-x86_64/lib64/libgcc_s.so.1 (0x00007f1a990c7000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f1a98ec3000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f1a9aea2000)

```

and the start of LD_LIBRARY_PATH is

```

/soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/lib:/soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/compiler/lib/intel64:/soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/compiler/lib/intel64_lin:/soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/mpi/intel64/lib:/soft/compilers/intel/compilers_and_libraries_2017.4.196/linux/mpi/mic/lib:...

```
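
As an extra sanity check (not something that was part of the timing runs above), rank 0 can also report whether the I_MPI_THREAD_YIELD setting actually reaches the MPI processes:

```
/* Quick check that I_MPI_THREAD_YIELD is visible to the MPI processes.
 * Illustration only; not part of the timing runs above. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        const char *yield = getenv("I_MPI_THREAD_YIELD");
        printf("I_MPI_THREAD_YIELD = %s\n", yield ? yield : "(unset)");
    }
    MPI_Finalize();
    return 0;
}
```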

j0e
New Contributor I

It looks like I_MPI_WAIT_MODE has been restored in 2019 Update 5 (see the release-notes link in the original post). Both settings (I_MPI_WAIT_MODE and I_MPI_THREAD_YIELD) are now documented here: https://software.intel.com/en-us/mpi-developer-reference-linux-other-environment-variables

B__C
Beginner

This is great news! Thank you very much for the update!
