Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel MPI v2019 options corresponding to v2018

youn__kihang
Novice

Hi all,

I am posting to ask which Intel MPI v2019 options correspond to the ones I used with v2018.
I have tested various versions of the Intel compiler and MPI library to evaluate the performance of a weather forecasting model.
The tested versions are below (though the exact versions are not the main point).

Intel Compiler: 18u5, 19u2, 19u4, 19u5, 20u0, 20u1
Intel MPI Library: 17u4, 18u4, 19u6, 19u7

The best-performing pair is .
I think one of the reasons is that the MPI options differ between v2018 and v2019, and I am using fewer options with 2019 than I did with 2018.
Here are my options; there are many differences between v18 and v19.
Since many of the options I used in 2018 disappeared in 2019, my option list shrank considerably.
Could you tell me whether I removed anything incorrectly?
(That is, if an option has a 2019 replacement I do not know about, I may have simply dropped it.)

2018 version
export I_MPI_FALLBACK=0
export I_MPI_JOB_FAST_STARTUP=enable
export I_MPI_SCALABLE_OPTIMIZATION=enable
export I_MPI_TIMER_KIND=rdtsc
export I_MPI_PLATFORM_CHECK=0
export I_MPI_HYDRA_PMI_CONNECT=alltoall
export I_MPI_THREAD_LEVEL_DEFAULT=FUNNELED
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_UD=on
export I_MPI_DAPL_UD_RDMA_MIXED=on
export DAPL_IB_MTU=4096
export I_MPI_DAPL_TRANSLATION_CACHE=1
export I_MPI_DAPL_TRANSLATION_CACHE_AVL_TREE=1
export I_MPI_DAPL_UD_TRANSLATION_CACHE=1
export I_MPI_DAPL_UD_TRANSLATION_CACHE_AVL_TREE=1
export I_MPI_DAPL_UD_EAGER_DYNAMIC_CONNECTION=off
export I_MPI_DAPL_UD_MAX_MSG_SIZE=4096


2019 version
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_FORCE=gpfs
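
For reference, here is my rough guess at what fuller 2019 settings might look like, in case I dropped something that actually has a replacement. The fabric lines are guesses on my part (2019 dropped DAPL in favor of OFI, and I am assuming an InfiniBand cluster where the verbs provider applies), so please correct anything wrong here:

# Rough guess at 2019 counterparts (assumptions: OFI-only runtime, InfiniBand fabric)
export I_MPI_FABRICS=shm:ofi                  # shm:dapl has no direct 2019 equivalent
export FI_PROVIDER=verbs                      # my guess for InfiniBand; mlx/psm2/tcp on other fabrics
export I_MPI_THREAD_LEVEL_DEFAULT=FUNNELED    # still documented in 2019, as far as I can tell
export I_MPI_HYDRA_PMI_CONNECT=alltoall       # still documented in 2019, as far as I can tell
export I_MPI_EXTRA_FILESYSTEM=on
export I_MPI_EXTRA_FILESYSTEM_FORCE=gpfs
# Options I found no 2019 equivalent for: I_MPI_TIMER_KIND, I_MPI_FALLBACK,
# I_MPI_SCALABLE_OPTIMIZATION, I_MPI_JOB_FAST_STARTUP, and all I_MPI_DAPL_*/DAPL_* settings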


Thank you in advance

6 Replies
AbhishekD_Intel
Moderator

Hi Kihang,

With each new release you keep the support for previous versions' features unless something has been deprecated, and any removals are announced in the Release Notes. So most of the 2018 features are supported in the 2019 version, though there are some removals and naming-convention changes, which you can find in the Release Notes.

Some of the features you mention from the 2018 version have also been improved in 2019, for example I_MPI_FABRICS=shm in update 5.
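
For instance, on a single node you could restrict the library to the shared-memory transport alone. A minimal sketch (the rank count and binary name are placeholders):

export I_MPI_FABRICS=shm    # shared-memory transport only, suitable for single-node runs
mpirun -n 16 ./your_app     # placeholder launch line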

One option from your 2018 list, I_MPI_TIMER_KIND, is not yet implemented in 2019.

Note, however, that with I_MPI_EXTRA_FILESYSTEM=1 there is unexpected behavior in certain cases (particular file sizes and numbers of ranks) during MPI I/O operations on the GPFS filesystem. You may disable filesystem recognition as a workaround: I_MPI_EXTRA_FILESYSTEM=0.
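
In practice the workaround is a single line in your environment or job script, set before launching the application (a minimal sketch):

export I_MPI_EXTRA_FILESYSTEM=0    # disable native parallel-filesystem recognition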

Also note the DAPL deprecation in 2017 update 1; I_MPI_DAPL_TRANSLATION_CACHE and I_MPI_DAPL_UD_TRANSLATION_CACHE were already disabled by default as of MPI 2018.

For more details on the environment variables in 2019 update 7, you can visit https://software.intel.com/content/www/us/en/develop/documentation/mpi-d...
For details on new features, limitations, and removals, you can refer to https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-...

Warm Regards,

Abhishek

youn__kihang
Novice

Hi Abhishek,

Thank you for the kind reply.
I understand what you said, but I would like a more detailed explanation of I_MPI_EXTRA_FILESYSTEM.

Though there are unexpected behavior in certain (special file size and number of ranks) cases during MPI IO operations on GPFS filesystem in case of I_MPI_EXTRA_FILESYSTEM=1. You may disable filesystem recognition as a workaround: I_MPI_EXTRA_FILESYSTEM=0.

In the reference at software.intel.com/content/www/us/en/develop/documentation/mpi-developer..., the I_MPI_EXTRA_FILESYSTEM option is described as follows:

I_MPI_EXTRA_FILESYSTEM: Control native support for parallel file systems.
Syntax: I_MPI_EXTRA_FILESYSTEM=<arg>
Description: Use this environment variable to enable or disable native support for parallel file systems.

My understanding was that I_MPI_EXTRA_FILESYSTEM=1 takes the GPFS situation into account more fully, but you recommend disabling filesystem recognition.
Do you mean I_MPI_EXTRA_FILESYSTEM=0 is more stable?

AbhishekD_Intel
Moderator

Hi Kihang,

This is actually a known issue in the Intel MPI Library 2019 Update 5. At certain file sizes and numbers of ranks, your application may crash or show unexpected behavior during MPI I/O operations on the GPFS filesystem. As a workaround, you may disable filesystem recognition with I_MPI_EXTRA_FILESYSTEM=0. This is listed in the Known Issues for 2019 Update 5.
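
As a sketch of how you might apply and sanity-check the workaround (the rank count and binary name are placeholders; I_MPI_DEBUG only adds informational output):

export I_MPI_EXTRA_FILESYSTEM=0    # the workaround from the known issue
export I_MPI_DEBUG=5               # print the library configuration at startup
mpirun -n 64 ./your_app            # placeholder launch line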

For more details, refer to the link below and check the Known Issues and Limitations section for Intel® MPI Library 2019 Update 5.

https://software.intel.com/content/www/us/en/develop/articles/intel-mpi-library-release-notes-linux.html

I hope this will be helpful.

Warm Regards,

Abhishek

AbhishekD_Intel
Moderator

Hi Kihang,

Please let us know if the provided details helped you.

Warm Regards,

Abhishek

youn__kihang
Novice

Hi Abhishek,

Your answers helped our team a lot.
Thank you.

Best Regards,
Kihang

AbhishekD_Intel
Moderator

Hi Kihang,

Thank you for the confirmation. Please post a new thread if you have any further issues.

Warm Regards,

Abhishek
