Intel® MPI Library

Intel MPI file system environment variables for MPI-IO

wadud_miah
Novice

Hi,

I am using Intel(R) MPI Library, Version 2019 Update 3, and wanted to know which MPI-IO environment variables are available for GPFS. I came across I_MPI_EXTRA_FILE_SYSTEM; what values do I set it to for GPFS? I understand that the variables:

I_MPI_EXTRA_FILESYSTEM_LIST
I_MPI_EXTRA_FILESYSTEM

are deprecated. Are there any other Intel MPI environment variables for file systems such as GPFS?

Thanks in advance.

wadud_miah
Novice

I also came across the environment variable I_MPI_EXTRA_FILE_SYSTEM_LIST, but it doesn't seem to support GPFS. If I set this variable, the Intel MPI runtime reports that it is not supported.

PrasanthD_intel
Moderator

Hi Wadud,


There is no need to set any extra environment variables for the GPFS file system: as of Intel MPI 2019, GPFS is natively supported, as mentioned in the release notes (Intel® MPI Library Release Notes for Linux* OS).


Also, could you update to the latest version of Intel MPI? Some fixes for GPFS have been added in recent releases.
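For reference, on Intel MPI 2018 and earlier the deprecated variables were how native file-system support was enabled explicitly. A sketch of the old usage, assuming the pre-2019 variable names and values (check the Developer Reference for your exact version; the application name below is a placeholder):

```shell
# Intel MPI 2018 and earlier only; deprecated in 2019, where GPFS
# support is native and no extra variables are needed.
export I_MPI_EXTRA_FILESYSTEM=on        # enable native file-system support
export I_MPI_EXTRA_FILESYSTEM_LIST=gpfs # select the GPFS driver
mpirun -n 4 ./my_mpiio_app              # my_mpiio_app is a placeholder
```

On Intel MPI 2019 and later, simply launching the application with mpirun on a GPFS mount should use the native support without any of the above.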


Regards

Prasanth


wadud_miah
Novice
PrasanthD_intel
Moderator

Hi Wadud,


Thanks for the confirmation.

As your question has been answered, we are closing this thread and will no longer respond to it. If you require additional assistance from Intel, please start a new thread.


Regards

Prasanth

