Intel® Distribution for Python*

mpi4py: ImportError: libfabric.so.1

ciaron
Beginner

I've created a new conda environment on our compute cluster thus:

conda create -n mpi_py3 -c intel mpi4py

and a plain "import mpi4py" works fine (the conda list output is at the end of this message).

However, using python -m mpi4py.futures results in the following exception:

Traceback (most recent call last):
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/site-packages/mpi4py/futures/__init__.py", line 31, in <module>
    from .pool import MPIPoolExecutor
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/site-packages/mpi4py/futures/pool.py", line 14, in <module>
    from . import _lib
  File "/p/tmp/linstead/envs/mpi_py3/lib/python3.6/site-packages/mpi4py/futures/_lib.py", line 16, in <module>
    from .. import MPI
ImportError: libfabric.so.1: cannot open shared object file: No such file or directory
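
For reference, the futures entry point is normally launched under mpiexec, along these lines (the script name here is just a placeholder):

# minimal launch of the futures executor; "my_script.py" is hypothetical
mpiexec -n 4 python -m mpi4py.futures my_script.py

but as the traceback shows, the ImportError is raised during import, before any script runs.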


libfabric.so.1 exists in the <env>/lib/libfabric subdirectory, but that directory is not on the library search path. I can fix this by manually setting LD_LIBRARY_PATH to include lib/libfabric.

Similarly at runtime, MPI fails because it can't find the provider libraries in <env>/lib/libfabric/prov unless I also set FI_PROVIDER_PATH.
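
Concretely, the manual workaround amounts to something like this (the prefix matches the environment shown in the traceback):

# manual workaround: point the loader and libfabric at the env's own copies
export LD_LIBRARY_PATH=/p/tmp/linstead/envs/mpi_py3/lib/libfabric:$LD_LIBRARY_PATH
export FI_PROVIDER_PATH=/p/tmp/linstead/envs/mpi_py3/lib/libfabric/prov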

Should these manual steps be necessary, or is this a bug?

Best regards

Ciaron

# packages in environment at /p/tmp/linstead/envs/mpi_py3:
#
# Name                    Version                   Build  Channel
bzip2                     1.0.6                        17    intel
certifi                   2018.1.18                py36_2    intel
impi_rt                   2019.0                intel_117    intel
intelpython               2019.0                        2    intel
mpi4py                    3.0.0                    py36_3    intel
openssl                   1.0.2o                        3    intel
pip                       9.0.3                    py36_1    intel
python                    3.6.5                        11    intel
setuptools                39.0.1                   py36_0    intel
sqlite                    3.23.1                        1    intel
tcl                       8.6.4                        20    intel
tk                        8.6.4                        28    intel
wheel                     0.31.0                   py36_2    intel
xz                        5.2.3                         2    intel
zlib                      1.2.11                        5    intel

 

 

Todd_T_Intel
Employee

Hello,

This was caused by a change in the Intel MPI Library 2019 release. Many users have custom libfabric implementations, and the mpivars script, which picks those up and otherwise falls back to the bundled reference implementation, does not work inside a conda environment.

Adding <env>/lib/libfabric to LD_LIBRARY_PATH and <env>/lib/libfabric/prov to FI_PROVIDER_PATH is the correct workaround. Alternatively, if you have Intel Parallel Studio Cluster Edition, you can source the mpivars.sh script to set up the environment for MPI (using the version of Intel MPI installed with Cluster Edition).
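
To make this persistent for the environment, one option is a conda activation hook, sketched below; the activate.d mechanism is standard conda, while the mpivars.sh path shown is only a typical default install location, not a guaranteed one:

# Sketch: persist the workaround via a conda activation hook.
# Run these with the target environment already activated.
mkdir -p "$CONDA_PREFIX/etc/conda/activate.d"
cat > "$CONDA_PREFIX/etc/conda/activate.d/libfabric.sh" <<'EOF'
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib/libfabric${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export FI_PROVIDER_PATH="$CONDA_PREFIX/lib/libfabric/prov"
EOF

# Alternative (Cluster Edition): source mpivars.sh instead; this path is a
# common default and may differ on your system.
source /opt/intel/compilers_and_libraries_2019/linux/mpi/intel64/bin/mpivars.sh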

We will implement a reasonable fallback to the bundled defaults in a future release.

Sorry for the trouble,

Todd

ciaron
Beginner

Hi Todd

Thanks for the reply. That sounds reasonable; we're currently in the process of updating to the 2019 version. For now, I'll set these environment variables in the Python modulefile.

Best regards

Ciaron
