Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Re: Intel oneAPI with Podman

ugiwgh
Beginner

Hi,

I have sent an email to supportreplies@intel.com from my address wugh@paratera.com. Did you receive it?

 
The email message is as follows.
--------------------------------
I'm sorry. In the intel/oneapi-hpckit:devel-ubuntu20.04 container, I cd to the /opt/intel/oneapi/mpi/2021.6.0/test directory and compile with mpicc test.c, but when I execute ./a.out it ends with a segmentation fault.
So something is wrong with the image. I can provide the Dockerfile.
 
centos.oneapi.dockerfile
---------- FILE START ----------
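# NOTE: the FROM line (base image) is missing from the file as posted; a CentOS base such as centos:8 is presumably intended.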
RUN yum install -y wget make rsync gcc gcc-c++
WORKDIR /root
ADD l_BaseKit_p_2022.1.2.146_offline.sh .
RUN bash l_BaseKit_p_2022.1.2.146_offline.sh -a -s --eula accept --components intel.oneapi.lin.mkl.devel && rm -rf l_BaseKit_p_2022.1.2.146_offline.sh
ADD l_HPCKit_p_2022.1.2.117_offline.sh .
RUN bash l_HPCKit_p_2022.1.2.117_offline.sh -a -s --eula accept --components intel.oneapi.lin.ifort-compiler:intel.oneapi.lin.dpcpp-cpp-compiler-pro:intel.oneapi.lin.mpi.devel && rm -rf l_HPCKit_p_2022.1.2.117_offline.sh
ENV MKLROOT=/opt/intel/oneapi/mkl/2022.0.2
ENV PATH=/opt/intel/oneapi/mpi/2021.5.1//libfabric/bin:/opt/intel/oneapi/mpi/2021.5.1//bin:/opt/intel/oneapi/mkl/2022.0.2/bin/intel64:/opt/intel/oneapi/dev-utilities/2021.5.2/bin:/opt/intel/oneapi/debugger/2021.5.0/gdb/intel64/bin:/opt/intel/oneapi/compiler/2022.0.2/linux/lib/oclfpga/bin:/opt/intel/oneapi/compiler/2022.0.2/linux/bin/intel64:/opt/intel/oneapi/compiler/2022.0.2/linux/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/vasp.5.4.4/bin
ENV LIBRARY_PATH=/opt/intel/oneapi/tbb/2021.5.1/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/mpi/2021.5.1//libfabric/lib:/opt/intel/oneapi/mpi/2021.5.1//lib/release:/opt/intel/oneapi/mpi/2021.5.1//lib:/opt/intel/oneapi/mkl/2022.0.2/lib/intel64:/opt/intel/oneapi/compiler/2022.0.2/linux/compiler/lib/intel64_lin:/opt/intel/oneapi/compiler/2022.0.2/linux/lib
ENV LD_LIBRARY_PATH=/opt/intel/oneapi/tbb/2021.5.1/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/mpi/2021.5.1//libfabric/lib:/opt/intel/oneapi/mpi/2021.5.1//lib/release:/opt/intel/oneapi/mpi/2021.5.1//lib:/opt/intel/oneapi/mkl/2022.0.2/lib/intel64:/opt/intel/oneapi/debugger/2021.5.0/gdb/intel64/lib:/opt/intel/oneapi/debugger/2021.5.0/libipt/intel64/lib:/opt/intel/oneapi/debugger/2021.5.0/dep/lib:/opt/intel/oneapi/compiler/2022.0.2/linux/lib:/opt/intel/oneapi/compiler/2022.0.2/linux/lib/x64:/opt/intel/oneapi/compiler/2022.0.2/linux/lib/oclfpga/host/linux64/lib:/opt/intel/oneapi/compiler/2022.0.2/linux/compiler/lib/intel64_lin
ENV I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.5.1
ENV FI_PROVIDER_PATH=/opt/intel/oneapi/mpi/2021.5.1//libfabric/lib/prov:/usr/lib64/libfabric
CMD source /opt/intel/oneapi/setvars.sh
---------- FILE END ----------
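 
The image is then built with something along these lines (the tag name here is only an assumption):

$ podman build -t oneapi-centos -f centos.oneapi.dockerfile .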
 
On my host, the Intel MPI version is:
$ mpirun --version
Intel(R) MPI Library for Linux* OS, Version 2021.5 Build 20211102 (id: 9279b7d62)
 
$ mpirun -np 2 ./mpitest
Hello world: rank 0 of 2 running on ga2210.para.bscc
Hello world: rank 1 of 2 running on ga2210.para.bscc
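 
For reference, a minimal MPI program that produces output like the above (a sketch of an assumed mpitest/test.c, not the actual source shipped with the toolkit):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                  /* the call that later fails with PMI errors */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks */
    MPI_Get_processor_name(name, &name_len); /* host this rank runs on */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}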
 
$ mpirun -np 2 podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -v .:/export:z 273b79f14177 /export/mpitest
[cli_0]: write_line error; fd=9 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=9 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Abort(1090575) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(143):
MPID_Init(1221)......:
MPIR_pmi_init(130)...: PMI_Get_appnum returned -1
[cli_0]: write_line error; fd=9 buf=:cmd=abort exitcode=1090575
:
system msg for write_line failure : Bad file descriptor
Attempting to use an MPI routine before initializing MPICH
[cli_1]: write_line error; fd=10 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_1]: Unable to write to PMI_fd
[cli_1]: write_line error; fd=10 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Abort(1090575) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(143):
MPID_Init(1221)......:
MPIR_pmi_init(130)...: PMI_Get_appnum returned -1
[cli_1]: write_line error; fd=10 buf=:cmd=abort exitcode=1090575
:
system msg for write_line failure : Bad file descriptor
Attempting to use an MPI routine before initializing MPICH
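 
For reference, the write_line failures on fd=9/fd=10 suggest that the PMI file descriptor opened by mpirun's Hydra launcher is not visible inside the container. A heavily hedged sketch of forwarding the extra descriptors with podman's --preserve-fds option (the value 7, covering fds 3-9, is an assumption and not a verified fix from this thread):

$ mpirun -np 2 podman run --env-host --env-file envfile --preserve-fds=7 --userns=keep-id --network=host --pid=host --ipc=host -v .:/export:z 273b79f14177 /export/mpitest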
 
 
Thanks & Regards,
GHui
 
HemanthCH_Intel
Moderator

Hi,


Thank you for posting in Intel Communities.


Since you are facing issues responding to the existing post in the Intel Communities, we have contacted you privately. Please check your inbox and respond back to us.


Thanks & Regards,

Hemanth.


HemanthCH_Intel
Moderator

Hi,


As we are communicating internally through email, and since this is a duplicate of https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Re-Intel-oneAPI-with-Podman/m-p/1394023#M9611, we will no longer monitor this thread. We will continue addressing this issue in the other thread.


Thanks & Regards,

Hemanth

