Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Intel oneAPI with Podman

ugiwgh
Beginner

I have run Intel MPI with Podman, and something is going wrong.

 

$ mpirun -np 1 podman run --env-host --env-file envfile --userns=keep-id --network=host --pid=host --ipc=host -w /exports -v .:/exports:z dptech/vasp:5.4.4 /exports/mpitest
[cli_0]: write_line error; fd=9 buf=:cmd=init pmi_version=1 pmi_subversion=1
:
system msg for write_line failure : Bad file descriptor
[cli_0]: Unable to write to PMI_fd
[cli_0]: write_line error; fd=9 buf=:cmd=get_appnum
:
system msg for write_line failure : Bad file descriptor
Abort(1090575) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(143):
MPID_Init(1221)......:
MPIR_pmi_init(130)...: PMI_Get_appnum returned -1
[cli_0]: write_line error; fd=9 buf=:cmd=abort exitcode=1090575
:
system msg for write_line failure : Bad file descriptor
Attempting to use an MPI routine before initializing MPICH
^C[mpiexec@ja0911.para.bscc] Sending Ctrl-C to processes as requested
[mpiexec@ja0911.para.bscc] Press Ctrl-C again to force abort
^C
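
The failing writes point at the PMI channel: the Hydra process manager sets up a PMI file descriptor for each rank and advertises it through the PMI_FD environment variable (fd=9 in the log above). --env-host forwards the variable into the container, but the descriptor itself is not inherited, so PMI_Init cannot reach the launcher and aborts. One possible workaround, a sketch rather than a verified fix, is to ask Podman to pass the low-numbered descriptors through with --preserve-fds:

# Sketch only: --preserve-fds=7 passes fds 3 through 9 (in addition to stdin/stdout/stderr)
# into the container, which covers the PMI channel on fd=9 reported above. The PMI fd can
# vary between runs, so check the PMI_FD value in the launched environment and adjust.
$ mpirun -np 1 podman run --env-host --env-file envfile --userns=keep-id \
    --network=host --pid=host --ipc=host --preserve-fds=7 \
    -w /exports -v .:/exports:z dptech/vasp:5.4.4 /exports/mpitest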

 

 

3 Replies
HemanthCH_Intel
Moderator

Hi,


Thank you for posting in Intel Communities.


Could you please provide the following details so we can investigate your issue further?

1) Operating system details.

2) Intel MPI version.

3) Podman version (the commands sketched below can help collect items 1 to 3).

4) Sample reproducer code and the steps to reproduce your issue at our end.
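
For reference, a sketch of commands that can gather items 1 to 3 (assuming the Intel MPI environment is already sourced so that mpirun resolves to the Intel MPI launcher):

$ cat /etc/os-release      # operating system details
$ mpirun --version         # Intel MPI Library version
$ podman --version         # Podman version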


Thanks & Regards,

Hemanth


HemanthCH_Intel
Moderator

Hi,


Since you are having trouble responding to the existing post in the Intel Communities, we have contacted you privately. Please check your inbox and respond to us there.


Thanks & Regards,

Hemanth


HemanthCH_Intel
Moderator

Hi,


As we are communicating internally through email, and since this is a duplicate of the thread at https://community.intel.com/t5/Intel-oneAPI-HPC-Toolkit/Re-Intel-oneAPI-with-Podman/m-p/1394023#M9611, we will no longer monitor this thread. We will continue addressing this issue in the other thread.


Thanks & Regards,

Hemanth

