Intel® MPI Library

mpirun failed for oneAPI on Linux Mint

Camps
Beginner

Hello.

I recently installed the Intel oneAPI HPC Toolkit on my Linux Mint box.

(Intel(R) MPI Library, Version 2021.13 Build 20240701 (id: 179630a))

I sourced the environment variables with:

source /opt/intel/oneapi/setvars.sh
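
(For completeness: as far as I understand, setvars.sh should leave I_MPI_ROOT and FI_PROVIDER_PATH pointing into /opt/intel/oneapi, so a quick sanity check would be something like the commands below. I can post that output as well if it helps.)

which mpirun
echo $I_MPI_ROOT
echo $FI_PROVIDER_PATH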

Then I compiled the GULP program.

I ran an example using:

mpirun -n 2 gulp example1

and got the errors:

Abort(2139023) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Unknown error class, error stack:
MPIR_Init_thread(192)........: 
MPID_Init(1665)..............: 
MPIDI_OFI_mpi_init_hook(1625): 
open_fabric(2726)............: 
find_provider(2904)..........: OFI fi_getinfo() failed (ofi_init.c:2904:find_provider:No data available)
Abort(2139023) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Unknown error class, error stack:
MPIR_Init_thread(192)........: 
MPID_Init(1665)..............: 
MPIDI_OFI_mpi_init_hook(1625): 
open_fabric(2726)............: 
find_provider(2904)..........: OFI fi_getinfo() failed (ofi_init.c:2904:find_provider:No data available)

Then I ran it as:

mpirun -np 2 gulp example1

and the error was:

Abort(2139023) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Unknown error class, error stack:
MPIR_Init_thread(192)........: 
MPID_Init(1665)..............: 
MPIDI_OFI_mpi_init_hook(1625): 
open_fabric(2726)............: 
find_provider(2904)..........: OFI fi_getinfo() failed (ofi_init.c:2904:find_provider:No data available)
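
(In case it is relevant: from what I have read, this fi_getinfo() failure means libfabric could not find a usable provider, and a possible single-node workaround is to force one explicitly, for example with FI_PROVIDER=tcp, or I_MPI_FABRICS=shm for shared memory only on one node. I have not verified that this is the right fix, though.)

FI_PROVIDER=tcp mpirun -n 2 gulp example1
I_MPI_FABRICS=shm mpirun -n 2 gulp example1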

 

The command mpirun --version returned:

[mpiexec@VARADERO-LIN] match_arg (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:82): unrecognized argument version
[mpiexec@VARADERO-LIN] Similar arguments:
[mpiexec@VARADERO-LIN] 	 version
[mpiexec@VARADERO-LIN] HYD_arg_parse_array (../../../../../src/pm/i_hydra/libhydra/arg/hydra_arg.c:106): argument matching returned error
[mpiexec@VARADERO-LIN] mpiexec_get_parameters (../../../../../src/pm/i_hydra/mpiexec/mpiexec_params.c:1190): error parsing input array
[mpiexec@VARADERO-LIN] main (../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1725): error parsing parameters
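
(Side note: Hydra lists version itself as a similar argument, so I suspect the pasted flag picked up an invisible character; retyping the flag by hand should, I assume, print the usual version banner.)

mpirun --version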

and mpirun -info returned:

HYDRA build details:
    Version:                                 2021.13
    Release Date:                            20240701 (id: 179630a)
    Process Manager:                         pmi
    Bootstrap servers available:             ssh slurm rsh ll sge pbs pbsdsh pdsh srun lsf blaunch qrsh fork
    Resource management kernels available:   slurm ll lsf sge pbs cobalt

All the calculations are run on my notebook:

OS: Linux Mint 22 x86_64 
Host: Dell G15 5530 
Kernel: 6.8.0-40-generic 
Uptime: 2 hours, 21 mins 
Packages: 3115 (dpkg), 7 (flatpak) 
Shell: bash 5.2.21 
Resolution: 1920x1080 
DE: Cinnamon 6.2.9 
WM: Mutter (Muffin) 
WM Theme: Mint-L-Dark (Mint-Y) 
Theme: Mint-L-Dark [GTK2/3] 
Icons: Mint-L [GTK2/3] 
Terminal: gnome-terminal 
CPU: 13th Gen Intel i7-13650HX (20) @ 4.700GHz 
GPU: Intel Raptor Lake-S UHD Graphics 
GPU: NVIDIA GeForce RTX 4050 Max-Q / Mobile 
Memory: 11651MiB / 31778MiB 

lscpu returned:

Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               20
On-line CPU(s) list:                  0-19
Vendor ID:                            GenuineIntel
Model name:                           13th Gen Intel(R) Core(TM) i7-13650HX
CPU family:                           6
Model:                                183
Thread(s) per core:                   2
Core(s) per socket:                   14
Socket(s):                            1
Stepping:                             1
CPU(s) scaling MHz:                   28%
CPU max MHz:                          4900.0000
CPU min MHz:                          800.0000
BogoMIPS:                             5606.40
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            544 KiB (14 instances)
L1i cache:                            704 KiB (14 instances)
L2 cache:                             11.5 MiB (8 instances)
L3 cache:                             24 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-19
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

and fi_info -l returned:

psm2:
    version: 120.10
mlx:
    version: 1.4
psm3:
    version: 706.0
verbs:
    version: 120.10
verbs:
    version: 120.10
ofi_rxm:
    version: 120.10
tcp:
    version: 120.10
shm:
    version: 120.10
ofi_hook_noop:
    version: 120.10
off_coll:
    version: 120.10
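
(If useful, I can also post the output of plain fi_info, or of fi_info for a specific provider, which as far as I know shows whether a provider can actually be opened on this machine rather than just being compiled in:)

fi_info -p shm
fi_info -p tcp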

 

Any help is highly appreciated.

2 Replies
TobiasK
Moderator

Can you please provide the output of the following?

I_MPI_DEBUG=10 I_MPI_HYDRA_DEBUG=1 mpirun -np 2 IMB-MPI1

Camps
Beginner

Here it is:

 

[mpiexec@VARADERO-LIN] Launch arguments: /opt/intel/oneapi/mpi/2021.13/bin//hydra_bstrap_proxy --upstream-host VARADERO-LIN --upstream-port 41533 --pgid 0 --launcher ssh --launcher-number 0 --base-path /opt/intel/oneapi/mpi/2021.13/bin/ --topolib hwloc --tree-width 16 --tree-level 1 --time-left -1 --launch-type 2 --debug --proxy-id 0 --node-id 0 --subtree-size 1 --upstream-fd 7 /opt/intel/oneapi/mpi/2021.13/bin//hydra_pmi_proxy --usize -1 --auto-cleanup 1 --abort-signal 9 
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=init pmi_version=1 pmi_subversion=1
[proxy:0:0@VARADERO-LIN] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=get_maxes
[proxy:0:0@VARADERO-LIN] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=4096
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=get_appnum
[proxy:0:0@VARADERO-LIN] PMI response: cmd=appnum appnum=0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=get_my_kvsname
[proxy:0:0@VARADERO-LIN] PMI response: cmd=my_kvsname kvsname=kvs_26985_0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=init pmi_version=1 pmi_subversion=1
[proxy:0:0@VARADERO-LIN] PMI response: cmd=response_to_init pmi_version=1 pmi_subversion=1 rc=0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=get_maxes
[proxy:0:0@VARADERO-LIN] PMI response: cmd=maxes kvsname_max=256 keylen_max=64 vallen_max=4096
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=get_appnum
[proxy:0:0@VARADERO-LIN] PMI response: cmd=appnum appnum=0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=get_my_kvsname
[proxy:0:0@VARADERO-LIN] PMI response: cmd=my_kvsname kvsname=kvs_26985_0
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=get kvsname=kvs_26985_0 key=PMI_process_mapping
[proxy:0:0@VARADERO-LIN] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,1,2))
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=get kvsname=kvs_26985_0 key=PMI_process_mapping
[proxy:0:0@VARADERO-LIN] PMI response: cmd=get_result rc=0 msg=success value=(vector,(0,1,2))
[0] MPI startup(): Intel(R) MPI Library, Version 2021.13  Build 20240701 (id: 179630a)
[0] MPI startup(): Copyright (C) 2003-2024 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=barrier_in
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=put kvsname=kvs_26985_0 key=-bcast-1-0 value=2F6465762F73686D2F496E74656C5F4D50495F716D514C4B66
[proxy:0:0@VARADERO-LIN] PMI response: cmd=put_result rc=0 msg=success
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=barrier_in
[proxy:0:0@VARADERO-LIN] PMI response: cmd=barrier_out
[proxy:0:0@VARADERO-LIN] PMI response: cmd=barrier_out
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=get kvsname=kvs_26985_0 key=-bcast-1-0
[proxy:0:0@VARADERO-LIN] PMI response: cmd=get_result rc=0 msg=success value=2F6465762F73686D2F496E74656C5F4D50495F716D514C4B66
[0] MPI startup(): libfabric loaded: libfabric.so.1 
[0] MPI startup(): libfabric version: 1.20.1-impi
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 6: cmd=abort exitcode=2139023
[proxy:0:0@VARADERO-LIN] pmi cmd from fd 9: cmd=abort exitcode=2139023
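
(If more detail would help, I can rerun this with libfabric logging enabled as well, assuming FI_LOG_LEVEL is the right knob:)

FI_LOG_LEVEL=warn I_MPI_DEBUG=10 I_MPI_HYDRA_DEBUG=1 mpirun -np 2 IMB-MPI1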

 
