
mpiexec.hydra legacy: suppressing "environment variable is not supported" warnings

Abo__Gavin
Beginner

Hi,

I downloaded a 30-day trial to try out the new 2019 release. Currently, I'm using version_info.c as a test case, which is from the Intel webpage:

https://software.intel.com/en-us/articles/using-intelr-mpi-library-50-with-mpich3-based-applications
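The test is essentially a hello-world that prints the MPI library version string. A minimal equivalent (my own sketch using the standard MPI-3 MPI_Get_library_version() call, not necessarily the article's exact code) looks like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char version[MPI_MAX_LIBRARY_VERSION_STRING];
    int len, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* MPI_Get_library_version() is standard MPI-3 API; it fills in the
     * implementation's version string. */
    MPI_Get_library_version(version, &len);
    if (rank == 0)  /* print once, matching the single output line below */
        printf("Hello world: MPI implementation:\n %s\n", version);
    MPI_Finalize();
    return 0;
}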

Is there an environment variable I can export, or something else I can set, to suppress warnings like "I_MPI_HYDRA_UUID environment variable is not supported"? They appear with Update 4 (shown below) but not with Update 2. I also experienced a memory error (i.e. core dump) similar to the segmentation fault reported for the non-legacy mpiexec.hydra in the post at:

https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/807359

Thanks,

Gavin

Update 2

username@computername:~/Desktop/test$ source /opt/intel/parallel_studio_xe_2019.2.057/compilers_and_libraries_2019/linux/bin/compilervars.sh intel64
username@computername:~/Desktop/test$ mpiicc -v
mpiicc for the Intel(R) MPI Library 2019 Update 2 for Linux*
Copyright 2003-2019, Intel Corporation.
icc version 19.0.2.187 (gcc version 7.4.0 compatibility)
username@computername:~/Desktop/test$ ls -l
total 4
-rw-r--r-- 1 username username 373 May 29 21:46 version_info.c
username@computername:~/Desktop/test$ mpiicc ./version_info.c -o version_info
username@computername:~/Desktop/test$ mpiexec.hydra -n 2 ./version_info
Hello world: MPI implementation:
 Intel(R) MPI Library 2019 Update 2 for Linux* OS

Update 4

username@computername:~/Desktop/test$ source /opt/intel/parallel_studio_xe_2019.4.070/compilers_and_libraries_2019/linux/bin/compilervars.sh intel64
username@computername:~/Desktop/test$ mpiicc -v
mpiicc for the Intel(R) MPI Library 2019 Update 4 for Linux*
Copyright 2003-2019, Intel Corporation.
icc version 19.0.4.243 (gcc version 7.4.0 compatibility)
username@computername:~/Desktop/test$ ls -l
total 4
-rw-r--r-- 1 username username 373 May 29 21:46 version_info.c
username@computername:~/Desktop/test$ mpiicc ./version_info.c -o version_info
username@computername:~/Desktop/test$ mpiexec.hydra -n 2 ./version_info
Floating point exception (core dumped)
username@computername:~/Desktop/test$ export PATH=${I_MPI_ROOT}/intel64/bin/legacy:${PATH}
username@computername:~/Desktop/test$ mpiexec.hydra -n 2 ./version_info
[0] MPI startup(): I_MPI_HYDRA_UUID environment variable is not supported.
[0] MPI startup(): Similar variables:
     I_MPI_HYDRA_ENV
     I_MPI_HYDRA_RMK
[0] MPI startup(): I_MPI_PM environment variable is not supported.
[0] MPI startup(): Similar variables:
     I_MPI_PMI_LIBRARY
[0] MPI startup(): I_MPI_RANK_CMD environment variable is not supported.
[0] MPI startup(): I_MPI_CMD environment variable is not supported.
[0] MPI startup(): To check the list of supported variables, use the impi_info utility or refer to https://software.intel.com/en-us/mpi-library/documentation/get-started.
Hello world: MPI implementation:
 Intel(R) MPI Library 2019 Update 4 for Linux* OS

Anatoliy_R_Intel
Employee

Hi, Gavin.

Yes, there is an I_MPI_VAR_CHECK_SPELLING=0 variable that will suppress those warnings.

Could you run the command lines below and provide the full output so I can investigate the floating point exception issue?

1. mpiexec.hydra -v -n 2 hostname

2. I_MPI_HYDRA_TOPOLIB=ipl mpiexec.hydra -n 2 hostname

3. HYDRA_BSTRAP_XTERM=1 mpiexec.hydra -n 2 hostname. After that you will see xterm windows with gdb launched. Type `run` in each window and you will see the Floating point exception in one of the windows. Then type `bt` to show the backtrace. Please send me that backtrace.

--

Best regards, Anatoliy.

Abo__Gavin
Beginner

Anatoliy,

The requested output is below.

1.

username@computername:~/Desktop/test$ mpiexec.hydra -v -n 2 hostname
Floating point exception (core dumped)

2.

username@computername:~/Desktop/test$ I_MPI_HYDRA_TOPOLIB=ipl mpiexec.hydra -n 2 hostname
Floating point exception (core dumped)

3. No xterm windows launch and it gives:

username@computername:~/Desktop/test$ HYDRA_BSTRAP_XTERM=1 mpiexec.hydra -n 2 hostname
Floating point exception (core dumped)

The debugger gdb says it cannot find a stack:

username@computername:~/Desktop/test$ export I_MPI_HYDRA_TOPOLIB=ipl
username@computername:~/Desktop/test$ gdb mpiexec.hydra -n 2 hostname
Excess command line arguments ignored. (hostname)
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mpiexec.hydra...done.
Attaching to program: /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/bin/mpiexec.hydra, process 2
Could not attach to process.  If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user.  For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
/home/username/Desktop/test/2: No such file or directory.
(gdb) bt
No stack.

Thanks, the environment variable works to suppress the messages:

username@computername:~/Desktop/test$ export I_MPI_VAR_CHECK_SPELLING=0
username@computername:~/Desktop/test$ mpiexec.hydra -n 2 ./version_info
Hello world: MPI implementation:
 Intel(R) MPI Library 2019 Update 4 for Linux* OS

Best Regards,

Gavin

Maksim_B_Intel
Employee

You need to run

gdb --args mpiexec.hydra -n 2 hostname

then type

run

at the gdb prompt, and use the bt command after you get the crash.

Abo__Gavin
Beginner

The requested output:

username@computername:~/Desktop/test$ gdb --args mpiexec.hydra -n 2 hostname
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mpiexec.hydra...done.
(gdb) r
Starting program: /opt/intel/compilers_and_libraries_2019.4.243/linux/mpi/intel64/bin/mpiexec.hydra -n 2 hostname
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Program received signal SIGFPE, Arithmetic exception.
IPL_MAX_CORE_per_package ()
    at ../../../../../src/pm/i_hydra/../../intel/ipl/include/../src/ipl_processor.c:336
336    ../../../../../src/pm/i_hydra/../../intel/ipl/include/../src/ipl_processor.c: No such file or directory.
(gdb) bt
#0  IPL_MAX_CORE_per_package ()
    at ../../../../../src/pm/i_hydra/../../intel/ipl/include/../src/ipl_processor.c:336
#1  ipl_processor_info (info=0x700f01, pid=0x0, detect_platform_only=0)
    at ../../../../../src/pm/i_hydra/../../intel/ipl/include/../src/ipl_processor.c:1901
#2  0x000000000044e682 in ipl_entrance (detect_platform_only=7343873)
    at ../../../../../src/pm/i_hydra/../../intel/ipl/include/../src/ipl_main.c:19
#3  0x0000000000422651 in i_read_default_env ()
    at ../../../../../src/pm/i_hydra/mpiexec/intel/i_mpiexec_params.h:239
#4  0x000000000042010e in mpiexec_get_parameters (t_argv=0x700f01)
    at ../../../../../src/pm/i_hydra/mpiexec/mpiexec_params.c:1350
#5  0x0000000000404a7d in main (argc=7343873, argv=0x0)
    at ../../../../../src/pm/i_hydra/mpiexec/mpiexec.c:1718
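For context, the shell message "Floating point exception" (SIGFPE) on x86 is typically raised by an integer division by zero rather than a floating-point error; given that the crashing frame is IPL_MAX_CORE_per_package(), one plausible cause is a topology-derived core count that comes back as zero on this CPU. A standalone sketch of that failure mode (purely illustrative, with hypothetical values; this is not Intel's source code):

#include <stdio.h>

int main(void)
{
    /* Hypothetical values: suppose topology detection derives a
     * cores-per-package count and unexpectedly gets zero on some CPUs. */
    volatile int logical_cpus = 4;
    volatile int cores_per_package = 0;

    /* Integer division by zero raises SIGFPE on x86; the shell then
     * reports "Floating point exception (core dumped)". */
    printf("%d\n", logical_cpus / cores_per_package);
    return 0;
}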

Anatoliy_R_Intel
Employee

Thank you for the backtrace. I have created an internal bug ticket for this issue. 

--

Best regards, Anatoliy.

Anatoliy_R_Intel
Employee

Hi, Gavin.

Could you also run lscpu?

--

Best regards, Anatoliy.

Abo__Gavin
Beginner

If this is a processor-specific issue, I realize my processor is modest compared to those used in the high-performance computing clusters that mpiexec.hydra targets, so I can understand if it is not cost-effective to fix. Here is the output:

username@computername:~/Desktop/test$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          22
Model:               0
Model name:          AMD A4-5000 APU with Radeon(TM) HD Graphics
Stepping:            1
CPU MHz:             1054.792
CPU max MHz:         1500.0000
CPU min MHz:         800.0000
BogoMIPS:            2994.15
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            2048K
NUMA node0 CPU(s):   0-3
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt topoext perfctr_nb bpext perfctr_llc hw_pstate proc_feedback ssbd vmmcall bmi1 xsaveopt arat npt lbrv svm_lock nrip_save tsc_scale flushbyasid decodeassists pausefilter pfthreshold overflow_recov

Anatoliy_R_Intel
Employee

Thank you, Gavin. I guess it will be fixed in one of the next releases.

--

Best regards, Anatoliy.

Abo__Gavin
Beginner

Anatoliy,

Sounds good.

Thanks,

Gavin
