Intel® MPI Library

Problem using the Intel Debugger with Intel MPI Library

roboto
Beginner
Hi,

I am trying to debug a code (vat_3d_59_debug) with the Intel Debugger under the Intel MPI Library, using the following command:

mpiexec -idb -genv MPIEXEC_DEBUG 2 -n 2 -host dragon ./vat_3d_59_debug

and I got the following error message:

mpiexec_dragon: mpd_uncaught_except_tb handling:
(type 'exceptions.ImportError'): /opt/intel/impi/4.0.0.028/intel64/bin/mtv.so: undefined symbol: Py_InitModule4
/opt/intel/impi/4.0.0.028/bin64/mpiexec 1196 mpiexec
import mtv
/opt/intel/impi/4.0.0.028/bin64/mpiexec 3205
mpiexec()

The environment variables associated with the Intel Cluster Toolkit Compiler Edition are set in my .bash_profile as follows:

export INTEL_LICENSE_FILE=/opt/intel/licenses
. /opt/intel/ictce/4.0.0.020/ictvars.sh
. /opt/intel/Compiler/11.1/072/bin/intel64/iccvars_intel64.sh
. /opt/intel/Compiler/11.1/072/bin/intel64/ifortvars_intel64.sh
. /opt/intel/Compiler/11.1/072/bin/intel64/idbvars.sh
export PATH=/opt/intel/impi/4.0.0.028/bin64:${PATH}
export PATH=/usr/lib64/python2.6:${PATH}
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/072/lib/intel64:${LD_LIBRARY_PATH}
export I_MPI_FABRICS=shm
export PATH=/home/roboto/multi_purpose_daemon:${PATH}
export JRE=/usr/lib64/jvm/java-1.6.0-openjdk-1.6.0/jre
export PATH=${PATH}:${JRE}/bin
. /opt/intel/Compiler/11.1/072/bin/iccvars.sh intel64
export IDB_HOME=/opt/intel/Compiler/11.1/072/bin/intel64
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.1/072/idb/lib/intel64:${LD_LIBRARY_PATH}

It is worth mentioning that the same code runs with no problems when mpiexec is invoked without the Intel Debugger:

mpiexec -n 2 -host dragon ./vat_3d_59_debug

In both cases (with and without the global -idb argument), the MPD ring was booted with:

mpdboot --totalnum=1 --rsh=ssh --ncpus=2

Regards,

Roboto
Gergana_S_Intel
Employee

Hi Roboto,

Most likely, what you're experiencing is a known issue when running the Intel Debugger under the Intel MPI Library daemons. The root cause is pretty simple: by default, IDB starts up in GUI mode on Linux. You just need to make sure you use the command-line interface instead. The following article describes the issue in more detail and provides a workaround.

I hope this helps.

Regards,
~Gergana

roboto
Beginner
Hi Gergana,

As the article in your post recommended, I renamed idb to idbgui and then made idb a symbolic link to idbc, the command-line IDB.
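Concretely, that amounted to something like the following (a sketch; the directory is the IDB_HOME from my .bash_profile above):

cd /opt/intel/Compiler/11.1/072/bin/intel64   # IDB_HOME
mv idb idbgui                                 # keep the GUI launcher under a new name
ln -s idbc idb                                # idb now resolves to the command-line debugger

So when I issued the command: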

mpiexec -idb -genv MPIEXEC_DEBUG 1 -n 2 -host dragon ./vat_3d_59_debug

mpiexec should start the program under idbc. Unfortunately, the error message is still the same as before:

mpiexec_dragon: mpd_uncaught_except_tb handling:
(type 'exceptions.ImportError'): /opt/intel/impi/4.0.0.028/intel64/bin/mtv.so: undefined symbol: Py_InitModule4
/opt/intel/impi/4.0.0.028/bin64/mpiexec 1196 mpiexec
import mtv
/opt/intel/impi/4.0.0.028/bin64/mpiexec 3205
mpiexec()

I also modified the .bash_profile to include the line:

export IDB_PARALLEL_SHELL=/usr/bin/ssh

since the idb documentation states that, by default, the debugger uses rsh to create the leaf debugger and aggregator processes in its tree structure. But I am still getting the same error message.

Thanks in advance,

Roboto
Dmitry_K_Intel2
Employee
Hi Roboto,

It looks like you are using Python 2.5 or newer, while we build against 2.4.x. (On 64-bit builds of Python 2.5 and later, Py_InitModule4 was renamed to Py_InitModule4_64, so an extension module built against 2.4 can no longer resolve the old symbol.)
Please read this article - it might help.
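To double-check which Python the MPD scripts pick up, something like the following should work (a quick sketch using the paths from your first post; the mpd-based mpiexec is itself a Python script, as the traceback shows):

python -V                                                                   # interpreter found first on PATH
head -1 /opt/intel/impi/4.0.0.028/bin64/mpiexec                             # shebang of the mpiexec script
nm -D /opt/intel/impi/4.0.0.028/intel64/bin/mtv.so | grep Py_InitModule    # symbol mtv.so expects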

Regards!
Dmitry
roboto
Beginner
Hi Dmitry,

I am actually using Python 2.6. Following your advice, I installed Python 2.4.4 and the problem is gone! I can finally run mpiexec under idb (really idbc, opened in an xterm). I want to thank you and Gergana for the very useful and timely technical advice.
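For anyone hitting the same problem, the change amounted to roughly the following (the install prefix is just my choice; any location that ends up first on PATH should work):

cd Python-2.4.4
./configure --prefix=$HOME/python-2.4.4    # build Python 2.4.4 from source
make && make install
export PATH=$HOME/python-2.4.4/bin:${PATH}   # MPD scripts now find python 2.4 first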

Best regards,

Roboto
Dmitry_K_Intel2
Employee
You are welcome, Roboto.

Regards!
Dmitry
Dmitry_K_Intel2
Employee
The issue with Py_InitModule4 has been fixed in Intel MPI Library version 4.0.1.
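You can tell which version is installed from the install directory name, for example:

ls /opt/intel/impi/    # version-named directories, e.g. 4.0.0.028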

Regards!
Dmitry