Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Bad mpirun behaviour

Giacomo_R_
Beginner

I've installed Intel MPI and used mpi-selector to select it (it is installed in the impi folder under /opt/intel/), but when I launch a job with

mpirun -n 8 ./HEART_MR

where HEART_MR is the executable, I get 8 processes, each running on its own core, when I want a single run using 8 cores. If I use OpenMPI, everything behaves as I expect.

Can you help me?

My machine has an eight-core Intel Xeon processor.

Giacomo Rossi
PhD Student, Space Engineer
Mechanical and Aerospace Engineering Department
University of Rome "Sapienza"

James_T_Intel
Moderator
Hi Giacomo,

Using -n specifies the number of processes to launch. Is your application using threading? You can use I_MPI_PIN_DOMAIN to select which cores are available to each process.

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools
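For example, if HEART_MR were a hybrid MPI/OpenMP code, a launch giving one process a domain of all eight cores for its threads might look like this (a sketch, assuming an OpenMP build; I_MPI_PIN_DOMAIN=omp sizes each rank's core domain from OMP_NUM_THREADS):

export OMP_NUM_THREADS=8      # threads per MPI process
export I_MPI_PIN_DOMAIN=omp   # each rank gets a core domain sized by OMP_NUM_THREADS
mpirun -n 1 ./HEART_MR        # one rank, free to thread across all 8 cores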
Giacomo_R_
Beginner
Maybe I didn't explain my problem well: when I use mpirun -n 8 ./HEART_MR, I expect to launch a parallel job on eight cores, where each process knows how many processes are running and its own rank. Instead, I get eight processes that are independent of each other. In other words, my executable applies a domain decomposition over the number of cores I choose (when I use mpirun from OpenMPI). In the broken case (with Intel MPI), each process is assigned the whole computational domain.

Giacomo Rossi
PhD Student, Space Engineer
Mechanical and Aerospace Engineering Department
University of Rome "Sapienza"
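The pattern Giacomo describes looks roughly like the following minimal sketch (hypothetical program name and domain size): each rank uses its rank and the communicator size to claim one slice of the domain, so mpirun -n 8 should yield one job split eight ways rather than eight copies of the whole job.

program decomp_sketch
  include 'mpif.h'
  integer rank, nprocs, ierror
  integer, parameter :: ntotal = 800   ! hypothetical global cell count
  integer nlocal, istart, iend
  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
  ! each rank takes one contiguous slice of the global domain
  nlocal = ntotal / nprocs
  istart = rank * nlocal + 1
  iend   = istart + nlocal - 1
  print *, 'rank', rank, 'of', nprocs, 'owns cells', istart, 'to', iend
  call MPI_FINALIZE(ierror)
end program decomp_sketch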
James_T_Intel
Moderator
Hi Giacomo,

I see. Please send me the output from the following commands:

env | grep I_MPI
mpirun -n 8 ./hello

(hello can be any MPI "hello world" application.)

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools
Giacomo_R_
Beginner
Here are the results of the commands you requested:

nunzio@ALTAIR:~> mpirun -n 8 ./hello_world
Hello World :-)
Hello World :-)
Hello World :-)
Hello World :-)
Hello World :-)
Hello World :-)
Hello World :-)
Hello World :-)
nunzio@ALTAIR:~> env | grep I_MPI
I_MPI_ROOT=/opt/intel2/impi/4.1.0.024

Here is some additional information; this is a simple "hello world" program with MPI:

program hello
include 'mpif.h'
integer rank, size, ierror, tag, status(MPI_STATUS_SIZE)
call MPI_INIT(ierror)
call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
print*, 'node', rank, ': Hello world'
call MPI_FINALIZE(ierror)
end

and this is the result I obtain when launching mpirun -n 8 ./prova_mpi:

nunzio@ALTAIR:~> mpirun -n 8 ./prova_mpi
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world
node 0 : Hello world

Thank you!

Giacomo Rossi
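For comparison, with a correctly matched launcher and library each of the 8 ranks gets a distinct rank number, so the expected output would be (lines may appear in any order):

node 0 : Hello world
node 1 : Hello world
...
node 7 : Hello world

Every process printing rank 0 means each one believes it is a single-rank job of its own.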
James_T_Intel
Moderator
Are you running under a job scheduler?

Please try the following. There is a set of test programs included with Intel® MPI, in /opt/intel2/impi/4.1.0.024/test. Compile any of the files in that folder, and run it with:

mpirun -n 8 -verbose -genv I_MPI_DEBUG 5 ./a.out > output.txt

Please attach the output.txt file.

James.
Giacomo_R_
Beginner
Unfortunately I don't have this directory...

nunzio@ALTAIR:/opt/intel2/impi/4.1.0.024> lt
totale 152
-rw-r--r-- 1 root root  9398 31 ago 16.48 README.txt
-rw-r--r-- 1 root root 28556 31 ago 16.48 Doc_Index.html
-rw-r--r-- 1 root root  2770 31 ago 16.48 redist-rt.txt
-rw-r--r-- 1 root root 28728 31 ago 16.48 mpi-rtEULA.txt
-rw-r--r-- 1 root root   491  7 set 13.12 mpi-rtsupport.txt
-rwxr-xr-x 1 root root 41314  7 set 13.12 uninstall.sh
-rw-r--r-- 1 root root  3036  8 nov 11.51 uninstall.log
lrwxrwxrwx 1 root root     8  8 nov 11.55 etc -> ia32/etc
lrwxrwxrwx 1 root root     8  8 nov 11.55 bin -> ia32/bin
drwxr-xr-x 5 root root  4096  8 nov 11.55 ia32
lrwxrwxrwx 1 root root     8  8 nov 11.55 lib -> ia32/lib
drwxr-xr-x 3 root root  4096  8 nov 11.55 data
drwxr-xr-x 4 root root  4096  8 nov 11.55 doc
lrwxrwxrwx 1 root root    11  8 nov 11.55 etc64 -> intel64/etc
lrwxrwxrwx 1 root root    11  8 nov 11.55 bin64 -> intel64/bin
drwxr-xr-x 5 root root  4096  8 nov 11.55 intel64
lrwxrwxrwx 1 root root    11  8 nov 11.55 lib64 -> intel64/lib
drwxr-xr-x 5 root root  4096  8 nov 11.55 mic
-rw-r--r-- 1 root root   340  8 nov 11.55 impi.uninstall.config
James_T_Intel
Moderator
Do you have the full SDK, or only the runtime?
Giacomo_R_
Beginner
I have only the runtime.
James_T_Intel
Moderator
Ok, a program needs to be compiled with the Intel® MPI Library in order to run with the Intel® MPI Library. We have some binary compatibility with MPICH2, so you might also be able to use our MPI to run an MPICH2-compiled program.
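With the full SDK, the compiler wrappers handle this. A sketch, assuming the hello world source above is saved as hello.f90 (mpivars.sh and the mpiifort wrapper ship with the SDK; mpif90 is the corresponding wrapper for GNU Fortran):

source /opt/intel2/impi/4.1.0.024/bin64/mpivars.sh   # put Intel MPI on PATH
mpiifort hello.f90 -o prova_mpi                      # compile and link against Intel MPI
mpirun -n 8 ./prova_mpi                              # each rank should now report its own number, 0-7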