Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

How to pin application on the second processor?

Pavel_Mezentsev
Beginner

I've got a machine with 2 processors, 8 cores each, which gives me a total of 16 physical cores. I want to launch an application on the second processor, cores 8-15. The application uses one MPI process and 8 OpenMP threads.

The documentation suggests using I_MPI_PIN_DOMAIN to control thread distribution. The value omp:compact pinned all the threads to the first processor, but I didn't manage to find a way to move them to the second one.

I have also tried launching the program without any MPI pinning options and using numactl instead. I've tried numactl both on mpiexec.hydra and on the application itself, but the threads seem to ignore numactl.

So is there a way to solve my problem? Also, is there a way to specify which cores can be used by each process?
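For reference, my current launch looks roughly like this (the binary name is just a placeholder):

[plain]export OMP_NUM_THREADS=8
export I_MPI_PIN_DOMAIN=omp:compact
mpiexec.hydra -n 1 ./my_hybrid_app
[/plain]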

James_T_Intel
Moderator
Hi Pavel,

Have you tried using I_MPI_PIN_PROCESSOR_LIST?  This should allow you to specify by core number.  If your second processor is numbered with cores 8-15, then I_MPI_PIN_PROCESSOR_LIST=8-15 should pin the process to those cores.  I would recommend using cpuinfo to check the core numbering, as it isn't always sequential.
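For example, something along these lines (the application name is just a placeholder):

[plain]cpuinfo                                  # check how the cores on the second package are numbered
export I_MPI_PIN_PROCESSOR_LIST=8-15
mpiexec.hydra -n 1 ./your_app
[/plain]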

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools
Pavel_Mezentsev
Beginner
Yeah, but it doesn't work for OpenMP threads: no matter how many cores I specify, all the threads end up pinned to the single core allocated for the process.
Maybe there is an option that allows specifying how many cores are allocated per process, like --cpus-per-proc in Open MPI?
TimP
Honored Contributor III
I suspect that if you wish to use numactl or taskset, you must set I_MPI_PIN=off.  In the past, we used mpiexec .... taskset .... app.....

The suggestion about setting separate PROCESSOR_LIST strings should work with the -env option of mpiexec, or inside a script.

If you have a cluster resource manager that allows a job to request a subset of the cores on a node, that would seem to be a solution.  As far as I know, after some debate, the choice of resource manager has been left entirely up to the cluster provider and sysadmins.
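A rough sketch of the numactl route, assuming the second package is NUMA node 1 on your machine (check with numactl --hardware):

[plain]export I_MPI_PIN=off
export OMP_NUM_THREADS=8
mpiexec.hydra -n 1 numactl --cpunodebind=1 --membind=1 ./your_app
[/plain]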
James_T_Intel
Moderator
Hi Pavel,

Everything is controlled through the process pinning mechanism.  And yes, you will need to use I_MPI_PIN_DOMAIN, as this allows a process to be pinned to multiple cores.  Using the masklist option will let you specify which cores are available to each process.  In your case, try I_MPI_PIN_DOMAIN=[FF00].
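For instance, assuming the hex mask FF00 covers logical CPUs 8-15 on your system (the application name is a placeholder):

[plain]export I_MPI_PIN_DOMAIN="[FF00]"
export OMP_NUM_THREADS=8
mpiexec.hydra -n 1 ./your_app
[/plain]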

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools
Pavel_Mezentsev
Beginner
Yep, explicitly turning off I_MPI_PIN did the trick. I missed that the default was on. Thank you!
How would the PROCESSOR_LIST option work for my case? Could you give an example, please? As far as I understand, it requires I_MPI_PIN to be on, which excludes the use of numactl/taskset; but to specify cores for the threads I would need the I_MPI_PIN_DOMAIN option, which would ignore PROCESSOR_LIST...
And how would the script approach work? Something like this:
mpiexec ./script
where script contains:
export I_MPI_PIN_PROCESSOR_LIST=$P
./app
Right? But how would I determine P? It should depend on the rank; is there any way to find out the rank?
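Or, with I_MPI_PIN=off as Tim suggested, the wrapper could call taskset directly. Something along these lines is what I have in mind, assuming the launcher exports the rank in an environment variable such as PMI_RANK (I am not sure that is the right variable):

[plain]#!/bin/sh
# wrapper started by mpiexec with I_MPI_PIN=off; bind each rank to its own core range
if [ "$PMI_RANK" -eq 0 ]; then
    CORES=0-7
else
    CORES=8-15
fi
exec taskset -c $CORES ./app
[/plain]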
Pavel_Mezentsev
Beginner
Thank you, that mask worked!
As far as I understand, I can specify several masks, one for each subset of the cores on the node, and the order in which processes are pinned to them, right? In my case [FF00,00FF] would give me two domains: the second and the first processor, in that order.
And what should I do in the case of heterogeneous nodes? For example, I have some nodes with 16 cores and some with 8 cores, and I would like the domains on the 16-core nodes to be twice as big as those on the 8-core nodes. Is there a way to say: if num_cores == 16: domain=[FF00,00FF]; else domain=[F0,0F]?
James_T_Intel
Moderator
Hi Pavel,

That depends on whether you're on Windows* or Linux*.  On Linux* you can use something like this:

[plain]mpirun -n 2 -host 16corehost -env I_MPI_PIN_DOMAIN [FF00,00FF] ./a.out : -n 2 -host 8corehost -env I_MPI_PIN_DOMAIN [F0,0F] ./a.out[/plain]

That should get the behavior you are seeking.  If you are on Windows* (and this also works on Linux*), you will need to use a configuration file with a similar setup, using a different I_MPI_PIN_DOMAIN for each host type.
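A configuration-file version of the same launch might look roughly like this (hostnames are placeholders):

[plain]# config.txt -- one set of mpiexec arguments per line
-n 2 -host 16corehost -env I_MPI_PIN_DOMAIN [FF00,00FF] ./a.out
-n 2 -host 8corehost -env I_MPI_PIN_DOMAIN [F0,0F] ./a.out
[/plain]

launched with:

[plain]mpiexec -configfile config.txt[/plain]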

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools