Hi everyone,
I have a simple issue, which must have a solution. Is it possible to assign several MPI processes to several nodes such that the first MPI process occupies a full node, while the other MPI processes are distributed across the cores of the other nodes?
I have an example below:
On a cluster with 4 cores per node, to assign 2 MPI processes to 2 nodes I do the following:
#PBS -l nodes=2:ppn=4
mpirun -pernode -np 2 ./hybprog
The question is how to assign 8 MPI processes to 3 nodes such that the first MPI process occupies the first node, while the other 7 MPI processes are distributed across 7 cores of the other two nodes.
Best Regards,
Dmitry
This is not the best forum for such a question. I suspect you will need to enumerate the hosts explicitly in your mpirun or mpiexec command, and you should also ask your site's PBS expert.
If you want to ask specifically about your version of PBS in combination with a specific implementation of mpiexec or mpirun, you might start with an appropriate FAQ, for example:
- OSU mpiexec: https://www.osc.edu/~djohnson/mpiexec/faq.php
- Open MPI: https://www.open-mpi.org/faq/?category=tm
There are also help forums associated with those. Questions about Intel MPI can be asked on the companion HPC/cluster forum.
I am not too familiar with PBS, but I did something similar with SLURM several years ago. I learned how to do this by reading the SLURM documentation, where I found that it is possible to assign MPI processes to sockets and cores within specific nodes. I did have to work with the SLURM admins to enable a few settings that had not been configured initially. Hopefully you can do this with PBS as well.
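For illustration, one way to express such a per-rank node placement in SLURM is the "arbitrary" task distribution driven by a host list, roughly like this (the node names are placeholders, and the exact options depend on the SLURM version and site configuration):
#SBATCH --nodes=3
#SBATCH --ntasks=8
hosts.txt (one line per MPI rank: rank 0 on node1, ranks 1-7 on node2 and node3):
node1
node2
node2
node2
node2
node3
node3
node3
export SLURM_HOSTFILE=$PWD/hosts.txt
srun --ntasks=8 --distribution=arbitrary ./hybprog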
Regards,
-- Rashawn
It might be of interest to others, so I am posting the solution below:
create a hostfile and a rankfile with an explicit distribution of the processes among the cores of the available cluster nodes.
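For example, with Open MPI's mpirun it can look roughly like the sketch below (the node names node1-node3 are placeholders, and the exact hostfile/rankfile syntax and option names depend on the MPI implementation and version):
#PBS -l nodes=3:ppn=4
hostfile:
node1 slots=4
node2 slots=4
node3 slots=4
rankfile (rank 0 alone on node1, ranks 1-7 on seven cores of node2 and node3):
rank 0=node1 slot=0
rank 1=node2 slot=0
rank 2=node2 slot=1
rank 3=node2 slot=2
rank 4=node2 slot=3
rank 5=node3 slot=0
rank 6=node3 slot=1
rank 7=node3 slot=2
mpirun -np 8 --hostfile hostfile --rankfile rankfile ./hybprog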
Best regards,
Dmitry
