Hi,
I am trying to run an MPI application using Parallel Studio 2018 on CentOS 7. I have the following two interfaces: eth0 and ib0.
I would like to use the eth0 interface. Is there a way to force mpirun to use eth0 only (or to exclude ib0)?
Here is what I am using (and hoping that eth0 is being used):
export I_MPI_FALLBACK=0
export I_MPI_FABRICS=shm:tcp
Please let me know if more information is required from my end.
Hi Puneet,
To select a particular network interface, use -iface <net_interface> with mpirun.
If you want to use a TCP/IP-capable network fabric, set the environment variable I_MPI_OFI_PROVIDER=tcp and then I_MPI_FABRICS=shm:ofi.
If you launch an mpirun job across different nodes, make sure the network interface you select is on the same network as the nodes you specified; otherwise it will fail with an error like "unable to find interface <name>".
So to select the network interface you can use the following command:
$ I_MPI_OFI_PROVIDER=tcp I_MPI_DEBUG=4 mpirun -genv I_MPI_FABRICS=shm:ofi -iface eth0 -n <no> -ppn <no> -f hostfile ./executable
For more details, please refer to the links below.
Warm Regards,
Abhishek
I tried using the following variables:
export I_MPI_FALLBACK=0
#export I_MPI_FABRICS=shm:tcp
export FI_SOCKETS_IFACE=eth0
export I_MPI_HYDRA_IFACE=eth0
export I_MPI_DEBUG=4
along with:
mpirun -iface eth0 -np $NTASKS -ppn $NTASKS_PER_NODE ./app.sh
but I saw these messages:
[51] DAPL startup(): trying to open DAPL provider from I_MPI_DAPL_PROVIDER: ofa-v2-mlx5_0-1u
[50] MPI startup(): DAPL provider ofa-v2-mlx5_0-1u
[50] MPI startup(): shm and dapl data transfer modes
I guess this means that eth0 was not used.
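Note that I_MPI_FABRICS=shm:tcp is commented out in the settings above, so Intel MPI 2018 fell back to its default DAPL selection, which is why the debug output mentions the Mellanox DAPL provider rather than TCP. A minimal sketch of settings that should force TCP over the Ethernet interface on Intel MPI 2018 (the netmask value is an assumption to verify against your cluster's I_MPI_DEBUG output):

```shell
# Select the TCP fabric explicitly (note: not commented out),
# so the default shm:dapl selection never kicks in.
export I_MPI_FABRICS=shm:tcp
export I_MPI_FALLBACK=0          # fail loudly instead of silently falling back
# Restrict the tcp fabric to Ethernet-class interfaces (eth*);
# an explicit network address such as 192.168.1.0/24 also works here.
export I_MPI_TCP_NETMASK=eth
export I_MPI_DEBUG=4             # print which fabric was actually chosen

# -iface makes the Hydra process manager use eth0 as well.
mpirun -iface eth0 -np $NTASKS -ppn $NTASKS_PER_NODE ./app.sh
```

With these settings the MPI startup lines should report tcp (not dapl) data transfer modes; if DAPL still appears, the variables are not reaching the MPI processes.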
Hi,
We have not heard back from you. Please let us know whether the provided details helped you solve your issue.
Warm Regards,
Abhishek
Thank you for the reply.
I am in the process of testing the suggestion and will update you soon.
Thank you, Puneet.
Please update us as soon as you have tried it.
Warm Regards,
Abhishek
Hi,
Please give us an update on your issue. Let us know if the provided solution helped you.
Warm Regards,
Abhishek
Hi Puneet,
We are assuming that the solution provided helped and will no longer be monitoring this issue. Please raise a new thread if you have further questions.
Warm Regards,
Abhishek