Hi,
We have a little cluster:
- 1 master
- 8 nodes with 12 cores per node
- InfiniBand ConnectX DDR 2X
- Linux CentOS 5.5
- InfiniBand stack from OFED
- Intel MPI
At the moment we have a plain default install of the InfiniBand packages, so we are in connected mode with an MTU of ~65k. Does anybody have info, documentation, or links on tuning the MTU? We are using the cluster for Computational Fluid Dynamics computations.
Thx in advance,
Best regards,
Guillaume
4 replies
Hi Guillaume,
It seems to me that you'd better ask this question on a forum dedicated to CFD; maybe people there can help you with MTU tuning. Here you are rather asking Intel engineers familiar with an Intel product.
You can use the Intel MPI Library with its default settings; that should be OK. But to be sure you are using the fast fabric, you can set I_MPI_FALLBACK=0 and I_MPI_FABRICS=shm:dapl
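For reference, a minimal sketch of how these two variables would be set in a job script, per the advice above. Both are documented Intel MPI environment variables; the values are exactly the ones suggested in this thread.

```shell
# Pin Intel MPI to the fast fabric and forbid silent fallback.
export I_MPI_FALLBACK=0        # fail instead of falling back to slower fabrics (e.g. sockets)
export I_MPI_FABRICS=shm:dapl  # shared memory intra-node, DAPL (InfiniBand) inter-node

# Sanity check before launching the job:
echo "fallback=$I_MPI_FALLBACK fabrics=$I_MPI_FABRICS"
```

With I_MPI_FALLBACK=0, a misconfigured InfiniBand stack shows up as an immediate launch failure rather than a silently slow run.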
Regards!
Dmitry
Hi Dmitry,
A lot of CFD engineers are using Intel MPI... so perhaps one or two will look at this thread ;)
Thx for your answer.
regards
Guillaume
Hi Dmitry,
I have another question: I'm using the OFED stack for InfiniBand. Which I_MPI_FABRICS should I choose? shm:dapl or shm:ofa?
Thx,
best regards,
Guillaume
Hi Guillaume,
You can use any.
The OFA fabric additionally gives you the multi-rail feature (if you have two IB cards per node, or cards with two ports). Just compare the performance of dapl and ofa with your application and use the faster one.
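The "compare and pick the faster one" step could be sketched like this. The -genv syntax is Intel MPI's documented way to pass environment variables to all ranks; the benchmark binary (IMB-MPI1, from the Intel MPI Benchmarks) and the process counts are assumptions matching the 8-node, 12-cores-per-node cluster described above, so substitute your own CFD application. The commands are printed as a dry run; drop the leading echo to actually execute them.

```shell
# Run the same benchmark once per candidate fabric and compare timings.
for fabric in shm:dapl shm:ofa; do
    echo mpirun -genv I_MPI_FALLBACK 0 -genv I_MPI_FABRICS "$fabric" \
         -n 96 -ppn 12 ./IMB-MPI1 PingPong Allreduce
done
```

PingPong shows raw latency/bandwidth differences between the fabrics; a collective such as Allreduce is usually closer to what a CFD solver stresses.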
Regards!
Dmitry