Intel® HPC Toolkit

Issue using Intel MPI and the OFED DAPL layer

Bernie_B_
Beginner
We are trying to run Intel MPI using the DAPL layer to test InfiniBand.
We are using the OFED software to create the DAPL layer. This is the original /etc/dat.conf generated by the software:

OpenIB-cma u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib0 0" ""
OpenIB-cma-1 u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib1 0" ""
OpenIB-cma-2 u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib2 0" ""
OpenIB-cma-3 u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib3 0" ""
OpenIB-bond u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "bond0 0" ""
ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib0 0" ""
ofa-v2-ib1 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib1 0" ""
ofa-v2-ib2 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib2 0" ""
ofa-v2-ib3 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib3 0" ""
ofa-v2-bond u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "bond0 0" ""

I then created a much shorter /etc/dat.conf that contains one device:

ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib0 0" ""
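
For reference, the fields in that entry break down like this (my reading of the OFED dat.conf format, so treat it as a sketch):

# name       api-ver  threading      default  provider-library  dapl-ver  "interface port"  platform-params
ofa-v2-ib0   u2.0     nonthreadsafe  default  libdaplofa.so.2   dapl.2.0  "ib0 0"           ""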

When running the program, it generates errors like:

[3] MPI startup(): DAPL provider
g> on rank 3:hsd766
[6] MPI startup(): DAPL provider
g> on rank 6:hsd766
[7] MPI startup(): DAPL provider
g> on rank 7:hsd766
[2] DAPL provider is not found and fallback device is not enabled
[1] DAPL provider is not found and fallback device is not enabled
[cli_2]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(264): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(183)..: generic failure with errno = -1
(unknown)():
[0] DAPL provider is not found and fallback device is not enabled
[4] DAPL provider is not found and fallback device is not enabled
[0] MPI startup(): Intel MPI Library, Version 3.1 Build 20080331
[0] MPI startup(): Copyright (C) 2003-2008 Intel Corporation. All rights reserved.
[5] DAPL provider is not found and fallback device is not enabled
[6] DAPL provider is not found and fallback device is not enabled
[3] DAPL provider is not found and fallback device is not enabled
[7] DAPL provider is not found and fallback device is not enabled
[cli_0]: aborting job:

Hope someone can help.

Bernie B.
the Boeing Company
Dmitry_K_Intel2
Employee
Quoting - Bernie_B_

I then created a much shorter /etc/dat.conf that contains one device:

ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib0 0" ""

When running the program, it generates errors like:

[3] MPI startup(): DAPL provider
g> on rank 3:hsd766
[6] MPI startup(): DAPL provider
g> on rank 6:hsd766


Hi Bernie,

Thanks for posting here.
Are you sure that your nodes have the same configuration and /etc/dat.conf is the same on all nodes?
You can try to run mpiexec (or mpirun) with "-genv I_MPI_DEBUG 2" option to get more debug information.
Also, you don't need to change dat.conf - just use "-genv I_MPI_DEVICE rdma:ofa-v2-ib0".
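
For example, a full command line might look like this (a sketch; "hostfile" and "./your_app" are placeholders for your own host list and executable):

mpirun -r ssh -f hostfile -genv I_MPI_DEVICE rdma:ofa-v2-ib0 -genv I_MPI_DEBUG 2 -n 8 ./your_app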

Are you able to start an MPI job with original dat.conf?

Best wishes,
Dmitry

Vanush__Misha__Patur

Quoting - Dmitry_K_Intel2


Hi Bernie, Dmitry,
I am seeing exactly the same behaviour. An attempt to specify the device as rdma:ofa-v2-ib0 fails with the same "DAPL provider on rank 1:othernodename" errors.
I'm using Intel MPI version 3.1.
My /etc/dat.conf is similar to the one Bernie reported and it is the same on all nodes.

Any advice will be highly appreciated!

Misha.
Vanush__Misha__Patur

Turns out the problem is the version of my Intel MPI library: it does not support DAPL v2.0, and v2.0 is the only DAPL I have installed. The following popped up in the log files after adding -env I_MPI_DEBUG 50 to my command line:

I_MPI_dat_ia_openv_wrap(): DAPL version compatibility requirement check failed; required DAPL 1.2, provided DAPL 2.0
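
For anyone hitting the same wall, a quick way to see which DAPL libraries are actually installed (assuming an RPM-based distro; adjust the library path for your system):

rpm -qa | grep -i dapl
ls -l /usr/lib64/libdapl*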
Gergana_S_Intel
Employee

Hi Misha,

DAPL 2.0 support was added in Intel MPI Library 3.2. I would suggest you upgrade by visiting the Intel Registration Center. Here are instructions on how to do that with a valid license. The latest version available is Intel MPI Library 3.2 Update 2.

Or, your other option is to simply select a provider that implements DAPL 1.2 rather than DAPL 2.0. Do you have any entries in your dat.conf which include either "u1.2" or "dapl.1.2"? If yes, you can go ahead and use those instead via the I_MPI_DEVICE env variable (as described in Dmitry's post), as in the sketch below.
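
For example, Bernie's original dat.conf contains a DAPL 1.2 provider that could be selected like this (a sketch; the provider name must match an entry in your own dat.conf, and "hostfile" and "./your_app" are placeholders):

# DAPL 1.2 entry from the original dat.conf
OpenIB-cma u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib0 0" ""

# select it at run time
mpirun -r ssh -f hostfile -genv I_MPI_DEVICE rdma:OpenIB-cma -genv I_MPI_DEBUG 2 -n 8 ./your_app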

Let us know how this helps.

Regards,
~Gergana

harrc
Beginner

Quoting - Gergana_S_Intel


Gergana, Dmitry,
I'm fairly new to Intel MPI, but I have received similar DAPL errors depending on what I try to run. If I specify a device of rdma:ofa-v2-ib0 or rdma:ofa-v2-mthca0-1, things fail, as shown below. If I just use a device such as rdma or rdssm, things work, but I'm not sure it's the right device: it reports RDMA, yet I'm suspicious because it says it's using "OpenIB-cma specified in DAPL configuration file /etc/dat.conf" even though that entry doesn't exist in my file. I've used dapltest successfully with both ofa-v2-ib0 and ofa-v2-mthca0-1. For MPI I want to use native IB (not IPoIB), so should I specify rdma:ofa-v2-mthca0-1, and what should I look at to tell whether it's really using native IB versus something else?

Using ofa-v2-ib0:
[n313 ~]$ mpirun -r ssh -f $PBS_NODEFILE -genv I_MPI_DEVICE rdma:ofa-v2-ib0 -genv I_MPI_DEBUG 2 -ppn 8 -n 2 ./ctest
[1] MPIDI_CH3I_SHM_recv_alarm_msg(): enable generic copy routine for short messages
[1] MPIDI_CH3_Init(): number of shm buffers = 16
[1] MPIDI_CH3_Init(): size of shm buffer = 16384
[1] MPIDI_CH3_Init(): size of shm buffer structure = 16400
[1] MPIDI_CH3_Init(): size of shm queue structure = 262408
[1] MPIDI_CH3_Init(): can not use fallback device
[1] MPIDI_CH3_Init(): failover flags = 0x5
[1] MPIDI_CH3_Init(): wait timeout = 0
[1] MPIDI_CH3I_RDMA_init(): entering
[0] MPIDI_CH3I_SHM_recv_alarm_msg(): enable generic copy routine for short messages
[0] MPIDI_CH3_Init(): number of shm buffers = 16
[0] MPIDI_CH3_Init(): size of shm buffer = 16384
[0] MPIDI_CH3_Init(): size of shm buffer structure = 16400
[0] MPIDI_CH3_Init(): size of shm queue structure = 262408
[0] MPIDI_CH3_Init(): can not use fallback device
[0] MPIDI_CH3_Init(): failover flags = 0x5
[0] MPIDI_CH3_Init(): wait timeout = 0
[0] MPIDI_CH3I_RDMA_init(): entering
[1] MPI startup(): DAPL provider on rank 1:n313
[1] MPIDI_CH3I_RDMA_init(): exiting
[1] DAPL provider is not found and fallback device is not enabled
rank 1 in job 1 n313_41274 caused collective abort of all ranks
exit status of rank 1: return code 13
[0] DAPL provider is not found and fallback device is not enabled
[cli_0]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[cli_1]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[0] MPIDI_CH3I_RDMA_init(): exiting
[0] MPID_Abort(): entering
[1] MPID_Abort(): entering


Snippet using rdma (successful):
[harrc@npp-n313 ~]$ mpirun -r ssh -f $PBS_NODEFILE -genv I_MPI_DEVICE rdma -genv I_MPI_DEBUG 2 -genv I_MPI_FALLBACK_DEVICE 0 -n 2 ./ctest
...
[0] MPIDI_CH3I_RDMA_init(): entering
...
[1] MPIDI_CH3I_RDMA_init(): entering
[0] MPI startup(): DAPL provider OpenIB-cma specified in DAPL configuration file /etc/dat.conf
[1] MPI startup(): DAPL provider OpenIB-cma specified in DAPL configuration file /etc/dat.conf
...
[0] MPIDI_CH3I_RDMA_init(): DAPL connection timeout = 4294 sec
[0] MPIDI_CH3I_RDMA_init(): DAPL disconnection timeout = 10 sec
...
[1] MPIDI_CH3I_RDMA_init(): DAPL connection timeout = 4294 sec
[1] MPIDI_CH3I_RDMA_init(): DAPL disconnection timeout = 10 sec
[1] MPIDI_CH3I_RDMA_init(): rdma_vbuf_total_size=16640
[1] MPIDI_CH3I_RDMA_init(): rdma_vbuf_threshold=16456
[1] MPIDI_CH3I_RDMA_init(): exiting
...
[0] MPI startup(): RDMA data transfer mode
...
[1] MPI startup(): RDMA data transfer mode


My /etc/dat.conf:
[n313 NOGAPS.intel-new]$ cat /etc/dat.conf
ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib0 0" ""
ofa-v2-ib1 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib1 0" ""
ofa-v2-mthca0-1 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "mthca0 1" ""
ofa-v2-mthca0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "mthca0 2" ""
ofa-v2-mlx4_0-1 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "mlx4_0 1" ""
ofa-v2-mlx4_0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "mlx4_0 2" ""
ofa-v2-ipath0-1 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ipath0 1" ""
ofa-v2-ipath0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ipath0 2" ""
ofa-v2-ehca0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ehca0 1" ""
ofa-v2-iwarp u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "eth2 0" ""

Thanks!
Cameron
Rafał_Błaszczyk
harrc:

Don't believe it when it says "OpenIB-cma specified in DAPL configuration file /etc/dat.conf". Intel MPI doesn't check which dat.conf DAPL really uses. DAPL first reads the dat.conf named in the DAT_OVERRIDE environment variable, then falls back to the one specified at compile time. If it says it's using a different provider, it's probably reading a different dat.conf (try: find /etc -name dat.conf).
Looking at your /etc/dat.conf, it appears to be for DAPL v2 only. In my experience, Intel MPI first tries to use DAPL v1 to read dat.conf, so you should have another dat.conf from DAPL v1 (the dapl-compat package in RH).
My suggestion is to set export DAT_OVERRIDE=/etc/dat.conf and create a dat.conf containing only the providers you want to use (you can mix DAPL v1 and v2 providers in one file), for example:
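
A minimal sketch, reusing the provider lines already posted in this thread (adjust the interface names to match your system):

export DAT_OVERRIDE=/etc/dat.conf

# contents of /etc/dat.conf, mixing a DAPL v1 and a DAPL v2 provider
OpenIB-cma u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib0 0" ""
ofa-v2-ib0 u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "ib0 0" ""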
Probably you are using Red Hat or one of its clones; beware of RHEL 5.4, as it has some nasty bugs in DAPL and a strange DAPL config. My suggestion: use OFED from openfabrics.org. It works in most cases.
Andrey_D_Intel
Employee
Hi,

It looks like the cluster nodes have different configurations. Anyway, to get better performance I suggest using I_MPI_DEVICE=rdssm instead of I_MPI_DEVICE=rdma, or not setting it at all. For example:
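
A sketch based on Cameron's earlier command line (rdssm uses shared memory within a node and RDMA between nodes; "hostfile" and "./ctest" stand in for your own):

mpirun -r ssh -f hostfile -genv I_MPI_DEVICE rdssm -genv I_MPI_DEBUG 2 -n 2 ./ctest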

Best regards,
Andrey
