
DAPL debug under Intel MPI

itayberm
Beginner

Hello all,
Does anyone know how to enable DAPL debug output under Intel MPI?
I used -env DAPL_DBG_TYPE=0xffff on the mpiexec command line, but got no output (DAPL_DBG_DEST was set to stdout).

Thanks,
Itay.

Dmitry_K_Intel2
Employee
Hi Itay,

It works! I've just checked it.
Might the problem be in the syntax? You need to use a space instead of '=' between the variable name and its value:
% mpiexec -env DAPL_DBG_TYPE 0xfff ...
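If you want the variable set for every rank without repeating -env, Intel MPI's -genv should behave the same way, and exporting it in the launching shell usually works too. A minimal sketch, assuming your launcher propagates the environment (./a.out stands in for your binary):

% mpiexec -genv DAPL_DBG_TYPE 0xfff -n 2 ./a.out
% export DAPL_DBG_TYPE=0xfff; mpiexec -n 2 ./a.out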

Regards!
Dmitry
itayberm
Beginner

I used the proper syntax. Is it possible that the output below is all the output DAPL produces? I am not sure, especially since we see Intel MPI reporting that it uses the correct DAPL devices.

[root@dodly0 dapl-2.0.29]# mpiexec -ppn 1 -n 2 -env I_MPI_FABRICS dapl:dapl -env I_MPI_DEBUG 2 -env I_MPI_CHECK_DAPL_PROVIDER_MISMATCH none -env DAPL_DBG_TYPE 0xffff -env DAPL_IB_PKEY 0x8001 /tmp/osu
dodly0:27654: dapl_init: dbg_type=0xffff,dbg_dest=0x1
dodly4:6322: dapl_init: dbg_type=0xffff,dbg_dest=0x1
dodly4:6322: open_hca: device mthca0 not found
dodly4:6322: open_hca: device mthca0 not found
[1] MPI startup(): DAPL provider OpenIB-mlx4_0-1
[1] MPI startup(): dapl data transfer mode
[0] MPI startup(): DAPL provider OpenIB-mthca0-1
[0] MPI startup(): dapl data transfer mode
[0] MPI startup(): static connections storm algo

# OSU MPI Bandwidth Test v3.1.1
# Size       Bandwidth (MB/s)
1            0.42
2            0.85
4            1.69
8            3.37
16           6.76
32           13.45
64           26.67
128          52.58
256          102.56
512          196.50
1024         349.47
2048         567.88
4096         684.47
8192         750.29
16384        779.12
32768        678.73
65536        797.59
131072       875.71
262144       916.17
524288       940.29
1048576      955.39
2097152      963.28
4194304      967.39

Thanks,
Itay

Dmitry_K_Intel2
Employee
Itay,

I've got the following:

bash-3.2$ mpiexec -nolocal -ppn 1 -n 2 -env DAPL_DBG_TYPE 0xfff ./hello_c
node210.isv.intel.com:5967: dapl_init: dbg_type=0xfff,dbg_dest=0x1
node210.isv.intel.com:5967: dapl_query_hca: ib0 AF_INET 192.168.2.20
node210.isv.intel.com:5967: dapl_query_hca: (ver=a0) ep's 260032 ep_q 16351 evd's 65408 evd_q 4194303
node210.isv.intel.com:5967: dapl_query_hca: msg 1073741824 rdma 1073741824 iov's 32 lmr 524272 rmr 0 rd_in,out 16,128 inline=200
node210.isv.intel.com:5967: dapl_query_hca: ib0 AF_INET 192.168.2.20
node210.isv.intel.com:5967: dapl_query_hca: (ver=a0) ep's 260032 ep_q 16351 evd's 65408 evd_q 4194303
node210.isv.intel.com:5967: dapl_query_hca: msg 1073741824 rdma 1073741824 iov's 32 lmr 524272 rmr 0 rd_in,out 16,128 inline=200
node211.isv.intel.com:10430: dapl_init: dbg_type=0xfff,dbg_dest=0x1
node211.isv.intel.com:10430: dapl_query_hca: ib0 AF_INET 192.168.2.21
node211.isv.intel.com:10430: dapl_query_hca: (ver=a0) ep's 260032 ep_q 16351 evd's 65408 evd_q 4194303
node211.isv.intel.com:10430: dapl_query_hca: msg 1073741824 rdma 1073741824 iov's 32 lmr 524272 rmr 0 rd_in,out 16,128 inline=200
node211.isv.intel.com:10430: dapl_query_hca: ib0 AF_INET 192.168.2.21
node211.isv.intel.com:10430: dapl_query_hca: (ver=a0) ep's 260032 ep_q 16351 evd's 65408 evd_q 4194303
node211.isv.intel.com:10430: dapl_query_hca: msg 1073741824 rdma 1073741824 iov's 32 lmr 524272 rmr 0 rd_in,out 16,128 inline=200
node211.isv.intel.com:10430: dapl_query_hca: MAX msg 1073741824 dto 16351 iov 32 rdma i16,o128
node210.isv.intel.com:5967: dapl_query_hca: MAX msg 1073741824 dto 16351 iov 32 rdma i16,o128
Hello world: rank 0 of 2 running on node211.isv.intel.com
Hello world: rank 1 of 2 running on node210.isv.intel.com

Is this the information you expected?

Debug output shows:
[0] MPI startup(): DAPL provider ib0-v1
[1] MPI startup(): DAPL provider ib0-v1
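For reference, those startup lines come from Intel MPI's I_MPI_DEBUG output; as in your run above, level 2 is already enough to show which provider each rank selected:

% mpiexec -nolocal -ppn 1 -n 2 -env I_MPI_DEBUG 2 ./hello_c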

Might this be an issue with your DAPL build?
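The "open_hca: device mthca0 not found" lines on dodly4 look like DAPL walking past a dat.conf entry for an HCA that node doesn't have before it settles on OpenIB-mlx4_0-1. It may be worth checking which entries each node's DAT configuration lists; a sketch, assuming the default /etc/dat.conf location (DAT_OVERRIDE can point DAPL at a different file; /path/to/my_dat.conf is a placeholder):

% grep OpenIB /etc/dat.conf
% mpiexec -env DAT_OVERRIDE /path/to/my_dat.conf ...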

Regards!
Dmitry