Intel® MPI Library
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Infiniband-Intel MPI Performance MM5

jriocaton_es
Beginner
1,237 Views
Dear colleagues,

we are working on an InfiniBand DDR cluster running MM5. We are using the latest Intel MPI and Intel Fortran, and our mm5.mpp was compiled with the configuration suggested on this website.

This is how we launch it:
[c2@ Run]$ time /intel/impi/3.2.1.009/bin64/mpiexec -genv I_MPI_PIN_PROCS 0-7 -np 32 -env I_MPI_DEVICE rdma ./mm5.mpp

Everything seems to be OK: with -np 16, mpiexec gives about 75% better performance than Gigabit Ethernet, but with more than 16 processes the scaling gets worse. The main difference we have noticed between the Gigabit and InfiniBand runs is:

- InfiniBand only uses all the cores when np is 16 or lower; when np grows, it only uses 3 cores per machine.
- Gigabit always uses all the cores on all machines.

We have tried a lot of Intel MPI variables on the command line, for example I_MPI_PIN, but there is no way to change this behaviour. The MPI universe works fine over the InfiniBand network, and we use I_MPI_DEVICE rdma. The InfiniBand network itself is working well (performance and so on), because we have run some benchmarks and the results are fine.

What do you think about it? Could it be a consequence of the model we are using to compare performance?

Thanks a lot and best regards
jriocaton_es
Beginner
Sorry, I forgot to include this:

[c2@Run]$ more ../../../mpd.hosts
infi1:8 ifhn=192.168.10.1
infi6:8 ifhn=192.168.10.250
infi7:8 ifhn=192.168.10.249
infi8:8 ifhn=192.168.10.248
infi9:8 ifhn=192.168.10.247
infi10:8 ifhn=192.168.10.246
infi4:8 ifhn=192.168.10.252
infi5:8 ifhn=192.168.10.251

[c2@Run]$ /intel/impi/3.2.1.009/bin64/mpdboot -n 8 --ifhn=192.168.10.1 -f mpd.hosts -r ssh --verbose

[c2@Run]$ /intel/impi/3.2.1.009/bin64/mpdtrace -l
infi1_33980 (192.168.10.1)
infi8_50855 (192.168.10.248)
infi9_39762 (192.168.10.247)
infi7_44185 (192.168.10.249)
infi6_37134 (192.168.10.250)
infi4_55533 (192.168.10.252)
infi5_42161 (192.168.10.251)
infi10_33666 (192.168.10.246)

[c2@Run]$ /intel/impi/3.2.1.009/bin64/mpiexec -genv I_MPI_PIN_PROCS 0-7 -np 32 -env I_MPI_DEVICE rdma ./mm5.mpp



TimP
Honored Contributor III
You would be more likely to get the attention of cluster computing experts if you discussed this on the HPC forum.
The I_MPI_PIN_PROCS setting probably isn't useful, although it would have been required by earlier MPI versions. In fact, it overrides the built-in optimizations Intel MPI has for platforms such as Harpertown, where the cores aren't numbered in sequence. That might make a difference if you enabled shared-memory message passing.
Usually, a combined InfiniBand/shared-memory option (rdssm should be the default) scales to larger numbers of nodes and processes. I wonder if you allowed shared memory in your Gigabit configuration.
You didn't say enough about your hardware (CPU type, how much RAM) for much assistance to be given, and someone like me who isn't familiar with MM5 wouldn't know its memory requirements.
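For illustration, a launch along the lines Tim describes might look like this (a sketch only, reusing the installation path and binary from the original post; rdssm is the combined DAPL/shared-memory device in Intel MPI 3.x):

/intel/impi/3.2.1.009/bin64/mpiexec -np 32 -env I_MPI_DEVICE rdssm ./mm5.mpp
# no -genv I_MPI_PIN_PROCS here: the library's built-in core mapping is used
# rdssm uses DAPL/RDMA between nodes and shared memory within a node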
jriocaton_es
Beginner


Hi Tim, I've just changed the options and rdssm hasn't solved the problem; the behaviour is the same. Memory is not a limitation, and the processors are Xeon 54xx. The Gigabit vs. InfiniBand comparison is the following:
16 proc Gigabit -> 2'
16 proc InfiniBand -> 20"
32 proc Gigabit -> 4'
32 proc InfiniBand -> 5'20"
Do you think I should move this to the HPC forum? Is there any variable I could set, or anything else I could test, to improve the performance?
Thanks

P.S. This is top on one node when we launch 32 processes:

Cpu0 : 20.7%us, 79.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 0.0%us,100.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 0.3%us, 99.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.3%us, 99.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 0.3%us, 99.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 3.0%us, 97.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 49.5%us, 50.5%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 17.0%us, 83.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8170628k total, 1038224k used, 7132404k free, 246276k buffers
Swap: 1020116k total, 0k used, 1020116k free, 352364k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6853 caton2 25 0 77456 33m 5624 R 100 0.4 17:00.01 mm5.mpp
6854 caton2 25 0 78664 34m 5880 S 100 0.4 16:59.99 mm5.mpp
6855 caton2 25 0 78212 34m 5764 S 100 0.4 16:59.23 mm5.mpp
6852 caton2 25 0 77388 33m 5224 S 100 0.4 16:59.95 mm5.mpp
6856 caton2 25 0 79232 34m 6068 R 100 0.4 16:58.75 mm5.mpp
6857 caton2 25 0 78152 34m 5800 R 100 0.4 17:00.00 mm5.mpp
6858 caton2 25 0 77620 33m 5684 S 100 0.4 16:59.99 mm5.mpp
6859 caton2 25 0 77268 33m 5136 R 100 0.4 16:59.71 mm5.mpp

As you can see, CPU0, CPU6 and CPU7 are running the user processes, while CPU1-CPU5 are always busy in system time. It is always the same situation. Is there any way to reduce the system time on CPU1-CPU5?
Dmitry_K_Intel2
Employee
Hi jriocaton,

I'd suggest simplifying your mpd.hosts and command lines (a combined sketch follows the list):
1) Could you try removing ifhn=192.168.10.xxx from the mpd.hosts file?
2) Remove '--ifhn=192.168.10.1' from the mpdboot command.
3) Remove '-genv I_MPI_PIN_PROCS 0-7' from the mpiexec command line.
4) Try using rdssm instead of rdma in '-env I_MPI_DEVICE rdma'.
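Putting those four points together, a minimal sketch might look like this (the installation path, process counts and binary are taken from the earlier posts; the exact node list is illustrative):

# mpd.hosts: plain host names with 8 processes per node, no ifhn= entries
infi1:8
infi4:8
infi5:8
infi6:8

mpdallexit                                   # tear down any old ring first
mpdboot -n 4 -f mpd.hosts -r ssh --verbose   # one entry per node in mpd.hosts
mpiexec -np 32 -env I_MPI_DEVICE rdssm ./mm5.mpp   # no explicit pinning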

Of course, you need to kill all mpd processes first ('mpdallexit').

Intel MPI uses a so-called fallback technique - to disable it, set I_MPI_FALLBACK_DEVICE to 0.

Compare the performance with 16 and 32 processes.
If performance with 32 processes is worse, please run with I_MPI_DEBUG=5 and send me the output. This debug level prints the pinning table and can give you a clue.

Best wishes,
Dmitry
jriocaton_es
Beginner
Dear Dmitry,

thanks a lot for your interest.

I've made the changes you suggested and the results are the same. This is the debug output:

[c2@quijote Run]$ time /intel/impi/3.2.1.009/bin64/mpiexec -genv I_MPI_FALLBACK_DEVICE 1 -np 32 -env I_MPI_DEBUG 5 -env I_MPI_DEVICE rdssm ./mm5.mpp
[1] MPI startup(): DAPL provider on rank 1:infi1
[2] MPI startup(): DAPL provider on rank 2:infi1
[4] MPI startup(): DAPL provider on rank 4:infi1
[5] MPI startup(): DAPL provider on rank 5:infi1
[6] MPI startup(): DAPL provider on rank 6:infi1
[3] MPI startup(): DAPL provider on rank 3:infi1
[9] MPI startup(): DAPL provider on rank 9:infi9
[10] MPI startup(): DAPL provider on rank 10:infi9
[7] MPI startup(): DAPL provider on rank 7:infi1
[8] MPI startup(): DAPL provider on rank 8:infi9
[12] MPI startup(): DAPL provider on rank 12:infi9
[15] MPI startup(): DAPL provider on rank 15:infi9
[13] MPI startup(): DAPL provider on rank 13:infi9
[14] MPI startup(): DAPL provider on rank 14:infi9
[17] MPI startup(): DAPL provider on rank 17:infi8
[19] MPI startup(): DAPL provider on rank 19:infi8
[18] MPI startup(): DAPL provider on rank 18:infi8
[20] MPI startup(): DAPL provider on rank 20:infi8
[21] MPI startup(): DAPL provider on rank 21:infi8
[22] MPI startup(): DAPL provider on rank 22:infi8
[11] MPI startup(): DAPL provider on rank 11:infi9
[24] MPI startup(): DAPL provider on rank 24:infi7
[16] MPI startup(): DAPL provider on rank 16:infi8
[27] MPI startup(): DAPL provider on rank 27:infi7
[29] MPI startup(): DAPL provider on rank 29:infi7
[30] MPI startup(): DAPL provider on rank 30:infi7
[31] MPI startup(): DAPL provider on rank 31:infi7
[23] MPI startup(): DAPL provider on rank 23:infi8
[25] MPI startup(): DAPL provider on rank 25:infi7
[26] MPI startup(): DAPL provider on rank 26:infi7
[28] MPI startup(): DAPL provider on rank 28:infi7
[0] MPI startup(): shared memory and socket data transfer modes
[1] MPI startup(): shared memory and socket data transfer modes
[2] MPI startup(): shared memory and socket data transfer modes
[3] MPI startup(): shared memory and socket data transfer modes
[4] MPI startup(): shared memory and socket data transfer modes
[5] MPI startup(): shared memory and socket data transfer modes
[6] MPI startup(): shared memory and socket data transfer modes
[8] MPI startup(): shared memory and socket data transfer modes
[7] MPI startup(): shared memory and socket data transfer modes
[9] MPI startup(): shared memory and socket data transfer modes
[10] MPI startup(): shared memory and socket data transfer modes
[11] MPI startup(): shared memory and socket data transfer modes
[13] MPI startup(): shared memory and socket data transfer modes
[12] MPI startup(): shared memory and socket data transfer modes
[14] MPI startup(): shared memory and socket data transfer modes
[15] MPI startup(): shared memory and socket data transfer modes
[17] MPI startup(): shared memory and socket data transfer modes
[19] MPI startup(): shared memory and socket data transfer modes
[16] MPI startup(): shared memory and socket data transfer modes
[18] MPI startup(): shared memory and socket data transfer modes
[20] MPI startup(): shared memory and socket data transfer modes
[22] MPI startup(): shared memory and socket data transfer modes
[21] MPI startup(): shared memory and socket data transfer modes
[23] MPI startup(): shared memory and socket data transfer modes
[24] MPI startup(): shared memory and socket data transfer modes
[25] MPI startup(): shared memory and socket data transfer modes
[27] MPI startup(): shared memory and socket data transfer modes
[28] MPI startup(): shared memory and socket data transfer modes
[29] MPI startup(): shared memory and socket data transfer modes
[26] MPI startup(): shared memory and socket data transfer modes
[30] MPI startup(): shared memory and socket data transfer modes
[31] MPI startup(): shared memory and socket data transfer modes
[5] MPI Startup(): process is pinned to CPU05 on node quijote.cluster
[2] MPI Startup(): process is pinned to CPU02 on node quijote.cluster
[6] MPI Startup(): process is pinned to CPU03 on node quijote.cluster
[1] MPI Startup(): process is pinned to CPU04 on node quijote.cluster
[4] MPI Startup(): process is pinned to CPU01 on node quijote.cluster
[0] MPI Startup(): process is pinned to CPU00 on node quijote.cluster
[9] MPI Startup(): process is pinned to CPU04 on node compute-0-7.local
[10] MPI Startup(): [8] MPI Startup(): process is pinned to CPU00 on node compute-0-7.local
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 1
[3] MPI Startup(): process is pinned to CPU02 on node compute-0-7.localprocess is pinned to CPU06 on node quijote.cluster[12] MPI Startup(): process is pinned to CPU01 on node compute-0-7.local
[14] MPI Startup(): process is pinned to CPU03 on node compute-0-7.local


compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 9
[11] MPI Startup(): process is pinned to CPU06 on node compute-0-7.local[13] MPI Startup(): process is pinned to CPU05 on node compute-0-7.local
[15] MPI Startup(): quijote.cluster -- rsl_nproc_all 32, rsl_myproc 5
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 13
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 14
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 3
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 10
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 11
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 12

process is pinned to CPU07 on node compute-0-7.local
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 8
compute-0-7.local -- rsl_nproc_all 32, rsl_myproc 15
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 2
[7] MPI Startup(): process is pinned to CPU07 on node quijote.cluster
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 6
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 7
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 4
[19] MPI Startup(): process is pinned to CPU06 on node compute-0-6.local
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 19
[17] MPI Startup(): process is pinned to CPU04 on node compute-0-6.local
[20] MPI Startup(): process is pinned to CPU01 on node compute-0-6.local
[23] MPI Startup(): process is pinned to CPU07 on node compute-0-6.local
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 17
[25] MPI Startup(): process is pinned to CPU04 on node compute-0-5.local
[29] MPI Startup(): process is pinned to CPU05 on node compute-0-5.local
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 25
[27] MPI Startup(): process is pinned to CPU06 on node compute-0-5.local
[24] MPI Startup(): process is pinned to CPU00 on node compute-0-5.local
[30] MPI Startup(): process is pinned to CPU03 on node compute-0-5.local
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 27
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 29
[28] MPI Startup(): process is pinned to CPU01 on node compute-0-5.local
[26] MPI Startup(): process is pinned to CPU02 on node compute-0-5.local
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 28
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 30
[31] MPI Startup(): process is pinned to CPU07 on node compute-0-5.local
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 31
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 24
compute-0-5.local -- rsl_nproc_all 32, rsl_myproc 26
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 23
[16] MPI Startup(): process is pinned to CPU00 on node compute-0-6.local
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 20
[0] Rank Pid Node name Pin cpu
[0] 0 31791 quijote.cluster 0
[0] 1 31784 quijote.cluster 4
[0] 2 31785 quijote.cluster 2
[0] 3 31786 quijote.cluster 6
[0] 4 31787 quijote.cluster 1
[0] 5 31788 quijote.cluster 5
[0] 6 31789 quijote.cluster 3
[0] 7 31790 quijote.cluster 7
[0] 8 23649 compute-0-7.local 0
[0] 9 23656 compute-0-7.local 4
[0] 10 23650 compute-0-7.local 2
[0] 11 23652 compute-0-7.local 6
[0] 12 23651 compute-0-7.local 1
[0] 13 23654 compute-0-7.local 5
[0] 14 23653 compute-0-7.local 3
[0] 15 23655 compute-0-7.local 7
[0] 16 10775 compute-0-6.local 0
[0] 17 10776 compute-0-6.local 4
[0] 18 10777 compute-0-6.local 2
[0] 19 10778 compute-0-6.local 6
[0] 20 10779 compute-0-6.local 1
[0] 21 10780 compute-0-6.local 5
[0] 22 10781 compute-0-6.local 3
[0] 23 10782 compute-0-6.local 7
[0] 24 20680 compute-0-5.local 0
[0] 25 20681 compute-0-5.local 4
[0] 26 20682 compute-0-5.local 2
[0] 27 20683 compute-0-5.local 6
[0] 28 20684 compute-0-5.local 1
[0] 29 20685 compute-0-5.local 5
[0] 30 20686 compute-0-5.local 3
[0] 31 20687 compute-0-5.local 7
[0] Init(): I_MPI_DEBUG=5
[0] Init(): I_MPI_DEVICE=rdssm
[0] Init(): I_MPI_FALLBACK_DEVICE=1
[0] Init(): MPICH_INTERFACE_HOSTNAME=192.168.10.1
[22] MPI Startup(): process is pinned to CPU03 on node compute-0-6.local
[21] MPI Startup(): process is pinned to CPU05 on node compute-0-6.local
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 16
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 22
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 21
compute-0-6.local -- rsl_nproc_all 32, rsl_myproc 18
[18] MPI Startup(): process is pinned to CPU02 on node compute-0-6.local
quijote.cluster -- rsl_nproc_all 32, rsl_myproc 0


When I try to set I_MPI_FALLBACK_DEVICE to 0, I get these errors:

[c2@Run]$ /intel/impi/3.2.1.009/bin64/mpiexec -genv I_MPI_FALLBACK_DEVICE 0 -np 32 -env I_MPI_DEVICE rdssm ./mm5.mpp
[1] DAPL provider is not found and fallback device is not enabled
[cli_1]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[3] DAPL provider is not found and fallback device is not enabled
[0] DAPL provider is not found and fallback device is not enabled
[cli_0]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[2] DAPL provider is not found and fallback device is not enabled
[cli_2]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[4] DAPL provider is not found and fallback device is not enabled
[cli_4]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[5] DAPL provider is not found and fallback device is not enabled
[cli_5]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[6] DAPL provider is not found and fallback device is not enabled
[cli_6]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[9] DAPL provider is not found and fallback device is not enabled
[cli_9]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[8] DAPL provider is not found and fallback device is not enabled
[13] DAPL provider is not found and fallback device is not enabled
[cli_13]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[10] DAPL provider is not found and fallback device is not enabled
[cli_10]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[12] DAPL provider is not found and fallback device is not enabled
[cli_12]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[11] DAPL provider is not found and fallback device is not enabled
[cli_11]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[14] DAPL provider is not found and fallback device is not enabled
[cli_14]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[15] DAPL provider is not found and fallback device is not enabled
[cli_15]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[16] DAPL provider is not found and fallback device is not enabled
[cli_16]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[20] DAPL provider is not found and fallback device is not enabled
[cli_20]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[19] DAPL provider is not found and fallback device is not enabled
[cli_19]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[21] DAPL provider is not found and fallback device is not enabled
[cli_21]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[22] DAPL provider is not found and fallback device is not enabled
[cli_22]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[25] DAPL provider is not found and fallback device is not enabled
[24] DAPL provider is not found and fallback device is not enabled
[cli_24]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[26] DAPL provider is not found and fallback device is not enabled
[cli_26]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[27] DAPL provider is not found and fallback device is not enabled
[cli_27]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[cli_25]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[28] DAPL provider is not found and fallback device is not enabled
[cli_28]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
[29] DAPL provider is not found and fallback device is not enabled
[cli_29]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
rank 27 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 27: return code 13
[30] DAPL provider is not found and fallback device is not enabled
[cli_30]: aborting job:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(283): Initialization failed
MPIDD_Init(98).......: channel initialization failed
MPIDI_CH3_Init(163)..: generic failure with errno = -1
(unknown)():
rank 25 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 25: return code 13
rank 20 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 20: return code 13
rank 19 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 19: return code 13
rank 14 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 14: return code 13
rank 13 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 13: return code 13
rank 12 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 12: return code 13
rank 11 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 11: return code 13
rank 10 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 10: return code 13
rank 4 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 4: killed by signal 9
rank 1 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 1: return code 13
rank 0 in job 2 infi1_60665 caused collective abort of all ranks
exit status of rank 0: return code 13

Thanks a lot and best regards
Dmitry_K_Intel2
Employee
jriocaton,

could you provide your /etc/dat.conf file as well?

Cheers
Dmitry

jriocaton_es
Beginner

Thanks again Dmitry,

we have never worked with /etc/dat.conf. I have read something about it, but we haven't worked with it yet...

This is our /usr/bin/lib64

[root@quijote lib64]# ls -la | grep cma
lrwxrwxrwx 1 root root 19 mar 31 19:11 libdaplcma.so.1 -> libdaplcma.so.1.0.2
-rwxr-xr-x 1 root root 98560 may 25 2008 libdaplcma.so.1.0.2

Thanks.

Julio.

Dmitry_K_Intel2
Employee
Hi Julio,

The output:
[1] DAPL provider is not found and fallback device is not enabled
with I_MPI_FALLBACK_DEVICE=0 shows that something is wrong with your DAPL settings.

How did you switch between InfiniBand and Gigabit Ethernet?

Set I_MPI_FALLBACK_DEVICE=1 and try to run 'mpiexec' for both devices with I_MPI_DEBUG=2.
A message like:
[0] MPI startup(): shared memory and socket data transfer modes
shows you the real transfer mode.

I would recommend using OFED version 1.4.1.

Best wishes,
Dmitry
Dmitry_K_Intel2
Employee
I meant a command line like:
mpiexec -genv I_MPI_FALLBACK_DEVICE 1 -np 16 -env I_MPI_DEBUG 2 -env I_MPI_DEVICE rdma ./mm5.mpp

both for InfiniBand and for Gigabit Ethernet.
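For the Gigabit Ethernet run, one way is to force plain TCP with the sock device; a sketch of the two comparison runs could look like this (using sock for the Gigabit case is my assumption for this Intel MPI 3.x setup, not something specified in the thread):

mpiexec -genv I_MPI_FALLBACK_DEVICE 1 -np 16 -env I_MPI_DEBUG 2 -env I_MPI_DEVICE rdma ./mm5.mpp   # InfiniBand via DAPL
mpiexec -genv I_MPI_FALLBACK_DEVICE 1 -np 16 -env I_MPI_DEBUG 2 -env I_MPI_DEVICE sock ./mm5.mpp   # Gigabit Ethernet via TCP sockets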
jriocaton_es
Beginner
Thanks again Dmitry,

we have downloaded OFED from the QLogic website (we are using CentOS 5.1 as the OS). What do you recommend installing? There are a lot of packages.

Thanks a lot and best regards
Dmitry_K_Intel2
Employee
Hi Julio,

It seems that something is wrong with the cluster settings. I'm afraid rdma didn't work at all.
It's very strange that there is no /etc/dat.conf file. I'm not familiar with QLogic devices, but they should provide drivers for their cards. Please try to find the dat.conf file - it has to exist. Move it (or create a link to it) into the /etc directory on all nodes.
The dat.conf file should contain lines like:
OpenIB-cma-1 u1.2 nonthreadsafe default /usr/lib/libdaplcma.so dapl.1.2 "ib1 0" ""

QLogic should also provide a utility that can verify the InfiniBand card is working correctly. Check that the devices work as expected on all nodes.
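If the standard OFED diagnostic tools are installed (an assumption on my part; the QLogic stack may ship its own equivalents), a quick per-node check could look like:

ibstat        # port state should be Active and the rate should match DDR
ibv_devinfo   # verbose HCA and port information
ibhosts       # run on one node: lists the hosts visible on the fabric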

mpd.hosts should look like:
infi1:8

Be sure that there are no I_MPI environment variables set from previous attempts.
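For example, a quick check and cleanup might be (the variable names shown are just the ones that have come up in this thread):

env | grep '^I_MPI_'                 # list anything left over from earlier runs
unset I_MPI_DEVICE I_MPI_PIN_PROCS   # clear variables set in previous attempts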

Start the MPD ring:
mpdboot -n 8 -f mpd.hosts -r ssh --verbose

Start your application:
mpiexec -np 16 -env I_MPI_DEBUG 2 -env I_MPI_DEVICE rdma ./mm5.mpp

And please attach the output both for InfiniBand and for Gigabit Ethernet. This debug level (2) will show which device has been chosen.

Best wishes,
Dmitry
jriocaton_es
Beginner
Dear Dmitry,

I'm sorry for the delay, but I was travelling out of the office.

We solved the problem by updating the drivers and libraries. Thanks a lot for your help.
Dmitry_K_Intel2
Employee
Quoting - jriocaton.es
Dear Dmitry,

I'm sorry for the delay, but I was travelling out of the office.

We solved the problem by updating the drivers and libraries. Thanks a lot for your help.

Hi Julio,

Nice to hear that the problem is resolved!
Could you provide details about the drivers and libraries you updated? This information could be useful for others.

Best wishes,
Dmitry
jriocaton_es
Beginner
- QLogic OFED+ 1.4.0.1.30
- QLogic SRP v1.4.0.1.5
- QLogic VNIC v1.4.0.1.6
- QLogic IB Tools v4.4.1.0.11
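A quick way to confirm the updated stack is actually being used (a sketch based on the debug runs earlier in this thread) is to rerun with a low debug level and check the reported transfer mode:

mpiexec -np 16 -env I_MPI_DEBUG 2 -env I_MPI_DEVICE rdma ./mm5.mpp
# the MPI startup lines should now report an RDMA/DAPL data transfer mode
# rather than "shared memory and socket data transfer modes"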