Intel® oneAPI Math Kernel Library

IMPI dapl fabric error

Sangamesh_B_
Beginner

Hi, I'm trying to run the HPL benchmark on an Ivy Bridge Xeon processor with two Xeon Phi 7120P MIC cards. I'm using the offload xhpl binary from the Intel LINPACK distribution.

It throws the following error:

$ bash runme_offload_intel64
This is a SAMPLE run script.  Change it to reflect the correct number
of CPUs/threads, number of nodes, MPI processes per node, etc..

MPI_RANK_FOR_NODE=1 NODE=1, CORE=, MIC=1, SHARE=
MPI_RANK_FOR_NODE=0 NODE=0, CORE=, MIC=0, SHARE=
[1] MPI startup(): dapl fabric is not available and fallback fabric is not enabled
[0] MPI startup(): dapl fabric is not available and fallback fabric is not enabled

I searched this forum for the same error and learned that the I_MPI_DEVICES variable should be unset. That made HPL run, but performance is very low, just 50% efficiency. On another node with the same hardware, HPL efficiency is 84%. Below is a short excerpt of the openibd status output from both systems, which shows the difference.

ON NODE with HPL 84%:
Currently active MLX4_EN devices:
(no devices listed)

ON NODE with HPL 50%:
Currently active MLX4_EN devices:
eth0
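
For a side-by-side check of the fabric state on the two nodes, the standard OFED tools can be used; a minimal sketch (this assumes the ibstat and ibv_devinfo utilities are installed and that DAPL providers are configured in /etc/dat.conf):

$ service openibd status                 # full fabric status, as excerpted above
$ ibstat | grep -i state                 # HCA port state; should report Active, not Down
$ ibv_devinfo | grep -E 'hca_id|state'   # verbs devices and their port states
$ cat /etc/dat.conf                      # DAPL providers that Intel MPI can open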

Can someone guide me on how to resolve this?

 

James_T_Intel
Moderator

From what I see, you are only running one rank, independently on each node.  Is this your intent?

What InfiniBand* devices do you have in your cluster?

Sincerely,
James Tullos
Technical Consulting Engineer
Intel® Cluster Tools

Sangamesh_B_
Beginner

My intent was to find out whether the dapl fabric error is causing the low HPL performance. The same benchmark was run separately on two nodes that have the same hardware and software configuration. One gives 84% HPL efficiency and the other gives 50%.

First attempt: I executed the benchmark. It exited immediately without running, throwing the dapl fabric error.

Second attempt: I ran "unset I_MPI_FABRIC" and "unset I_MPI_DEVICES". The benchmark then ran, but performance is just 50%.

My questions: Why is there a dapl fabric error? What is causing the low performance?
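
For reference, fabric selection with Intel MPI is normally steered through environment variables rather than by unsetting them; a minimal sketch, using variable names from the Intel MPI 4.x documentation (whether TCP is an acceptable fallback here is an assumption):

$ export I_MPI_FABRICS=shm:dapl    # request shared memory within a node and DAPL between nodes
$ export I_MPI_FALLBACK=enable     # allow a fallback fabric if the DAPL provider cannot be opened
$ export I_MPI_DEBUG=5             # report which fabric each rank actually selected
$ bash runme_offload_intel64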

James_T_Intel
Moderator

The error you are getting indicates that you do not have DAPL* available on this system.  This will lower performance if you are using multiple nodes.  But from what you're saying, it sounds like you are only using one node.  If you are only using one node, the performance will be unaffected by the network.

Sangamesh_B_
Beginner

Yes, this is a single system benchmark. 

May I know how to check whether Turbo mode is enabled or disabled on Linux, without rebooting into the BIOS?

James_T_Intel
Moderator

I don't know how you could check Turbo mode from inside the operating system other than by attempting to activate it.
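
For what it is worth, on recent kernels the Turbo setting is often exposed through sysfs; a minimal sketch, assuming the intel_pstate driver is loaded (the acpi-cpufreq path is shown as an alternative):

$ cat /sys/devices/system/cpu/intel_pstate/no_turbo   # 0 = Turbo enabled, 1 = Turbo disabled (intel_pstate)
$ cat /sys/devices/system/cpu/cpufreq/boost           # 1 = Turbo enabled (acpi-cpufreq)
$ grep MHz /proc/cpuinfo | sort -u                    # compare the running frequency against the nominal clock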

Since you're only running on a single node with HPL, I'm going to move this thread to the Intel® Math Kernel Library forum.

kiran_s_
Beginner
Whenever I run the command below, I get the following errors:
 
time mpiexec.hydra -machinefile hostfile2 -n 96 ./a.out >out.txt 
 
bash: /opt/intel//impi/4.1.3.048/intel64/bin/pmi_proxy: No such file or directory
[mpiexec@nits-hpc] HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:239): assert (!closed) failed
[mpiexec@nits-hpc] ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:127): unable to send SIGUSR1 downstream
[mpiexec@nits-hpc] HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@nits-hpc] HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:435): error waiting for event
[mpiexec@nits-hpc] main (./ui/mpich/mpiexec.c:901): process manager error waiting for completion.
 
Please help.
Zhang_Z_Intel
Employee

Kiran,

Your problem seems to be related to MPI. Does your cluster have InfiniBand, and is it correctly installed and configured? First, try to run a simple MPI program on your cluster with the same configuration. Fix any MPI and/or InfiniBand issues before you try HPL again.
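
A minimal sketch of such a check, reusing the hostfile and installation path from the command above and assuming passwordless ssh to every node:

$ for h in $(cat hostfile2); do ssh $h ls /opt/intel/impi/4.1.3.048/intel64/bin/pmi_proxy; done   # pmi_proxy must exist on every node
$ source /opt/intel/impi/4.1.3.048/intel64/bin/mpivars.sh   # set up the Intel MPI environment on the launch node
$ mpiexec.hydra -machinefile hostfile2 -n 4 hostname        # trivial launch: verifies the Hydra launcher before running a real MPI program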

 
