Intel® MPI Library

Hybrid MPI/OpenMP showing poor performance when using all cores on a socket vs. N-1 cores per socket

Gandharv_K_
Beginner

Hi,

I'm running a hybrid MPI/OpenMP application on an Intel® Xeon® E5-2600 v3 (Haswell) series cluster and I see roughly a 40% drop in performance when using all N cores on a socket versus N-1 cores per socket. The behavior is most pronounced at higher core counts (>= 160 cores in total). The cluster has 2 CPUs per node. As a test case I ran a similar job on an Intel® Xeon® E5-2600 (Sandy Bridge) series cluster and do not see this behavior there; the all-core and N-1 results are comparable.
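To illustrate the two configurations (the rank counts, the 12-cores-per-socket figure, and the pinning settings below are just an example, not my exact command line):

    # All cores: 1 MPI rank per socket, one OpenMP thread per core
    export OMP_NUM_THREADS=12
    mpirun -n 32 -ppn 2 -genv I_MPI_PIN_DOMAIN socket ./app

    # N-1 cores per socket: same rank layout, one core per socket left idle
    export OMP_NUM_THREADS=11
    mpirun -n 32 -ppn 2 -genv I_MPI_PIN_DOMAIN socket ./app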

I'm using Intel MPI Library 5.0. Both clusters use the same InfiniBand hardware. Profiling shows the extra time is spent in MPI, and the application only performs MPI communication outside of OpenMP regions. Any help would be appreciated.

Thanks,

GK

Gregg_S_Intel
Employee

Perhaps the MPI library or the network layer is running a helper thread to progress the MPI messages? If every core on the socket is already occupied by an OpenMP thread, such a helper thread has to compete for CPU time, whereas with N-1 cores per socket one core is left free for it.

Gandharv_K_
Beginner

How can I check this? Would simply running top while the job is running show such a thread?
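For reference, I'm thinking of something along these lines to count the threads of a single rank while the job runs; <rank_pid> below is just a placeholder for the rank's process ID taken from top:

    # Per-thread CPU usage of one MPI rank (or press 'H' inside top)
    top -H -p <rank_pid>

    # One-shot list of all threads (LWPs) of the rank; more threads than
    # OMP_NUM_THREADS would suggest an extra helper/progress thread
    ps -Lf -p <rank_pid>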
