Gandharv_K_
Beginner

Hybrid MPI/OpenMP showing poor performance when using all cores on a socket vs. N-1 cores per socket

Hi,

I'm running a hybrid MPI/OpenMP application on Intel® Xeon® E5-2600 v3 (Haswell) series processors and I see a ~40% drop in performance when using all N cores on a socket vs. N-1 cores per socket. The behavior is most pronounced at higher core counts (>= 160 cores). The cluster has 2 CPUs per node. As a test case I ran a similar job on Intel® Xeon® E5-2600 series (Sandy Bridge) processors and I don't see this behavior; performance there is comparable in both configurations.

I'm using Intel MPI 5.0, and both clusters use the same InfiniBand hardware. Profiling shows that the extra time is spent in MPI, which is what causes the drop. The application performs MPI communication only outside OpenMP regions. Any help would be appreciated.
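For reference, a configuration sketch of how the N-1 case might be set up with Intel MPI. The core count (13 of 14 per socket) and binary name are assumptions, not the actual job script:

```shell
# Configuration sketch only -- values are assumptions for a 14-core
# E5-2600 v3 socket, 2 sockets per node.
export OMP_NUM_THREADS=13             # N-1 OpenMP threads per rank
export I_MPI_PIN_DOMAIN=socket        # pin one rank per socket
export KMP_AFFINITY=compact,granularity=core
# mpirun -ppn 2 -n <ranks> ./app     # launch: 2 ranks per node ("./app" is a placeholder)
```

Leaving one core per socket free this way means any extra thread the MPI or network stack spins up has somewhere to run without displacing an OpenMP worker.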

Thanks,

GK

Gregg_S_Intel
Employee

Perhaps the MPI library or the network layer is running a helper thread to progress the MPI messages?

Gandharv_K_
Beginner

How can I check this? Would running a simple top while the job is running show such a thread?
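One way to look for such a thread (a sketch, assuming a Linux cluster with the usual procps tools; "app" is a placeholder binary name): list every thread (LWP) of a rank with ps -L, or watch per-thread CPU with top -H. Any busy thread beyond the OpenMP team would be a candidate helper/progress thread.

```shell
# List all threads (LWPs) of a process, including the CPU (PSR) each
# last ran on. "$$" (this shell's own PID) is just for demonstration;
# substitute a rank's PID, e.g. "$(pgrep -o app)" for a binary "app".
ps -L -p "$$" -o pid,lwp,nlwp,psr,comm

# For a live per-thread CPU view (interactive):
#   top -H -p <rank_pid>
```

If the thread count (NLWP) exceeds OMP_NUM_THREADS, the extras are worth investigating.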
