Hi,
I'm running a hybrid MPI/OpenMP application on Intel® E5-2600 v3 (Haswell) Series processors, and I see a roughly 40% performance drop when using all N cores on a socket versus N-1 cores per socket. The effect is most pronounced at higher core counts (>= 160 cores). The cluster has 2 CPUs per node. As a test, I ran a similar case on Intel® E5-2600 Series (Sandy Bridge) processors and do not see this behavior; the performance is comparable.
I'm using Intel MPI 5.0, and both clusters use the same InfiniBand hardware. Profiling shows that MPI time accounts for the performance drop. The application performs MPI communication only outside OpenMP regions. Any help would be appreciated.
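For reference, here is a minimal sketch of the structure I'm describing (placeholder names and sizes, not the actual application): each rank computes inside an OpenMP parallel region, and only the master thread calls MPI between regions.

```c
/* Minimal illustrative sketch of the hybrid structure described above.
 * The problem size and the reduction are placeholders, not the real workload. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* FUNNELED is sufficient when only the master thread calls MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int n = 1 << 20;                      /* placeholder local size */
    double *data = malloc(n * sizeof(double));
    for (int i = 0; i < n; ++i) data[i] = (double)i;

    double local_sum = 0.0, global_sum = 0.0;

    /* Compute phase: all OpenMP threads of this rank work here */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n; ++i)
        local_sum += data[i] * 0.5;

    /* Communication phase: outside the OpenMP region, master thread only */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d sum=%f\n",
               nranks, omp_get_max_threads(), global_sum);

    free(data);
    MPI_Finalize();
    return 0;
}
```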
Thanks,
GK
You should probably ask that question in the HPC forum: https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology . They deal with MPI issues.
I'd start with the Intel MPI Library Troubleshooting Guide: https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/563559 .
Hello,
We have posted the new Application Performance Snapshot 2018 Beta, which can give some insight into MPI, OpenMP, and memory access efficiency.
It should be very easy to deploy - just unzip and run.
It would be interesting to see the reports from these two runs side by side - the statistics may give a clue about the reason for the performance drop.
Thanks & Regards, Dmitry
Thanks, Dmitry. I will report back.
- GK