Software Tuning, Performance Optimization & Platform Monitoring
Discussion regarding monitoring and software tuning methodologies, Performance Monitoring Unit (PMU) of Intel microprocessors, and platform updating.

Unexpected OpenMP performance gain

Barkhudarov__Michael

We have a CFD application compiled on Windows with the Intel Fortran compiler, version 16.0.1.146.

It runs on a Windows 10 workstation with two Xeon E5-2630 v3 processors @ 2.4 GHz, 8 physical cores each (16 in total), and 64 GB of RAM. Hyper-threading is disabled.

The application is OpenMP-parallel, with mostly static scheduling. It runs on all 16 cores and utilizes them at almost 100% (measured as CPU_time/(nprocs*clock_time)). While this simulation is running, we start a second instance of the application, also on all 16 cores. The two instances (simulations) are completely independent of each other. Naturally, each uses about 50% of the CPU power.
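To be clear about that number, the utilization figure is computed roughly as in the sketch below (a simplified stand-in, not the actual solver code; the loop body and array size are placeholders):

program utilization_sketch
   use omp_lib
   implicit none
   integer, parameter :: n = 10000000
   integer :: i
   real(8) :: cpu0, cpu1, wall0, wall1
   real(8), allocatable :: a(:)

   allocate(a(n)); a = 1.0d0

   call cpu_time(cpu0)            ! process CPU time (summed over all threads)
   wall0 = omp_get_wtime()        ! wall-clock time

   !$omp parallel do schedule(static)
   do i = 1, n
      a(i) = sqrt(a(i)) + 1.0d0   ! stand-in for the real CFD work
   end do
   !$omp end parallel do

   call cpu_time(cpu1)
   wall1 = omp_get_wtime()

   print '(a,f6.1,a)', 'utilization = ', &
      100.0d0*(cpu1 - cpu0)/(omp_get_max_threads()*(wall1 - wall0)), ' %'
end program utilization_sketch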

When the second simulation starts, the first one speeds up by a factor of 2, sometimes even 2.5. As soon as the second simulation stops, the first one goes back to its original speed.

We are struggling to understand this behavior. No special affinity settings are used in either run; both rely on the defaults.
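For reference, if we were to set affinity explicitly, I believe it would look something like the following on Windows (cfd_app.exe is a placeholder name, and the assumption that cores 0-7 sit on socket 0 and cores 8-15 on socket 1 depends on how the OS enumerates them):

rem keep each instance's threads from migrating between cores
set OMP_PLACES=cores
set OMP_PROC_BIND=close

rem restrict the first instance to socket 0 and the second to socket 1
start /affinity FF   cfd_app.exe case1
start /affinity FF00 cfd_app.exe case2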

Michael

4 Replies
TimP
Honored Contributor III
A likely explanation for superlinear speedup would be that an application benefits from an increase in the total cache size available to it.
Barkhudarov__Michael

Tim, could you please elaborate? Why is the cache size increased when the second simulation starts? More importantly, can the cache size be increased when only one simulation runs?

Also, would you call this behavior 'superlinear speedup'?

McCalpinJohn
Honored Contributor III

Without controlling process and memory affinity, it is extremely difficult to understand the behavior of parallel programs on any system (and it is generally not worth the effort to try).

It is actually not too difficult to come up with hypotheses that would result in the behavior you observe, but without control (or extensive instrumentation), there is no way to evaluate such hypotheses....

Simple example:

Your first program uses all 16 cores, but instantiates all of its memory on socket 0.  Its "normal" mode of operation is slow because all the threads running on socket 1 are accessing their data remotely.   When the second program is started, the operating system packs all the threads of the first program into socket 0, and packs all the threads of the second program into socket 1.  Now the first program is accessing local memory and runs much faster.   2x to 2.5x is not out of the question.
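If that hypothesis turns out to be correct, the usual mitigation is to initialize the data in a parallel region with the same static schedule as the compute loops, so that each page is first touched, and therefore physically placed, on the socket of the thread that will later use it (this only pays off if thread affinity is also pinned). A rough sketch, not your actual code:

program first_touch_sketch
   implicit none
   integer, parameter :: n = 50000000
   integer :: i
   real(8), allocatable :: p(:)

   allocate(p(n))           ! reserves address space; physical pages are not placed yet

   ! first touch: each thread writes the chunk it will later compute on,
   ! so the OS places those pages on that thread's NUMA node
   !$omp parallel do schedule(static)
   do i = 1, n
      p(i) = 0.0d0
   end do
   !$omp end parallel do

   ! compute loops that use the same static schedule then access mostly local memory
   !$omp parallel do schedule(static)
   do i = 1, n
      p(i) = p(i) + 1.0d0
   end do
   !$omp end parallel do

   print *, p(1), p(n)
end program first_touch_sketch

Whether this applies to your code depends on where and how the arrays are actually allocated and initialized.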

 

Barkhudarov__Michael

Thank you, John. This makes sense. With memory-access-bound performance, which is the case for our application, it is easy to imagine how the reduction in available CPU power can be more than compensated by faster memory access. We are going to try VTune to analyze the situation.
