Software Tuning, Performance Optimization & Platform Monitoring
Discussion regarding monitoring and software tuning methodologies, Performance Monitoring Unit (PMU) of Intel microprocessors, and platform updating.

How to find the reason for poor scalability on Xeon Phi?

Surya_Narayanan_N_
485 Views

Hello,

     I am running some multithreaded benchmark programs on the MIC. Some programs don't scale beyond 32 threads and some not beyond 64. I am trying to find out why they do not scale beyond a certain number of threads. The poor scaling is definitely not a result of a lack of computing resources (i.e., we can run 244 hardware threads without the problem of context switching).

I am trying to analyze this using VTune, but I am still not sure how to study this issue.

1. VTune's Locks and Waits analysis doesn't work on KNC (MIC), so I don't know how to find out whether locks are the issue.

2. Bandwidth? As more threads are spawned, if they use a lot of shared data, cache-coherence traffic can eat up the bandwidth; this can be studied with VTune's core/uncore bandwidth measurements.

I am not sure what else might contribute to the poor scaling. I would appreciate your suggestions for this study.

Thank you.

7 Replies
Patrick_F_Intel1
Employee

Hello Surya,

Can you post your question to the MIC forum at http://software.intel.com/en-us/forums/intel-many-integrated-core ?

You will get quicker, better responses there.

Pat

SergeyKostrov
Valued Contributor II
>>... I am running some multithreaded benchmark programs in Mic. Some programs don't scale beyond 32 threads and some beyond 64...

You need to provide more technical details about these benchmarks and what they do to evaluate performance.
Bernard
Valued Contributor I

What is your benchmark measuring and what calculation does it perform?

Surya_Narayanan_N_

I am doing a coarse-level study without examining what each benchmark does. They are basically PARSEC (regular benchmarks with normal data structures) and the Lonestar benchmarks (irregular benchmarks built on pointer-based data structures, typically graph- or tree-based algorithms).

I am trying to figure out how to measure synchronization overhead on Xeon Phi. Any shared data structures in these benchmarks will create a lot of data transfer, and bandwidth will become the bottleneck (rather than the processing cores) as we increase the number of threads. But can that be the only reason for poor scaling on Xeon Phi? Should I consider synchronization overhead and the bandwidth issue separately, or can a bandwidth study from the core (with the bandwidth formula given in the Xeon Phi book or tutorials) reveal the synchronization effect?

 

McCalpinJohn
Honored Contributor III

For OpenMP codes, the usual approach to estimating synchronization overhead is to run the EPCC OpenMP benchmarks.  These don't reveal the details, but they do provide a common starting point for comparison to other systems.

For Xeon Phi, understanding synchronization overhead is quite difficult because the cache-to-cache intervention latencies for cores that are "close" on the ring vary by a factor of 3 -- from about 130 cycles to almost 400 cycles -- depending on the address being used (which controls the location of the distributed tag directory used to manage the coherence transaction).    The address mapping is not published, but the very low overhead of the RDTSC instruction on Xeon Phi (about 5 cycles) allows one to directly measure the latency of each load independently.  E.g., for any pair of cores one can easily measure the latency for cache-to-cache interventions using a range of addresses to look for "good" ones.

Because of this variability in the coherence protocol (and just to keep the methodology as clean as possible), I recommend studying memory bandwidth and synchronization issues independently. 

SB17
Beginner

Maybe this could be useful? http://spcl.inf.ethz.ch/Publications/.pdf/ramos-hoefler-cc-modeling.pdf

In that paper, though, synchronization is modeled in terms of the lowest-level primitives of the cache-coherence protocol.
