Hello All,
I am using PAPI 5.1.0 to do some performance counter analysis on a Sandy Bridge machine with 2 processors, each with 6 cores.
I am using the following events:
CPU_CLK_UNHALTED
RESOURCE_STALLS:ANY
UOPS_DISPATCHED:STALL_CYCLES
UOPS_ISSUED:STALL_CYCLES
I run 6 copies of a test program (with only 1 copy invoking PAPI) on 6 physically different cores (sharing main memory) using taskset, and get the output below. As can be seen, the dispatch stall cycles and issue stall cycles are greater than the CPU_CLK_UNHALTED cycles. Is this kind of data possible, or am I doing something wrong?
CPU_CLK_UNHALTED, 27494626969
RESOURCE_STALLS:ANY, 23483871000
UOPS_DISPATCHED:STALL_CYCLES, 28114602519
UOPS_ISSUED:STALL_CYCLES, 31941881082
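In case it helps, this is roughly how the event set is created and read (a simplified sketch, not my exact test program; the kernel under test and most error handling are omitted):

#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    /* Native event names as passed to PAPI 5.1.0 */
    const char *names[] = { "CPU_CLK_UNHALTED",
                            "RESOURCE_STALLS:ANY",
                            "UOPS_DISPATCHED:STALL_CYCLES",
                            "UOPS_ISSUED:STALL_CYCLES" };
    long long values[4];
    int i, code, evset = PAPI_NULL;

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
        exit(1);
    if (PAPI_create_eventset(&evset) != PAPI_OK)
        exit(1);

    /* Translate each event name to a code and add it to the event set */
    for (i = 0; i < 4; i++) {
        if (PAPI_event_name_to_code((char *)names[i], &code) != PAPI_OK ||
            PAPI_add_event(evset, code) != PAPI_OK) {
            fprintf(stderr, "could not add %s\n", names[i]);
            exit(1);
        }
    }

    PAPI_start(evset);
    /* ... kernel under test ... */
    PAPI_stop(evset, values);

    for (i = 0; i < 4; i++)
        printf("%s, %lld\n", names[i], values[i]);
    return 0;
}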
Many regards,
Rakhi
Please check this link: http://software.intel.com/en-us/articles/intel-vtune-amplifier-xe-2011-documentation
IIRC, up to 4 uops can be retired per clock cycle. It seems your CPU spent a lot of time waiting; that could be memory stalls, data dependencies, long-latency instructions, or even branch misprediction.
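For instance, from the numbers you posted: RESOURCE_STALLS:ANY / CPU_CLK_UNHALTED = 23483871000 / 27494626969 ≈ 0.85, i.e. roughly 85% of the unhalted cycles had some resource stall, which would be consistent with a memory-bound run.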
Yes! The test program has a lot of memory stalls.
I was referring to Figures 2 and 3 in "Performance Analysis Guide for Intel® Core™ i7 Processor and Intel® Xeon™ 5500 processors" by Dr. David Levinthal, Version 1.0.
I now realize that the model in those figures is a simple, serialized one, whereas the processor is more complicated. Since several instructions can be issued and dispatched in a single clock cycle (~4), are the reported stalls a sum of the stalls on all paths? That would explain why the numbers are greater than expected.
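For example, if the counter added one per empty issue slot rather than one per stalled cycle, a single cycle with 3 empty slots would add 3 to UOPS_ISSUED:STALL_CYCLES but only 1 to CPU_CLK_UNHALTED, which would make the totals come out larger than the cycle count. (That is just my guess at how the counting works.)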
Am I correct?
Thanks again,
Rakhi
The front end reads the binary-encoded instruction stream serially, and the decode stage breaks the machine code instructions down into simpler, more primitive micro-ops. From there the CPU tries to exploit instruction-level parallelism: it keeps its pipelines busy by executing out of order, for example during memory stalls, and it may even prefetch data that will be needed ahead of time.
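As a toy illustration (my own example, nothing to do with your actual test program): in a loop like the one below, the gather load may miss in cache, but the arithmetic on x does not depend on it, so the out-of-order core can keep executing that work while the miss is outstanding.

/* Toy illustration only: a[idx[i]] may stall on memory, but the updates
 * of x form an independent ALU chain, so an out-of-order core can keep
 * its execution ports busy while the miss is being serviced. */
long demo(const long *a, const long *idx, long n)
{
    long sum = 0, x = 1, i;
    for (i = 0; i < n; i++) {
        sum += a[idx[i]];  /* dependent on memory, likely cache misses */
        x = x * 3 + 1;     /* independent work, overlaps the misses     */
    }
    return sum + x;
}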
Ok. I think I understand now. Thanks a ton
Rakhi
You are welcome :)
Btw, can you post the output of the branch-related counters?
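For example (assuming an event set like the sketch earlier in the thread; you may need a separate run if you run out of hardware counters), the standard PAPI preset events for branches would do:

/* Hypothetical addition to the earlier event set: PAPI_BR_INS counts
 * branch instructions, PAPI_BR_MSP counts mispredicted branches. */
PAPI_add_event(evset, PAPI_BR_INS);
PAPI_add_event(evset, PAPI_BR_MSP);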
