<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Jim, this is a very good point in Intel® Moderncode for Parallel Architectures</title>
    <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963053#M5342</link>
    <description>&lt;P&gt;Jim,&lt;BR /&gt;this is a very good point. While I originally got interested in RTM and HLE for the possibility of lower latency uncontended access, preemption is sometimes really bad if not catastrophic as you commented. Thanks for the insight.&lt;/P&gt;
&lt;P&gt;I am still trying to get my head around the performance and latency aspects of HLE. The use case I'm looking at just now is the claim and publish steps of a multi-producer queue. Our current impl uses CAS (cmpxchgq); the prototype HLE version uses a spin lock loosely based on comments in various blog entries and other places (snippets enclosed below, cut and pasted, so I hope I got everything right).&lt;/P&gt;
&lt;P&gt;It feels like I'm missing some important point here (or possibly some really trivial point ;)&lt;BR /&gt;Do you have any ideas as to what can be expected?&lt;/P&gt;
&lt;P&gt;Best,&lt;BR /&gt;Rolf&lt;/P&gt;
&lt;P&gt;[cpp]&lt;/P&gt;
&lt;P&gt;typedef unsigned long u64;&lt;/P&gt;
&lt;P&gt;#define __v64(x) ((volatile u64*) (x))&lt;BR /&gt;#define __HLE_ACQUIRE ".byte 0xf2 ; "&amp;nbsp;&lt;BR /&gt;#define __HLE_RELEASE ".byte 0xf3 ; "&lt;/P&gt;
&lt;P&gt;static inline u64 __ia_cas64 (volatile void* data, u64 curr, u64 next)&lt;BR /&gt;{&lt;BR /&gt; u64 prev;&lt;/P&gt;
&lt;P&gt;asm volatile ("lock;cmpxchgq %1,%2"&lt;BR /&gt; : "=a" (prev) // output&lt;BR /&gt; : "r" (next), "m" (*__v64 (data)), "0" (curr) // inputs&lt;BR /&gt; : "memory");&lt;BR /&gt; return prev;&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline void __hle_lock (volatile void* lock)&lt;BR /&gt;{&lt;BR /&gt; u64 value = 1;&lt;BR /&gt;asm volatile ("1: " __HLE_ACQUIRE "lock; xchgq %0,%1\n"&lt;BR /&gt;" cmpq $0,%0\n" // prev == 0 ?&lt;BR /&gt;" jz 3f\n"&lt;BR /&gt;"2: pause\n" // abort transaction&lt;BR /&gt;" cmpq $1,%1\n" // lock == 1 ?&lt;BR /&gt;" jz 2b\n"&lt;BR /&gt;" jmp 1b\n"&lt;BR /&gt;"3: \n"&lt;BR /&gt;: "+r" (value), "+m" (*__v64 (lock))&lt;BR /&gt;:: "memory");&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline void __hle_unlock (volatile void* lock)&lt;BR /&gt;{&lt;BR /&gt;asm volatile (__HLE_RELEASE "movq $0,%0"&lt;BR /&gt;: "+m" (*__v64 (lock)) :: "memory");&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline u64 __hle_cas64 (volatile void* lock, volatile u64* data,&lt;BR /&gt; u64 curr, u64 next)&lt;BR /&gt;{&lt;BR /&gt; __hle_lock (lock);&lt;BR /&gt; u64 temp = *data;&lt;BR /&gt; &lt;BR /&gt; if (temp == curr)&lt;BR /&gt; *data = next;&lt;/P&gt;
&lt;P&gt;__hle_unlock (lock);&lt;BR /&gt; return temp;&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;[/cpp]&lt;/P&gt;</description>
    <pubDate>Sun, 28 Jul 2013 14:04:00 GMT</pubDate>
    <dc:creator>Rolf_Andersson</dc:creator>
    <dc:date>2013-07-28T14:04:00Z</dc:date>
    <item>
      <title>Overhead of HLE acquire and release</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963045#M5334</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;(also posted as a comment to a blog entry re tsx-tools by Andy)&lt;/P&gt;
&lt;P&gt;I've just started playing around with the new TSX feature set.&lt;/P&gt;
&lt;P&gt;I wrote a quick test with a loop over lock;xchgl and movl with and without HLE prefixes.&lt;BR /&gt;To my surprise, the version with HLE prefixes seems to be ~50% slower?&lt;BR /&gt;Is the test invalid/irrelevant for some reason?&lt;BR /&gt;Am I doing something wrong or is this expected?&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;Rolf&lt;/P&gt;
&lt;P&gt;---&lt;/P&gt;
&lt;P&gt;The test was run on a MacBook Air with an i7-4650U 1.7 GHz (Haswell) CPU&lt;/P&gt;
&lt;P&gt;tsx-tools reports:&lt;BR /&gt;Rolfs-MacBook-Air:tsx-tools ran$ ./has-tsx&lt;BR /&gt;RTM: Yes&lt;BR /&gt;HLE: Yes&lt;BR /&gt;Rolfs-MacBook-Air:tsx-tools ran$&lt;/P&gt;
&lt;P&gt;The code enclosed below was compiled with:&lt;BR /&gt;Rolfs-MacBook-Air:ran ran$ clang -O4 -o tt tt.c -lc&lt;/P&gt;
&lt;P&gt;Rolfs-MacBook-Air:ran ran$ time ./tt 1 100000000&lt;/P&gt;
&lt;P&gt;real 0m1.616s&lt;BR /&gt;user 0m1.612s&lt;BR /&gt;sys 0m0.004s&lt;BR /&gt;Rolfs-MacBook-Air:ran ran$ time ./tt 2 100000000&lt;/P&gt;
&lt;P&gt;real 0m1.063s&lt;BR /&gt;user 0m1.061s&lt;BR /&gt;sys 0m0.002s&lt;BR /&gt;Rolfs-MacBook-Air:ran ran$&lt;/P&gt;
&lt;P&gt;Source code for tt.c is attached.&lt;/P&gt;</description>
      <pubDate>Sat, 27 Jul 2013 15:58:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963045#M5334</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-27T15:58:23Z</dc:date>
    </item>
    <item>
      <title>responding to my own post</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963046#M5335</link>
      <description>&lt;P&gt;responding to my own post with some follow-up info:&lt;/P&gt;
&lt;P&gt;I just ran pcm-tsx.x (from PCM 2.5.1) while executing "tt 1 1000000000" and there were no transactional cycles according to pcm-tsx.&lt;/P&gt;
&lt;P&gt;Any assistance in explaining what is going on would be much appreciated.&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jul 2013 04:01:13 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963046#M5335</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-28T04:01:13Z</dc:date>
    </item>
    <item>
      <title>Hi Rolf,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963047#M5336</link>
      <description>&lt;P&gt;Hi Rolf,&lt;/P&gt;
&lt;P&gt;The HLE mechanism is not well suited to benchmarking with such an unrealistic test. On the 4th generation Intel Core architecture, HLE/RTM/TSX should be used for critical sections that do a &lt;STRONG&gt;useful, non-trivial amount of work&lt;/STRONG&gt; (please use a real application with lock contention to evaluate TSX), usually with a small to moderate level of &lt;STRONG&gt;data&lt;/STRONG&gt; contention. In contrast to small synthetic microbenchmarks with tight loops, in a real application the TSX overheads can be mostly hidden behind the out-of-order execution of the microarchitecture. Section 12.5, "TSX PERFORMANCE GUIDELINES", of the &lt;A href="http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf"&gt;optimization manual&lt;/A&gt; is worth consulting.&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jul 2013 09:00:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963047#M5336</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-28T09:00:00Z</dc:date>
    </item>
    <item>
      <title>Hi Roman, thank you for your</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963048#M5337</link>
      <description>&lt;P&gt;Hi Roman, thank you for your prompt reply.&lt;/P&gt;
&lt;P&gt;I started out trying to use lock elision for one of our applications, but got inconsistent results, so I tried to simplify the code. I ended up with the purely synthetic case that I asked about above. I realize that it is unrealistic, but I'm still curious about the overhead of acquire and release since, if I have understood correctly, the locking part of "lock;xchg" would be elided. So I expected some additional cost for HLE, but at the same time a cost saving for the elided lock. Are latency and throughput numbers available (or planned) for TSX, and is there a way to measure or estimate the savings from the elided lock?&lt;/P&gt;
&lt;P&gt;To my other question about pcm-tsx not showing any transactional cycles, is there some other way to discern that the acquire and release operations have actually been executed?&lt;/P&gt;
&lt;P&gt;Again, thanks for providing feedback.&lt;/P&gt;
&lt;P&gt;Best,&lt;BR /&gt;Rolf&amp;nbsp;&lt;/P&gt;
</description>
      <pubDate>Sun, 28 Jul 2013 09:18:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963048#M5337</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-28T09:18:00Z</dc:date>
    </item>
    <item>
      <title>Rolf,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963049#M5338</link>
      <description>&lt;P&gt;Rolf,&lt;/P&gt;
&lt;P&gt;one of the HLE/TSX goals is to allow &lt;STRONG&gt;concurrent&lt;/STRONG&gt; threads to easily &lt;A href="http://software.intel.com/en-us/blogs/2012/02/07/coarse-grained-locks-and-transactional-synchronization-explained"&gt;avoid unnecessary serialization of critical sections&lt;/A&gt; (see this &lt;A href="http://software.intel.com/en-us/blogs/2012/02/07/coarse-grained-locks-and-transactional-synchronization-explained"&gt;blog&lt;/A&gt;) and serialization in the lock internals themselves (think of the hardware serialization on the atomic increment/decrement of the counter holding the number of concurrent readers in typical read-write lock implementations: it can easily be avoided with an RTM wrapper around the RW-lock - see Chapter 12 of the optimization manual for examples). You can apply Amdahl's-law reasoning when estimating the potential performance benefit of TSX in your application.&lt;/P&gt;
&lt;P&gt;Please post the pcm-tsx output here.&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jul 2013 11:35:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963049#M5338</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-28T11:35:00Z</dc:date>
    </item>
    <item>
      <title>Roman, my hypothesis was that</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963050#M5339</link>
      <description>&lt;P&gt;Roman,&lt;BR /&gt;my hypothesis was that "lock;xchg" would incur coherence traffic and thus a number of cycles of latency (~access to LLC?), and that the lock elision operation somehow would hide that latency (or part thereof). I may have misunderstood how lock elision works.&lt;/P&gt;
&lt;P&gt;Re concurrent threads - the naive test case I wrote would essentially cover a situation with zero contention. I would have thought that the elision mechanism would work the same irrespective of the fact that there is zero contention, with the exception that there would be no txn aborts?&lt;/P&gt;
&lt;P&gt;Output from pcm-tsx follows below.&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;Rolf&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.71 236 M 335 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.31 15 M 51 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.59 166 M 282 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.25 13 M 51 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.60 432 M 721 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.38 99 M 262 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.25 12 M 51 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.38 90 M 241 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.26 10 M 41 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.36 213 M 596 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.56 142 M 255 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.52 47 M 90 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.15 276 M 1891 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.54 32 M 59 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.22 498 M 2297 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.39 92 M 236 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.61 118 M 192 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.10 617 M 6338 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.44 39 M 88 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.13 867 M 6855 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.17 122 M 721 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.87 193 M 223 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.10 574 M 5862 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.58 76 M 130 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.14 966 M 6939 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.17 173 M 1038 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.40 44 M 110 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.10 559 M 5622 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.83 62 M 75 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.12 840 M 6846 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.10 630 M 6499 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.21 2483 K 12 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.41 58 M 144 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.90 196 M 217 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.13 887 M 6873 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.10 630 M 6498 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.24 2905 K 12 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.33 45 M 135 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 1.00 219 M 218 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.13 897 M 6864 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.10 629 M 6463 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.49 33 M 67 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.63 136 M 217 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.74 151 M 204 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.14 950 M 6952 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.10 627 M 6473 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.34 16 M 48 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.61 89 M 147 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.71 72 M 101 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.12 806 M 6770 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.14 644 M 4543 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.70 204 M 291 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.57 201 M 352 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.43 144 M 336 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.22 1194 M 5525 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1502 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.62 144 M 232 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.27 12 M 45 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.60 123 M 204 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.38 18 M 48 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.56 298 M 530 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.54 146 M 268 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.56 40 M 72 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.54 132 M 246 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.34 17 M 51 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.53 336 M 638 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Time elapsed: 1501 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.46 97 M 210 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.25 8685 K 34 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.39 77 M 201 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.23 10 M 44 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.39 193 M 490 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
</description>
      <pubDate>Sun, 28 Jul 2013 11:52:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963050#M5339</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-28T11:52:24Z</dc:date>
    </item>
    <item>
      <title>further to my post above,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963051#M5340</link>
      <description>&lt;P&gt;further to my post above, here is another run:&lt;/P&gt;
&lt;P&gt;(tt.sh does ./tt 1 1000000000)&lt;/P&gt;
&lt;P&gt;ring ran$ pcm-tsx.x ./tt.sh&lt;/P&gt;
&lt;P&gt;Intel(r) Performance Counter Monitor: Intel(r) Transactional Synchronization Extensions Monitoring Utility&lt;/P&gt;
&lt;P&gt;Copyright (c) 2013 Intel Corporation&lt;/P&gt;
&lt;P&gt;Num logical cores: 4&lt;BR /&gt;Num sockets: 1&lt;BR /&gt;Threads per core: 2&lt;BR /&gt;Core PMU (perfmon) version: 3&lt;BR /&gt;Number of core PMU generic (programmable) counters: 4&lt;BR /&gt;Width of generic (programmable) counters: 48 bits&lt;BR /&gt;Number of core PMU fixed counters: 3&lt;BR /&gt;Width of fixed counters: 48 bits&lt;BR /&gt;Nominal core frequency: 3066666659 Hz&lt;/P&gt;
&lt;P&gt;Detected Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz "Intel(r) microarchitecture codename unknown"&lt;BR /&gt;Update every 0 seconds&lt;/P&gt;
&lt;P&gt;Executing "./tt.sh" command:&lt;/P&gt;
&lt;P&gt;Exit code: 0&lt;/P&gt;
&lt;P&gt;Time elapsed: 11983 ms&lt;BR /&gt;Core | IPC | Instructions | Cycles | Transactional Cycles | Aborted Cycles | #RTM | #HLE | Cycles/Transaction &lt;BR /&gt; 0 0.12 3062 M 24 G 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 1 0.48 421 M 880 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 2 0.11 3117 M 28 G 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt; 3 0.40 523 M 1310 M 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;BR /&gt;-------------------------------------------------------------------------------------------------------------------&lt;BR /&gt; * 0.13 7124 M 55 G 0 ( 0.00%) 0 ( 0.00%) 0 0 N/A&lt;/P&gt;
&lt;P&gt;Cleaning up&lt;BR /&gt;ring ran$&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jul 2013 12:54:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963051#M5340</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-28T12:54:55Z</dc:date>
    </item>
    <item>
      <title>Rolf,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963052#M5341</link>
      <description>&lt;P&gt;Rolf,&lt;/P&gt;
&lt;P&gt;One of the beneficial characteristics of HLE is that it makes your code immune to the annoying (catastrophic?) problem of the lock-holder thread being preempted by the O/S for an interrupt or context switch, thus blocking other threads' entry for the duration of the interrupt/preemption. This cannot happen with HLE: the preemption undoes the transaction while permitting other threads to pass through the transactional section of code. IOW, this alleviates the necessity of writing a wait-free algorithm when such preemption avoidance becomes necessary.&lt;/P&gt;
&lt;P&gt;Jim Dempsey&lt;/P&gt;
</description>
      <pubDate>Sun, 28 Jul 2013 13:28:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963052#M5341</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2013-07-28T13:28:48Z</dc:date>
    </item>
    <item>
      <title>Jim, this is a very good point</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963053#M5342</link>
      <description>&lt;P&gt;Jim,&lt;BR /&gt;this is a very good point. While I originally got interested in RTM and HLE for the possibility of lower latency uncontended access, preemption is sometimes really bad if not catastrophic as you commented. Thanks for the insight.&lt;/P&gt;
&lt;P&gt;I am still trying to get my head around the performance and latency aspects of HLE. The use case I'm looking at just now is the claim and publish steps of a multi-producer queue. Our current impl uses CAS (cmpxchgq); the prototype HLE version uses a spin lock loosely based on comments in various blog entries and other places (snippets enclosed below, cut and pasted, so I hope I got everything right).&lt;/P&gt;
&lt;P&gt;It feels like I'm missing some important point here (or possibly some really trivial point ;)&lt;BR /&gt;Do you have any ideas as to what can be expected?&lt;/P&gt;
&lt;P&gt;Best,&lt;BR /&gt;Rolf&lt;/P&gt;
&lt;P&gt;[cpp]&lt;/P&gt;
&lt;P&gt;typedef unsigned long u64;&lt;/P&gt;
&lt;P&gt;#define __v64(x) ((volatile u64*) (x))&lt;BR /&gt;#define __HLE_ACQUIRE ".byte 0xf2 ; "&amp;nbsp;&lt;BR /&gt;#define __HLE_RELEASE ".byte 0xf3 ; "&lt;/P&gt;
&lt;P&gt;static inline u64 __ia_cas64 (volatile void* data, u64 curr, u64 next)&lt;BR /&gt;{&lt;BR /&gt; u64 prev;&lt;/P&gt;
&lt;P&gt;asm volatile ("lock;cmpxchgq %1,%2"&lt;BR /&gt; : "=a" (prev) // output&lt;BR /&gt; : "r" (next), "m" (*__v64 (data)), "0" (curr) // inputs&lt;BR /&gt; : "memory");&lt;BR /&gt; return prev;&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline void __hle_lock (volatile void* lock)&lt;BR /&gt;{&lt;BR /&gt; u64 value = 1;&lt;BR /&gt;asm volatile ("1: " __HLE_ACQUIRE "lock; xchgq %0,%1\n"&lt;BR /&gt;" cmpq $0,%0\n" // prev == 0 ?&lt;BR /&gt;" jz 3f\n"&lt;BR /&gt;"2: pause\n" // abort transaction&lt;BR /&gt;" cmpq $1,%1\n" // lock == 1 ?&lt;BR /&gt;" jz 2b\n"&lt;BR /&gt;" jmp 1b\n"&lt;BR /&gt;"3: \n"&lt;BR /&gt;: "+r" (value), "+m" (*__v64 (lock))&lt;BR /&gt;:: "memory");&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline void __hle_unlock (volatile void* lock)&lt;BR /&gt;{&lt;BR /&gt;asm volatile (__HLE_RELEASE "movq $0,%0"&lt;BR /&gt;: "+m" (*__v64 (lock)) :: "memory");&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;static inline u64 __hle_cas64 (volatile void* lock, volatile u64* data,&lt;BR /&gt; u64 curr, u64 next)&lt;BR /&gt;{&lt;BR /&gt; __hle_lock (lock);&lt;BR /&gt; u64 temp = *data;&lt;BR /&gt; &lt;BR /&gt; if (temp == curr)&lt;BR /&gt; *data = next;&lt;/P&gt;
&lt;P&gt;__hle_unlock (lock);&lt;BR /&gt; return temp;&lt;BR /&gt;}&lt;/P&gt;
&lt;P&gt;[/cpp]&lt;/P&gt;</description>
      <pubDate>Sun, 28 Jul 2013 14:04:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963053#M5342</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-28T14:04:00Z</dc:date>
    </item>
    <item>
      <title>Quote:</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963054#M5343</link>
      <description>&lt;BLOCKQUOTE&gt;
&lt;P&gt;my hypothesis was that "lock;xchg" would incur coherence traffic and thus a number of cycles of latency (~access to LLC?), and that the lock elision operation somehow would hide that latency (or part thereof). I may have misunderstood how lock elision works.&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;in your single-threaded (non-HLE) baseline test the cache line with the lock is always kept in the local L1 cache, so there are no LLC accesses or other expensive cache misses.&lt;/P&gt;
&lt;P&gt;But if you run the non-HLE baseline on many cores, then the xchgl accesses to the lock will experience cache misses, since other cores will often have a more recent version of the cache line holding the lock word. That more recent copy must be transferred to your core with "write permission" before the state of the lock can be modified. This is sometimes referred to as coherency cache misses, or lock cache-line transfer/shipping overhead. With HLE, the lock-word modification is elided (and not seen by other cores). The XACQUIRE xchgl operation does not issue the "write permission" request to other cores, so there are no coherency cache misses and no lock-shipping overhead.&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 13:18:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963054#M5343</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-29T13:18:29Z</dc:date>
    </item>
    <item>
      <title>Quote:</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963055#M5344</link>
      <description>&lt;BLOCKQUOTE&gt;
&lt;P&gt;I am still trying to get my head around the performance and latency aspects of HLE. The use case I'm looking at just now is the claim and publish steps of a multi-producer queue. Our current impl uses CAS (cmpxchgq), the prototype HLE version uses a spin lock loosely based on comments in various blog entries and other places (snippets enclosed below, cut and pasted, so I hope I got everything right).&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;I must say the idea of emulating CAS 1-to-1 using TSX is not a good one.&lt;/P&gt;
&lt;P&gt;Essentially, Intel TSX exists to let developers avoid the hard and error-prone work of expressing their higher-level algorithms and data-structure operations in terms of low-level CAS to achieve a high degree of concurrency. Instead, developers can write normal implementations using plain memory loads and stores (as generated by any compiler by default) and pack their bigger, higher-level data operations into TSX critical sections.&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 13:29:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963055#M5344</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-29T13:29:00Z</dc:date>
    </item>
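Roman's suggestion can be sketched concretely: instead of emulating a single CAS, cover a whole multi-word operation with one transactional region, keeping a conventional lock as the fallback. Everything below (the names, the two-field update, the spin lock) is illustrative and not code from the thread; it uses the RTM intrinsics from immintrin.h and takes the lock when the CPU lacks TSX or the transaction aborts.

```c
/* Illustrative sketch (not from the thread): one RTM transaction covers a
 * two-field update that a single CAS could not, with a spin-lock fallback
 * path for CPUs without TSX or for aborted transactions. */
#include <assert.h>
#include <immintrin.h>

static volatile int fallback_lock = 0;
static long head = 0, tail = 0;   /* e.g. the queue's claim/publish fields */

static void lock_acquire(volatile int *l)
{
    while (__sync_lock_test_and_set(l, 1))
        while (*l)
            _mm_pause();
}

static void lock_release(volatile int *l)
{
    __sync_lock_release(l);
}

/* The target attribute lets this compile without -mrtm on the command line. */
__attribute__((target("rtm")))
static int try_rtm_update(void)
{
    if (_xbegin() == _XBEGIN_STARTED) {
        if (fallback_lock)      /* subscribe to the lock; abort if it is held */
            _xabort(0xff);
        head += 1;
        tail += 1;
        _xend();
        return 1;
    }
    return 0;                   /* transaction aborted; caller takes the lock */
}

static void update(void)
{
    if (__builtin_cpu_supports("rtm") && try_rtm_update())
        return;
    lock_acquire(&fallback_lock);
    head += 1;
    tail += 1;
    lock_release(&fallback_lock);
}
```

On hardware without TSX every update simply goes through the lock, so the code degrades gracefully rather than faulting.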
    <item>
      <title>[block]</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963056#M5345</link>
      <description>&lt;BLOCKQUOTE&gt;
&lt;P&gt;To my other question about pcm-tsx not showing any transactional cycles, is there some other way to discern that the acquire and release operations have actually been executed?&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;
&lt;P&gt;please re-run pcm-tsx with the patch I provided earlier for the MacBook Air. On my system with an Intel(r) Core(tm) i7-4770 I see 100M HLE starts (#HLE column) and about 68% transactional cycles in the pcm-tsx output for your microbenchmark.&lt;/P&gt;
&lt;P&gt;[bash]&lt;/P&gt;
&lt;P&gt;pcm-tsx.x "./tt 1 100000000"&lt;/P&gt;
&lt;P&gt;[/bash]&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 13:38:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963056#M5345</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-29T13:38:00Z</dc:date>
    </item>
    <item>
      <title>Roman,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963057#M5346</link>
      <description>&lt;P&gt;Roman,&lt;/P&gt;
&lt;P&gt;I get similar results with the patch applied.&lt;BR /&gt;Thanks for your assistance.&lt;/P&gt;
&lt;P&gt;/Rolf&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 18:02:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963057#M5346</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-29T18:02:58Z</dc:date>
    </item>
    <item>
      <title>&lt;blockquote&gt;in your single</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963058#M5347</link>
      <description>&lt;P&gt;&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;in your single threaded (non-HLE)&amp;nbsp;baseline test the cache line with the lock is always kept in the L1 local cache, therefore there is no LLC accesses or other expensive cache misses.&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;That seems inconsistent with the numbers I get in my simple tests. The latency of the "xchg" instruction (which is implicitly locked) appears much higher than that of other instructions with a memory operand; I had assumed this was due to the locking, i.e. an LLC/bus lock.&lt;/P&gt;
&lt;P&gt;I added two more test cases to tt.c, loops with:&lt;/P&gt;
&lt;P&gt;"addl $1,%0" where %0 is a local stack address&lt;/P&gt;
&lt;P&gt;"lock;addl $1, %0" where %0 is also a local stack address&lt;/P&gt;
&lt;P&gt;I get 1.9 ns per iteration for the case without the lock and 6.0 ns with it (the "xchg %0,%1" case yields ~10 ns).&lt;BR /&gt;That corresponds to deltas of roughly 4 ns and 8 ns respectively, and I am currently unable to explain the difference.&lt;/P&gt;
&lt;P&gt;I assume the locked operation has some extra memory-access overhead, though I'm not entirely clear on what is going on.&lt;/P&gt;
&lt;P&gt;I'd very much appreciate it if someone could shed some light on this.&lt;/P&gt;
&lt;P&gt;/Rolf&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 18:27:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963058#M5347</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-29T18:27:00Z</dc:date>
    </item>
    <item>
      <title>Roman and Rolf,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963059#M5348</link>
      <description>&lt;P&gt;Roman and Rolf,&lt;/P&gt;
&lt;P&gt;I do not have a processor here for testing, but I can make an observation and a suggestion. Roman can counter the observation if it is wrong.&lt;/P&gt;
&lt;P&gt;In Rolf's __hle_cas64 he calls __hle_lock, which uses __HLE_ACQUIRE on a lock; xchgq ...&lt;BR /&gt;I believe that, within the __HLE_ACQUIRE, the lock; xchg... is unnecessarily bogging down the pipeline.&lt;/P&gt;
&lt;P&gt;I think (Roman, please correct me if I am wrong)&amp;nbsp;the __HLE_ACQUIRE-protected region would be better served by using BTS _without_ LOCK.&lt;/P&gt;
&lt;P&gt;Rolf, it should be easy enough for you to setup a diagnostic to verify this.&lt;/P&gt;
&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jul 2013 19:48:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963059#M5348</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2013-07-29T19:48:18Z</dc:date>
    </item>
    <item>
      <title>Jim,</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963060#M5349</link>
      <description>&lt;P&gt;Jim,&lt;/P&gt;
&lt;P&gt;I did some more tests; the LOCK prefix seems (as expected from reading the docs) to be required for HLE to kick in. This makes sense: if the transaction aborts, HLE re-executes the instruction without elision, and then the LOCK is needed.&lt;/P&gt;
&lt;P&gt;BTS with a LOCK prefix gives pretty much identical execution times compared to the XCHG version in the uncontended case.&lt;/P&gt;
&lt;P&gt;Let me know if you would like me to post a new version of the test rig.&lt;/P&gt;
&lt;P&gt;Best,&lt;BR /&gt;Rolf&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2013 05:58:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963060#M5349</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-30T05:58:29Z</dc:date>
    </item>
    <item>
      <title>Quote:</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963061#M5350</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;This would be inconsistent with the numbers I get in the simple tests? The latency of the "xchg" instruction seems to indicate that there is a much higher latency than for other insns with a memory target. I thought this was due to locking -&amp;gt; LLC cache lockup?&lt;/P&gt;
&lt;P&gt;I added two more test cases to tt.c, loops with:&lt;/P&gt;
&lt;P&gt;"addl $1,%0" where %0 is a local stack address&lt;/P&gt;
&lt;P&gt;"lock;addl $1, %0" where %0 is also a local stack address&lt;/P&gt;
&lt;P&gt;I get 1.9 ns per iteration for the case without lock and 6.0 ns with the lock (the "xchg %0,%1" case yields ~10 ns)&lt;BR /&gt;this corresponds to 4ns and 8ns delta respectively, but I am currently unable to explain this difference ...&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;LOCK-prefixed instructions carry an overhead compared to their unprefixed forms even when there are no cache misses. On the Haswell microarchitecture, LOCK-prefixed instructions take at least ~12 cycles because of it (Sandy Bridge needed at least 16 cycles per LOCK).&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;I assume that the lock operation has some extra memory access overhead, even though I'm not entirely clear what is going on.&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;You can count the L2 and LLC cache hits and misses using pcm.x and compare it with the number of iterations in your test. I think it will be a very low count.&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2013 10:32:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963061#M5350</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-30T10:32:36Z</dc:date>
    </item>
    <item>
      <title>Roman, thx for the feedback.</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963062#M5351</link>
      <description>&lt;P&gt;Roman, thx for the feedback. I will certainly have a look att using pcm for cache traffic instrumentation.&lt;/P&gt;
&lt;P&gt;You mentioned the overhead/latency of LOCK prefixing;&lt;BR /&gt;Is there any info available on the overhead/latency of HLE_ACQUIRE and HLE_RELEASE?&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;Rolf&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2013 10:43:16 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963062#M5351</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-30T10:43:16Z</dc:date>
    </item>
    <item>
      <title>yes. Please see the</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963063#M5352</link>
      <description>&lt;P&gt;yes. Please see the discussion about XBEGIN/XEND/XACQUIRE/XRELEASE&amp;nbsp;latencies and overheads in Section 12.5 of the Intel Architecture Optimization Manual.&lt;/P&gt;
&lt;P&gt;Roman&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2013 10:55:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963063#M5352</guid>
      <dc:creator>Roman_D_Intel</dc:creator>
      <dc:date>2013-07-30T10:55:33Z</dc:date>
    </item>
    <item>
      <title>it was that section that</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963064#M5353</link>
      <description>&lt;P&gt;it was that section that sparked my initial interest, specifically tuning suggestion 33:&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Tuning Suggestion 33. Intel TSX is designed for critical sections and thus the latency profiles of the XBEGIN/XEND instructions and XACQUIRE/XRELEASE prefixes are intended to match the LOCK prefixed instructions. These instructions should not be expected to have the latency of a regular load operation.&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;My initial tests were written to check whether XACQUIRE/XRELEASE yield an overhead comparable to LOCK. So far the overhead does not appear comparable, but I will run some more tests to see if I can find an explanation.&lt;/P&gt;
&lt;P&gt;/Rolf&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Jul 2013 11:03:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Overhead-of-HLE-acquire-and-release/m-p/963064#M5353</guid>
      <dc:creator>Rolf_Andersson</dc:creator>
      <dc:date>2013-07-30T11:03:00Z</dc:date>
    </item>
  </channel>
</rss>

