<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Most efficient way for atomic updates on Xeon Phi in Software Archive</title>
    <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997345#M28677</link>
    <description>&lt;P&gt;You cannot use a "LOCK, add to memory" as performed by __sync_fetch_and_add, though you can perform something like:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;do {
  float temp = array[i];
  float result = temp + 1.0f;
} while(!CAS(&amp;amp;array[i], temp, result));&lt;/PRE&gt;

&lt;P&gt;The best way is to partition the code such that no two threads will simultaneously update the same location within the array.&lt;/P&gt;

&lt;P&gt;Reference: &lt;A href="http://en.wikipedia.org/wiki/Compare-and-swap"&gt;http://en.wikipedia.org/wiki/Compare-and-swap&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
    <pubDate>Mon, 21 Apr 2014 16:59:00 GMT</pubDate>
    <dc:creator>jimdempseyatthecove</dc:creator>
    <dc:date>2014-04-21T16:59:00Z</dc:date>
    <item>
      <title>Most efficient way for atomic updates on Xeon Phi</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997344#M28676</link>
      <description>&lt;P&gt;I have found out that __kmpc_atomic_float4_add was used in the assembly code of the following two lines:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;#pragma omp atomic
array[i] += 1.0;&lt;/PRE&gt;

&lt;P&gt;Performance of this code is not good on Intel Xeon Phi when many threads are used. Is there any information about how __kmpc_atomic_float4_add is implemented? Are there any better solutions for efficient and scalable atomic updates? Is it possible to use GCC intrinsics such as __sync_add_and_fetch() in offload regions?&lt;/P&gt;</description>
      <pubDate>Mon, 21 Apr 2014 08:58:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997344#M28676</guid>
      <dc:creator>kadir</dc:creator>
      <dc:date>2014-04-21T08:58:42Z</dc:date>
    </item>
    <item>
      <title>You cannot use a "LOCK, add</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997345#M28677</link>
      <description>&lt;P&gt;You cannot use a "LOCK, add to memory" as performed by __sync_fetch_and_add, though you can perform something like:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;do {
  float temp = array[i];
  float result = temp + 1.0f;
} while(!CAS(&amp;amp;array[i], temp, result));&lt;/PRE&gt;

&lt;P&gt;The best way is to partition the code such that no two threads will simultaneously update the same location within the array.&lt;/P&gt;

&lt;P&gt;Reference: &lt;A href="http://en.wikipedia.org/wiki/Compare-and-swap"&gt;http://en.wikipedia.org/wiki/Compare-and-swap&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Mon, 21 Apr 2014 16:59:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997345#M28677</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-04-21T16:59:00Z</dc:date>
    </item>
    <item>
      <title>Is there any information</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997346#M28678</link>
      <description>&lt;BLOCKQUOTE&gt;
	&lt;P&gt;&lt;SPAN style="font-size: 12px; line-height: 18px;"&gt;Is there any information about how __kmpc_atomic_float4_add is implemented?&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;/BLOCKQUOTE&gt;

&lt;P&gt;Sure, the whole of the OpenMP* runtime sources are available (either from &lt;A href="http://openmprtl.org" target="_blank"&gt;http://openmprtl.org&lt;/A&gt; or &lt;A href="http://openmp.llvm.org" target="_blank"&gt;http://openmp.llvm.org&lt;/A&gt;), so you can see&amp;nbsp;&lt;STRONG&gt;exactly&amp;nbsp;&lt;/STRONG&gt;how they are implemented (which is effectively as Jim D describes).&lt;/P&gt;</description>
      <pubDate>Tue, 22 Apr 2014 10:34:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997346#M28678</guid>
      <dc:creator>James_C_Intel2</dc:creator>
      <dc:date>2014-04-22T10:34:41Z</dc:date>
    </item>
    <item>
      <title>Kadir,</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997347#M28679</link>
      <description>&lt;P&gt;Kadir,&lt;/P&gt;

&lt;P&gt;You might want to look at using reduction variables and syntax as used by OpenMP&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;double sum = 0.0;
// sum is private within parallel region
// ** However, upon exit of parallel region operator(+) performed on outer scope sum
// this operation is performed in a thread-safe manner
#pragma omp parallel for reduction(+:sum)
for(int i=0; i &amp;lt; N; ++i) {
  sum += a[i];
}&lt;/PRE&gt;

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Tue, 22 Apr 2014 12:50:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997347#M28679</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-04-22T12:50:50Z</dc:date>
    </item>
    <item>
      <title>Dear Jim,</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997348#M28680</link>
      <description>&lt;P&gt;Dear Jim,&lt;/P&gt;

&lt;P&gt;I have to perform a reduction on an array. Sorry for the example I gave; it does not reflect my real need. What is the most efficient way to reduce multiple arrays into one array in parallel on the MIC architecture? I am using C/C++, not Fortran.&lt;/P&gt;</description>
      <pubDate>Thu, 24 Apr 2014 06:29:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997348#M28680</guid>
      <dc:creator>kadir</dc:creator>
      <dc:date>2014-04-24T06:29:50Z</dc:date>
    </item>
    <item>
      <title>Divide the output array into</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997349#M28681</link>
      <description>&lt;P&gt;Divide the output array into sections (often called tiles) and have only one thread write to any one section. This way you will not require atomics or locks.&lt;/P&gt;

&lt;P&gt;If the work per cell in the output array is relatively uniform, then for N threads make N tiles (i.e. static partitioning and scheduling).&lt;/P&gt;

&lt;P&gt;If work per cell varies, then consider more partitions and dynamic scheduling.&lt;/P&gt;
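
&lt;P&gt;A minimal sketch of the tiling idea above, assuming an OpenMP compiler such as icc or gcc (the names out, in, N, and nInputs are hypothetical placeholders):&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;#include &amp;lt;omp.h&amp;gt;

// Reduce nInputs arrays in[k][0..N) into out[0..N) without atomics:
// each thread owns one contiguous tile of out, so no two threads
// ever write to the same location.
void tiled_reduce(float *out, float **in, int N, int nInputs)
{
  #pragma omp parallel
  {
    int nThreads = omp_get_num_threads();
    int tid = omp_get_thread_num();
    int begin = (int)((long long)N * tid / nThreads);       // tile start
    int end   = (int)((long long)N * (tid + 1) / nThreads); // tile end
    for (int k = 0; k &amp;lt; nInputs; ++k)
      for (int i = begin; i &amp;lt; end; ++i)
        out[i] += in[k][i]; // only this thread writes out[begin..end)
  }
}&lt;/PRE&gt;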

&lt;P&gt;For some situations consider a plesiochronous phasing barrier. Here is an article I wrote&amp;nbsp;(&lt;A href="https://software.intel.com/en-us/blogs/2014/02/22/the-chronicles-of-phi-part-5-plesiochronous-phasing-barrier-tiled-ht3"&gt;https://software.intel.com/en-us/blogs/2014/02/22/the-chronicles-of-phi-part-5-plesiochronous-phasing-barrier-tiled-ht3&lt;/A&gt;), or some variation thereupon. You might want to read the first 4 parts of that blog series to give you some background insight into the problem, solution, problem, solution, ... iterations that led to the final solution.&lt;/P&gt;

&lt;P&gt;As with most optimization efforts, you will find that some of the promising steps you take at the beginning of the process&amp;nbsp;yield less than the expected results. Trying to understand why this happened (or did not happen) leads you to an improved path to a solution. I am of the philosophy that it is better to teach someone how to figure it out as opposed to telling them the (a) solution. If you did your teaching right, then the student may outperform the teacher.&lt;/P&gt;

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Thu, 24 Apr 2014 12:53:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997349#M28681</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-04-24T12:53:46Z</dc:date>
    </item>
    <item>
      <title>I forgot to mention. Be</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997350#M28682</link>
      <description>&lt;P&gt;I forgot to mention. Be mindful that the strength of the Xeon Phi is not necessarily with the number of cores and hardware threads. Its real strength lies in the wide vector units (64 bytes, 16 floats, 8 doubles).&lt;/P&gt;

&lt;P&gt;Keep this in mind such that your partitioning scheme favors vectorization. This may also affect how you collect the&amp;nbsp;input data and/or its layout.&lt;/P&gt;

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Thu, 24 Apr 2014 12:58:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997350#M28682</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-04-24T12:58:39Z</dc:date>
    </item>
    <item>
      <title>float temp;</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997351#M28683</link>
      <description>&lt;P&gt;I was not able to compile the following code using `icc`:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;float temp;
float result;
do {
  temp = array[i];
  result = temp + 1.0f;
} while(!CAS(&amp;amp;array[i], temp, result));&lt;/PRE&gt;

&lt;P&gt;I have found &lt;A href="https://software.intel.com/en-us/node/506125"&gt;a solution in C++&lt;/A&gt;. However, I am using the C language. Are there any solutions in C?&lt;/P&gt;</description>
      <pubDate>Thu, 22 May 2014 09:37:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997351#M28683</guid>
      <dc:creator>kadir</dc:creator>
      <dc:date>2014-05-22T09:37:49Z</dc:date>
    </item>
    <item>
      <title>You will have to write your</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997352#M28684</link>
      <description>&lt;P&gt;You will have to write your own CAS (Compare And Swap). When your compiler error&amp;nbsp;indicated a missing function named CAS, your first course of action should be to perform a web search for "CAS" as it relates to computer programming. You will find that CAS is Compare And Swap. This is an abstract function name used in computer programming papers. Various compilers have differently named functions, with different argument orders and return values. There are usually flavors of CAS for byte, word, dword, qword, and dqword. Not all processors support all the different word lengths. There is a similar abstract function DCAS (Double Compare And Swap), and various other functions.&lt;/P&gt;

&lt;P&gt;Here are some of the functions you might use:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://stackoverflow.com/questions/2975485/atomic-swap-with-cas-using-gcc-sync-builtins"&gt;http://stackoverflow.com/questions/2975485/atomic-swap-with-cas-using-gcc-sync-builtins&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;&lt;A href="http://msdn.microsoft.com/en-us/library/windows/desktop/ms683560(v=vs.85).aspx"&gt;http://msdn.microsoft.com/en-us/library/windows/desktop/ms683560(v=vs.85).aspx&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;&lt;A href="http://msdn.microsoft.com/en-us/library/windows/desktop/ms683562(v=vs.85).aspx"&gt;http://msdn.microsoft.com/en-us/library/windows/desktop/ms683562(v=vs.85).aspx&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Using "float" you would select the function that uses a dword (4 bytes).&lt;/P&gt;

&lt;P&gt;It is your responsibility to ensure that the (destination)&amp;nbsp;variable being swapped is located in RAM and holds the most recently written value. This may require memory barriers and/or volatile attributes. You do not want the compiler to optimize away your intended function.&lt;/P&gt;

&lt;P&gt;Note, CAS is not functional for values stored in SSE/AVX/AVX2/AVX512 registers. Computational results using SSE/AVX/AVX2/AVX512 will have to be stored into a local float, preferably with volatile, such that it can be fetched into a GP register.&lt;/P&gt;
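
&lt;P&gt;A minimal sketch in C, assuming the GCC __sync builtins are available (the helper name atomic_add_float is hypothetical; the dword (4-byte) flavor matches float):&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;#include &amp;lt;stdint.h&amp;gt;
#include &amp;lt;string.h&amp;gt;

// Atomically add inc to *dest by looping on a dword compare-and-swap.
// The float values are stored into locals, then their bit patterns are
// fetched as uint32_t (no float-to-int value conversion).
static void atomic_add_float(volatile float *dest, float inc)
{
  uint32_t oldBits, newBits;
  do {
    float oldVal = *dest;        // most recently written value
    float newVal = oldVal + inc; // compute the update in a local float
    memcpy(&amp;amp;oldBits, &amp;amp;oldVal, sizeof oldBits); // reinterpret bits
    memcpy(&amp;amp;newBits, &amp;amp;newVal, sizeof newBits);
  } while (!__sync_bool_compare_and_swap(
               (volatile uint32_t *)dest, oldBits, newBits));
}&lt;/PRE&gt;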

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Thu, 22 May 2014 12:59:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997352#M28684</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-05-22T12:59:08Z</dc:date>
    </item>
    <item>
      <title>The link you found is fine.</title>
      <link>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997353#M28685</link>
      <description>&lt;P&gt;The link you found is fine. You will need to include the TBB header file.&lt;/P&gt;

&lt;P&gt;Note, the example shown in the link to TBB was using int (4 bytes) and, as written, would not be suitable for float. The "o = x" will perform a float-to-int conversion. You would have to modify the code to store the float, then reinterpret-cast to fetch the bit pattern as an int.&lt;/P&gt;
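
&lt;P&gt;A hypothetical C fragment illustrating that difference (memcpy used as the reinterpretation; the variable names are made up):&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;#include &amp;lt;string.h&amp;gt;

float f = 1.0f;
int converted = (int)f;               // value conversion: converted == 1
int bits;
memcpy(&amp;amp;bits, &amp;amp;f, sizeof bits); // bit pattern of 1.0f: 0x3F800000&lt;/PRE&gt;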

&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Thu, 22 May 2014 13:12:23 GMT</pubDate>
      <guid>https://community.intel.com/t5/Software-Archive/Most-efficient-way-for-atomic-updates-on-Xeon-Phi/m-p/997353#M28685</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2014-05-22T13:12:23Z</dc:date>
    </item>
  </channel>
</rss>

