<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic "In general MKL routines" in Intel® Moderncode for Parallel Architectures</title>
    <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118965#M7549</link>
    <description>&lt;P&gt;In general, MKL routines perform best with one thread per core on Intel Xeon processors. Just set KMP_AFFINITY=scatter, and if the prospect of MKL generating additional threads inside a parallel region is troubling, temporarily set the MKL thread count to 1 with mkl_set_num_threads().&lt;/P&gt;</description>
    <pubDate>Thu, 01 Jun 2017 19:53:12 GMT</pubDate>
    <dc:creator>Gregg_S_Intel</dc:creator>
    <dc:date>2017-06-01T19:53:12Z</dc:date>
    <item>
      <title>openmp nested parallelism</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118962#M7546</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I am trying to understand how to specify thread affinity in the case of nested parallelism. I am not sure whether I can use KMP_AFFINITY in this case. I have two levels of parallelism. At the first level, I have a parallel loop, and I would like each thread of this loop to run on a different processor (I have 10 proc. per core); this corresponds to the scatter type. Inside the parallel loop I am using multithreaded OpenMP MKL routines, and for MKL I need to use compact. This is a beginner question, but what is the way to get this result? Also, to make things a little more complicated, I am using MKL outside a parallel region, before the loop, which means I need to change the affinity inside my code.&lt;/P&gt;

&lt;P&gt;Thanks for helping,&lt;/P&gt;

&lt;P&gt;Marc&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 May 2017 21:27:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118962#M7546</guid>
      <dc:creator>marcsolal</dc:creator>
      <dc:date>2017-05-31T21:27:25Z</dc:date>
    </item>
    <item>
      <title>MKL attempts to detect this</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118963#M7547</link>
      <description>&lt;P&gt;MKL attempts to detect this scenario and choose an optimal number of threads automatically.&lt;/P&gt;

&lt;P&gt;If that is not working, try setting the number of threads explicitly with mkl_set_num_threads().&lt;/P&gt;

&lt;P&gt;But if you really want to use nested threading, these affinity settings may help:&lt;/P&gt;

&lt;PRE&gt;MKL_DYNAMIC=false
OMP_NESTED=1
OMP_MAX_ACTIVE_LEVELS=2
KMP_HOT_TEAMS_MODE=1
KMP_HOT_TEAMS_MAX_LEVEL=2
OMP_NUM_THREADS=10,2
OMP_PROC_BIND=spread,close
OMP_PLACES=cores&lt;/PRE&gt;</description>
      <pubDate>Wed, 31 May 2017 23:35:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118963#M7547</guid>
      <dc:creator>Gregg_S_Intel</dc:creator>
      <dc:date>2017-05-31T23:35:09Z</dc:date>
    </item>
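The settings listed above can be exported from a shell wrapper before launching the application; this is just a sketch assuming a bash-like shell (launching the actual binary is left to the reader):

```shell
# Two-level team: 10 outer threads spread across cores,
# 2 inner threads per outer thread, kept close to their parent.
export MKL_DYNAMIC=false          # MKL must not trim its own team
export OMP_NESTED=1               # enable nested parallel regions
export OMP_MAX_ACTIVE_LEVELS=2
export KMP_HOT_TEAMS_MODE=1       # keep inner teams alive between regions
export KMP_HOT_TEAMS_MAX_LEVEL=2
export OMP_NUM_THREADS=10,2       # outer,inner team sizes
export OMP_PROC_BIND=spread,close
export OMP_PLACES=cores
# ...then launch the application from this shell.
```

Note the straight quotes and the absence of a space in `spread,close`; smart quotes or stray whitespace in the value can cause the runtime to ignore the setting.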
    <item>
      <title>Thanks, it will help. Is it</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118964#M7548</link>
      <description>&lt;P&gt;Thanks, that will help. Is it possible to modify the settings inside the code? I am using MKL before the parallel region, so I would need OMP_PROC_BIND=close for MKL and then to switch to "spread,close" afterwards. I am assuming I can simply set the environment variables inside the code. Is that correct?&lt;/P&gt;

&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jun 2017 17:18:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118964#M7548</guid>
      <dc:creator>marcsolal</dc:creator>
      <dc:date>2017-06-01T17:18:55Z</dc:date>
    </item>
    <item>
      <title>In general MKL routines</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118965#M7549</link>
      <description>&lt;P&gt;In general, MKL routines perform best with one thread per core on Intel Xeon processors. Just set KMP_AFFINITY=scatter, and if the prospect of MKL generating additional threads inside a parallel region is troubling, temporarily set the MKL thread count to 1 with mkl_set_num_threads().&lt;/P&gt;</description>
      <pubDate>Thu, 01 Jun 2017 19:53:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118965#M7549</guid>
      <dc:creator>Gregg_S_Intel</dc:creator>
      <dc:date>2017-06-01T19:53:12Z</dc:date>
    </item>
    <item>
      <title>Here are two examples how</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118966#M7550</link>
      <description>&lt;P&gt;Here are two examples of how OpenMP threads are pinned to different cores on a &lt;STRONG&gt;KNL&lt;/STRONG&gt; server for KMP_AFFINITY set to &lt;STRONG&gt;scatter&lt;/STRONG&gt; and &lt;STRONG&gt;compact&lt;/STRONG&gt;.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jun 2017 23:15:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118966#M7550</guid>
      <dc:creator>SergeyKostrov</dc:creator>
      <dc:date>2017-06-05T23:15:18Z</dc:date>
    </item>
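Listings like the screenshots that follow can also be produced without any extra tooling: the Intel OpenMP runtime reports each thread's binding at startup when the verbose modifier is added to KMP_AFFINITY. A sketch (the binary name is hypothetical):

```shell
# Ask the Intel OpenMP runtime to log each thread's binding at startup
export KMP_AFFINITY=verbose,scatter
# ./your_app 2> bindings_scatter.txt   # hypothetical binary name
# For the compact layout, rerun with:
export KMP_AFFINITY=verbose,compact
# ./your_app 2> bindings_compact.txt
```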
    <item>
      <title>KMP_AFFINITY=scatter</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118967#M7551</link>
      <description>&lt;P&gt;KMP_AFFINITY=scatter&lt;/P&gt;

&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="CmmaKMPAFFINITYscatter.png"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/9557i95C5D728EA643185/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="CmmaKMPAFFINITYscatter.png" alt="CmmaKMPAFFINITYscatter.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jun 2017 23:16:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118967#M7551</guid>
      <dc:creator>SergeyKostrov</dc:creator>
      <dc:date>2017-06-05T23:16:39Z</dc:date>
    </item>
    <item>
      <title>KMP_AFFINITY=compact</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118968#M7552</link>
      <description>&lt;P&gt;KMP_AFFINITY=compact&lt;/P&gt;

&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="CmmaKMPAFFINITYcompact.png"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/9558iB96036C7DF6E416C/image-size/large?v=v2&amp;amp;px=999&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" role="button" title="CmmaKMPAFFINITYcompact.png" alt="CmmaKMPAFFINITYcompact.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Jun 2017 23:18:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/openmp-nested-parallelism/m-p/1118968#M7552</guid>
      <dc:creator>SergeyKostrov</dc:creator>
      <dc:date>2017-06-05T23:18:10Z</dc:date>
    </item>
  </channel>
</rss>

