<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>MPI locked to specific core on node - Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184884#M6796</link>
    <description>&lt;P&gt;On a Linux system, I would start with either "taskset" or "numactl" to provide an initial core binding for the mpiexec command (which should be inherited by its children). &amp;nbsp; E.g.,&amp;nbsp;&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;taskset -c 8-15 mpiexec -n 8 FirstProgram
taskset -c 16-23 mpiexec -n 8 SecondProgram
taskset -c 24-31 mpiexec -n 8 ThirdProgram&lt;/PRE&gt;

&lt;P&gt;You can use I_MPI_DEBUG=5 to check the bindings to see if this works....&lt;/P&gt;</description>
    <pubDate>Thu, 21 May 2020 15:05:30 GMT</pubDate>
    <dc:creator>McCalpinJohn</dc:creator>
    <dc:date>2020-05-21T15:05:30Z</dc:date>
    <item>
      <title>MPI locked to specific core on node</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184881#M6793</link>
      <description>&lt;P style="margin-left:0cm; margin-right:0cm"&gt;Hi all,&lt;/P&gt;&lt;P style="margin-left:0cm; margin-right:0cm"&gt;I am running a multi-core, single-node machine. I wish to run several instances of MPI processes (say 8 cores for each MPI process) but without the use of a scheduler. Is this possible, i.e. will the MPI processes stick to their assigned cores, or will I need a scheduler to assign the cores to the required tasks?&lt;/P&gt;&lt;P style="margin-left:0cm; margin-right:0cm"&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Wed, 20 May 2020 15:03:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184881#M6793</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2020-05-20T15:03:58Z</dc:date>
    </item>
    <item>
      <title>Hi Hob,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184882#M6794</link>
      <description>&lt;P&gt;Hi Hob,&lt;/P&gt;&lt;P&gt;Yes, you can launch MPI processes without a job scheduler.&lt;/P&gt;&lt;P&gt;We want to know more details about&amp;nbsp;what you were trying to achieve here.&lt;/P&gt;&lt;P&gt;The IMPI&amp;nbsp;allocates the CPUs based on the number of ranks launched&lt;/P&gt;&lt;P&gt;ex: if you have 80 core CPU and launch 10 processes each rank will be allocated 8 cores.&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;sdp@sdp:~/prasanth/mpi$ cpuinfo -g

===== &amp;nbsp;Processor composition &amp;nbsp;=====
Processor name &amp;nbsp; &amp;nbsp;: Intel(R) Xeon(R) Gold 6148
Packages(sockets) : 2
Cores &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; : 40
Processors(CPUs) &amp;nbsp;: 80
Cores per package : 20
Threads per core &amp;nbsp;: 2

sdp@sdp:~/prasanth/mpi$ I_MPI_DEBUG=5 mpirun &amp;nbsp;-n 10 ./test
[0] MPI startup(): libfabric version: 1.10.0a1-impi
[0] MPI startup(): libfabric provider: tcp;ofi_rxm
[0] MPI startup(): Rank &amp;nbsp; &amp;nbsp;Pid &amp;nbsp; &amp;nbsp; &amp;nbsp;Node name &amp;nbsp;Pin cpu
[0] MPI startup(): 0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70069 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{0,1,2,3,40,41,42,43}
[0] MPI startup(): 1 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70070 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{4,5,6,7,44,45,46,47}
[0] MPI startup(): 2 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70071 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{8,9,10,11,48,49,50,51}
[0] MPI startup(): 3 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70072 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{12,13,14,15,52,53,54,55}
[0] MPI startup(): 4 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70073 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{16,17,18,19,56,57,58,59}
[0] MPI startup(): 5 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70074 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{20,21,22,23,60,61,62,63}
[0] MPI startup(): 6 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70075 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{24,25,26,27,64,65,66,67}
[0] MPI startup(): 7 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70076 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{28,29,30,31,68,69,70,71}
[0] MPI startup(): 8 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70077 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{32,33,34,35,72,73,74,75}
[0] MPI startup(): 9 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70078 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{36,37,38,39,76,77,78,79}&lt;/PRE&gt;
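The block distribution shown in that debug output can be sketched in a few lines. This is only an illustration of the observed pattern (40 physical cores whose hyperthread siblings are numbered core+40), not Intel MPI's actual implementation:

```python
# Sketch of the default block pinning seen above: 40 physical cores,
# sibling hyperthreads numbered core + 40. With 10 ranks, each rank
# receives 40 // 10 = 4 physical cores plus their 4 siblings,
# i.e. 8 logical CPUs per rank.

def pin_map(n_cores=40, n_ranks=10):
    per_rank = n_cores // n_ranks          # physical cores per rank
    mapping = {}
    for rank in range(n_ranks):
        phys = list(range(rank * per_rank, (rank + 1) * per_rank))
        # append the hyperthread sibling of each physical core
        mapping[rank] = phys + [c + n_cores for c in phys]
    return mapping

m = pin_map()
print(m[0])  # [0, 1, 2, 3, 40, 41, 42, 43]
print(m[9])  # [36, 37, 38, 39, 76, 77, 78, 79]
```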

&lt;P&gt;Or do you want to bind a specific set of 8 cores to each process?&lt;/P&gt;
&lt;P&gt;Can you give an example of the scenario you want? This will help us in understanding the problem better.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 11:25:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184882#M6794</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-05-21T11:25:00Z</dc:date>
    </item>
    <item>
      <title>Quote:Dwadasi, Prasanth</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184883#M6795</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Dwadasi, Prasanth (Intel) wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi Hob,&lt;/P&gt;&lt;P&gt;Yes, you can launch MPI processes without a job scheduler.&lt;/P&gt;&lt;P&gt;We want to know more details about&amp;nbsp;what you were trying to achieve here.&lt;/P&gt;&lt;P&gt;The IMPI&amp;nbsp;allocates the CPUs based on the number of ranks launched&lt;/P&gt;&lt;P&gt;ex: if you have 80 core CPU and launch 10 processes each rank will be allocated 8 cores.&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;sdp@sdp:~/prasanth/mpi$ cpuinfo -g

===== &amp;nbsp;Processor composition &amp;nbsp;=====
Processor name &amp;nbsp; &amp;nbsp;: Intel(R) Xeon(R) Gold 6148
Packages(sockets) : 2
Cores &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; : 40
Processors(CPUs) &amp;nbsp;: 80
Cores per package : 20
Threads per core &amp;nbsp;: 2

sdp@sdp:~/prasanth/mpi$ I_MPI_DEBUG=5 mpirun &amp;nbsp;-n 10 ./test
[0] MPI startup(): libfabric version: 1.10.0a1-impi
[0] MPI startup(): libfabric provider: tcp;ofi_rxm
[0] MPI startup(): Rank &amp;nbsp; &amp;nbsp;Pid &amp;nbsp; &amp;nbsp; &amp;nbsp;Node name &amp;nbsp;Pin cpu
[0] MPI startup(): 0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70069 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{0,1,2,3,40,41,42,43}
[0] MPI startup(): 1 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70070 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{4,5,6,7,44,45,46,47}
[0] MPI startup(): 2 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70071 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{8,9,10,11,48,49,50,51}
[0] MPI startup(): 3 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70072 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{12,13,14,15,52,53,54,55}
[0] MPI startup(): 4 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70073 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{16,17,18,19,56,57,58,59}
[0] MPI startup(): 5 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70074 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{20,21,22,23,60,61,62,63}
[0] MPI startup(): 6 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70075 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{24,25,26,27,64,65,66,67}
[0] MPI startup(): 7 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70076 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{28,29,30,31,68,69,70,71}
[0] MPI startup(): 8 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70077 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{32,33,34,35,72,73,74,75}
[0] MPI startup(): 9 &amp;nbsp; &amp;nbsp; &amp;nbsp; 70078 &amp;nbsp; &amp;nbsp;sdp &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;{36,37,38,39,76,77,78,79}&lt;/PRE&gt;

&lt;P&gt;Else do you want to bind 8 cores for each process?&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Can you give an example of the scenario you want? This will help us in understanding the problem better.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Regards&lt;/P&gt;
&lt;P&gt;Prasanth&lt;/P&gt;
&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi Prasanth,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for the reply. Basically I have software that runs on MPI and is launched via the command line as:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;mpiexec -n 8 prog prog.extension&lt;/P&gt;
&lt;P&gt;So it launches 8 processes (as an example) and begins running at 100% usage on eight cores.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Some time later I would like to run another 8-rank job for a different simulation, again launched via mpiexec. The machine has 32 cores, so two instances each requiring 8 cores should not be an issue (16 cores total).&lt;/P&gt;
&lt;P&gt;My question is how the load across the cores will be managed, and whether the mpiexec processes will be dedicated to their cores, i.e. a new instance will not try to use a core of the already launched program that is running at 100% usage. I want to make sure there is no cross-talk between two instances launched on the same computer.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 12:34:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184883#M6795</guid>
      <dc:creator>Hob</dc:creator>
      <dc:date>2020-05-21T12:34:00Z</dc:date>
    </item>
    <item>
      <title>On a Linux system, I would</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184884#M6796</link>
      <description>&lt;P&gt;On a Linux system, I would start with either "taskset" or "numactl" to provide an initial core binding for the mpiexec command (which should be inherited by its children). &amp;nbsp; E.g.,&amp;nbsp;&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;taskset -c 8-15 mpiexec -n 8 FirstProgram
taskset -c 16-23 mpiexec -n 8 SecondProgram
taskset -c 24-31 mpiexec -n 8 ThirdProgram&lt;/PRE&gt;
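The disjoint core ranges above can be generated programmatically. A small sketch, where starting at core 8 (leaving cores 0-7 free) is just an assumption carried over from the example:

```python
# Build disjoint "taskset -c" core ranges for several 8-rank jobs
# on a 32-core node. The choice to start at core 8 (reserving
# cores 0-7 for other work) is an assumption of this sketch.

def core_ranges(n_jobs=3, ranks_per_job=8, first_core=8):
    ranges = []
    for j in range(n_jobs):
        lo = first_core + j * ranks_per_job
        hi = lo + ranks_per_job - 1
        ranges.append(f"{lo}-{hi}")
    return ranges

for prog, cores in zip(["FirstProgram", "SecondProgram", "ThirdProgram"],
                       core_ranges()):
    print(f"taskset -c {cores} mpiexec -n 8 {prog}")
# taskset -c 8-15 mpiexec -n 8 FirstProgram
# taskset -c 16-23 mpiexec -n 8 SecondProgram
# taskset -c 24-31 mpiexec -n 8 ThirdProgram
```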

&lt;P&gt;You can use I_MPI_DEBUG=5 to check the bindings to see if this works....&lt;/P&gt;</description>
      <pubDate>Thu, 21 May 2020 15:05:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184884#M6796</guid>
      <dc:creator>McCalpinJohn</dc:creator>
      <dc:date>2020-05-21T15:05:30Z</dc:date>
    </item>
    <item>
      <title>Quote:McCalpin, John</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184885#M6797</link>
      <description>&lt;BLOCKQUOTE&gt;McCalpin, John (Blackbelt) wrote:&lt;BR /&gt; &lt;P&gt;On a Linux system, I would start with either "taskset" or "numactl" to provide an initial core binding for the mpiexec command (which should be inherited by its children). &amp;nbsp; E.g.,&amp;nbsp;&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;taskset -c 8-15 mpiexec -n 8 FirstProgram
taskset -c 16-23 mpiexec -n 8 SecondProgram
taskset -c 24-31 mpiexec -n 8 ThirdProgram&lt;/PRE&gt;&lt;P&gt;You can use I_MPI_DEBUG=5 to check the bindings to see if this works....&lt;/P&gt;
 &lt;/BLOCKQUOTE&gt;

Hi John,

Thanks for that (for some reason my other account will not log in). I am running on Windows 10, and I'm not sure whether process allocation can be held to specific cores by the OS. There are options for processor affinity, and I was trying to specify them with PowerShell, but I'm not sure if this would work.

It may be that I have to use something like Windows HPC or PBS in order to allocate resources (CPU range).

Regards,</description>
      <pubDate>Fri, 22 May 2020 09:27:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184885#M6797</guid>
      <dc:creator>Barden__Jason</dc:creator>
      <dc:date>2020-05-22T09:27:34Z</dc:date>
    </item>
    <item>
      <title>The environment variables for</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184886#M6798</link>
      <description>&lt;P&gt;The environment variables for process pinning of Intel MPI are explained in the Intel MPI Reference Guide at&amp;nbsp;https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-windows/top/environment-variable-reference/process-pinning.html&lt;/P&gt;&lt;P&gt;If you are not(!)&amp;nbsp;running a hybrid MPI/OpenMP application (or otherwise using multiple threads per MPI process), the following easy approach using I_MPI_PIN_PROCESSOR_LIST should work. As an example I ran&amp;nbsp;the benchmark IMB-MPI1 (included in the Intel MPI distribution) on a 4-core laptop.&lt;/P&gt;&lt;P&gt;A call of "cpuinfo" (part of Intel MPI) shows the cores/hyperthreads layout and numbering. In parentheses the hyperthreads on a core are identified:&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;cpuinfo
Intel(R) processor family information utility, Version 2019 Update 7 Build 20200312 (id: 5dc2dd3e9)
Copyright (C) 2005-2020 Intel Corporation.  All rights reserved.

=====  Processor composition  =====
Processor name    : Intel(R) Core(TM) i5-8350U
Packages(sockets) : 1
Cores             : 4
Processors(CPUs)  : 8
Cores per package : 4
Threads per core  : 2

=====  Processor identification  =====
Processor    Thread Id.    Core Id.    Package Id.
0            0             0           0
1            1             0           0
2            0             1           0
3            1             1           0
4            0             2           0
5            1             2           0
6            0             3           0
7            1             3           0

=====  Placement on packages  =====
Package Id.    Core Id.    Processors
0              0,1,2,3     (0,1)(2,3)(4,5)(6,7)

=====  Cache sharing  =====
Cache   Size     Processors
L1      32  KB   (0,1)(2,3)(4,5)(6,7)
L2      256 KB   (0,1)(2,3)(4,5)(6,7)
L3      6   MB   (0,1,2,3,4,5,6,7)&lt;/PRE&gt;
&lt;P&gt;To execute two non-hybrid MPI applications in parallel, in a first window 2 MPI processes are started on the 1st hyperthreads of the first two cores, i.e. hyperthreads 0,2:&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;set I_MPI_DEBUG=5
mpiexec -env I_MPI_PIN_PROCESSOR_LIST 0,2 -n 2 IMB-MPI1&lt;/PRE&gt;
&lt;P&gt;(Alternative: set&amp;nbsp;I_MPI_PIN_PROCESSOR_LIST=0,2 followed by mpiexec&amp;nbsp;-n 2 IMB-MPI1)&lt;/P&gt;
&lt;P&gt;In a second window 2 MPI processes are started on the 1st hyperthreads of the third and fourth cores, i.e. hyperthreads 4,6:&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;set I_MPI_DEBUG=5
mpiexec -env I_MPI_PIN_PROCESSOR_LIST 4,6 -n 2 IMB-MPI1&lt;/PRE&gt;
&lt;P&gt;Both runs execute in parallel. The Intel MPI rank-to-core mapping is shown at the beginning of the debug output, see column "Pin cpu":&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;mpiexec -env I_MPI_PIN_PROCESSOR_LIST 0,2 -n 2 IMB-MPI1
[0] MPI startup(): Rank    Pid      Node name      Pin cpu
[0] MPI startup(): 0       54184    xxxxxxxx-MOBL  0
[0] MPI startup(): 1       26992    xxxxxxxx-MOBL  2

mpiexec -env I_MPI_PIN_PROCESSOR_LIST 4,6 -n 2 IMB-MPI1
[0] MPI startup(): Rank    Pid      Node name      Pin cpu
[0] MPI startup(): 0       41268    xxxxxxxx-MOBL  4
[0] MPI startup(): 1       45460    xxxxxxxx-MOBL  6&lt;/PRE&gt;
&lt;P&gt;In case of a hybrid MPI/OpenMP code you have to specify domains using I_MPI_PIN_DOMAIN (instead of I_MPI_PIN_PROCESSOR_LIST). Exactly one MPI process is started per domain; the rest of the hyperthreads in a domain are used for the threads of that MPI process (NB: pinning of the threads has to be done by other means!). For the first MPI run the specification is quite easy:&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;mpiexec -env I_MPI_PIN_DOMAIN core -n 2 IMB-MPI1
[0] MPI startup(): Rank    Pid      Node name      Pin cpu
[0] MPI startup(): 0       48760    xxxxxxxx-MOBL  {0,1}
[0] MPI startup(): 1       33752    xxxxxxxx-MOBL  {2,3}&lt;/PRE&gt;
&lt;P&gt;There is no easy specification of domain shifts available.&lt;/P&gt;
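The hexadecimal I_MPI_PIN_DOMAIN masks discussed in this post (one bit per logical CPU in the domain) can be computed with a short sketch:

```python
# Compute an I_MPI_PIN_DOMAIN bitmask: set one bit per logical CPU
# belonging to the domain, then print the value in hexadecimal.

def domain_mask(cpus):
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "X")

# Hyperthreads 4,5 -> 2^4 + 2^5 = 48  = 0x30
# Hyperthreads 6,7 -> 2^6 + 2^7 = 192 = 0xC0
masks = [domain_mask([4, 5]), domain_mask([6, 7])]
print(f"[{','.join(masks)}]")  # [30,C0]
```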
&lt;P&gt;Therefore explicit domain masks (see reference guide) have to be used for the second run. For the 1st MPI process of this run hyperthreads 4,5 will be used; setting those bits in the bitmask gives the hexadecimal value 2^4+2^5=48=0x30. For the 2nd MPI process of this run hyperthreads 6,7 will be used; the corresponding bitmask evaluates to the hexadecimal value 2^6+2^7=192=0xC0. Therefore the domain mask is [30,C0]:&lt;/P&gt;
&lt;PRE class="brush:plain; class-name:dark;"&gt;mpiexec -env I_MPI_PIN_DOMAIN [30,C0] -n 2 IMB-MPI1
[0] MPI startup(): Rank    Pid      Node name      Pin cpu
[0] MPI startup(): 0       39360    xxxxxxxx-MOBL  {4,5}
[0] MPI startup(): 1       27728    xxxxxxxx-MOBL  {6,7}&lt;/PRE&gt;
&lt;P&gt;Final note: You can also use the I_MPI_PIN_DOMAIN approach instead of I_MPI_PIN_PROCESSOR_LIST for non-hybrid applications. Then the OS might move an MPI process between the hyperthreads of its domain (until explicit thread pinning is defined for that single thread!).&lt;/P&gt;</description>
      <pubDate>Fri, 22 May 2020 16:43:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184886#M6798</guid>
      <dc:creator>Klaus-Dieter_O_Intel</dc:creator>
      <dc:date>2020-05-22T16:43:33Z</dc:date>
    </item>
    <item>
      <title>Quote:Klaus-Dieter Oertel</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184887#M6799</link>
      <description>&lt;BLOCKQUOTE&gt;Klaus-Dieter Oertel (Intel) wrote:&lt;BR /&gt; &lt;P&gt;The environment variables for process pinning of Intel MPI are explained in the Intel MPI Reference Guide
 &lt;/P&gt;&lt;/BLOCKQUOTE&gt;

Hi Klaus, 

Thanks, this is in fact the way to do it on Windows!

Unfortunately I thought I would need a task scheduler (Windows HPC Pack) and went the Server 2019 route on the PC. Not that I cannot revert to Windows 10, but the HPC scheduler is quite nice.

One question I now have: is there a way to exclude a specific core in a global setting or variable somewhere? Ideally I want the HPC scheduler to allocate cores automatically without having to specify I_MPI_PIN_PROCESSOR_LIST for each mpiexec job.

The problem at the moment is that the mpiexec ranks are assigned to dedicated cores (within the HPC Pack job scheduler); however, they are assigned to, say, core 0 to core 3 (-n 4), but the scheduler operates on core 0 and so cannot accept new requests while the mpiexec task is using that core.

I was wondering if there is an environment variable or something that would specify the core range for the MPI process to use?

Thanks for the help though; as a fallback I_MPI_PIN_PROCESSOR_LIST is pretty much what I need to use.</description>
      <pubDate>Sun, 24 May 2020 00:12:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184887#M6799</guid>
      <dc:creator>Barden__Jason</dc:creator>
      <dc:date>2020-05-24T00:12:49Z</dc:date>
    </item>
    <item>
      <title>On Linux there would be I_MPI</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184888#M6800</link>
      <description>&lt;P&gt;On Linux there would be&amp;nbsp;I_MPI_PIN_PROCESSOR_EXCLUDE_LIST, however this variable is not available on Windows.&lt;/P&gt;</description>
      <pubDate>Mon, 25 May 2020 09:35:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184888#M6800</guid>
      <dc:creator>Klaus-Dieter_O_Intel</dc:creator>
      <dc:date>2020-05-25T09:35:12Z</dc:date>
    </item>
    <item>
      <title>Quote:Klaus-Dieter Oertel</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184889#M6801</link>
      <description>&lt;BLOCKQUOTE&gt;Klaus-Dieter Oertel (Intel) wrote:&lt;BR /&gt; &lt;P&gt;On Linux there would be&amp;nbsp;I_MPI_PIN_PROCESSOR_EXCLUDE_LIST, however this variable is not available on Windows.&lt;/P&gt;
 &lt;/BLOCKQUOTE&gt;

Hi Klaus, thanks for that.

I actually found a way to do it using the Windows HPC scheduler. If I hand affinity assignment back to the HPC scheduler instead of the MPI process, and upon first boot I assign a dummy process to run on 1 core, it gets assigned to core 0 (scheduling within Windows is sequential); from that point on the core range 1-31 is auto-assigned by the scheduler. Since it is a dummy job with no completion or exit code, it also uses no CPU and therefore allows the OS/scheduler to continue to distribute cores 1-31 when required.</description>
      <pubDate>Mon, 25 May 2020 12:48:45 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184889#M6801</guid>
      <dc:creator>Barden__Jason</dc:creator>
      <dc:date>2020-05-25T12:48:45Z</dc:date>
    </item>
    <item>
      <title>Hi Jason,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184890#M6802</link>
      <description>&lt;P&gt;Hi Jason,&lt;/P&gt;&lt;P&gt;Has your issue/query been resolved?&lt;/P&gt;&lt;P&gt;If yes, please confirm so that we can close the case.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Thu, 28 May 2020 11:55:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184890#M6802</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-05-28T11:55:15Z</dc:date>
    </item>
    <item>
      <title>Hi Jason,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184891#M6803</link>
      <description>&lt;P&gt;Hi Jason,&lt;/P&gt;&lt;P&gt;We are closing this case assuming that your issue has been resolved.&lt;/P&gt;&lt;P&gt;Please raise a new thread for any further queries.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Tue, 09 Jun 2020 06:54:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/MPI-locked-to-specific-core-on-node/m-p/1184891#M6803</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-06-09T06:54:31Z</dc:date>
    </item>
  </channel>
</rss>

