<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: mpirun: Error: sizex*sizey ne size0size0=           4 sizez=           1   sizey=         120 in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Re-mpirun-Error-sizex-sizey-ne-size0size0-4-sizez-1-sizey-120/m-p/1601540#M11729</link>
    <description>&lt;P class="sub_section_element_selectors"&gt;&lt;A class="sub_section_element_selectors" href="https://community.intel.com/t5/user/viewprofilepage/user-id/298242" target="_blank"&gt;@RobbieTheK&lt;/A&gt;&amp;nbsp;&lt;BR /&gt;If you see a drop in performance, it's very likely that hyperthreading is not beneficial for you at all. Do you have any evidence for why you expect an improvement in performance from hyperthreading?&lt;BR /&gt;&lt;BR /&gt;Please provide the output with I_MPI_DEBUG=10 set, which displays the affinity. For Slurm, it's usually best to enable CPU affinity in Slurm and let Slurm handle the pinning; for that, use srun instead of mpirun to launch your application.&lt;/P&gt;
&lt;P class="sub_section_element_selectors"&gt;&lt;A class="sub_section_element_selectors" href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-12/job-schedulers-support.html#GUID-D3EF3D59-99C3-4529-B2A1-52F3009F5880" target="_blank" rel="nofollow noopener noreferrer"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-12/job-schedulers-support.html#GUID-D3EF3D59-99C3-4529-B2A1-52F3009F5880&lt;/A&gt;&lt;/P&gt;
</description>
    <pubDate>Tue, 28 May 2024 11:46:53 GMT</pubDate>
    <dc:creator>TobiasK</dc:creator>
    <dc:date>2024-05-28T11:46:53Z</dc:date>
    <item>
      <title>Re: mpirun: Error: sizex*sizey ne size0size0=           4 sizez=           1   sizey=         120</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Re-mpirun-Error-sizex-sizey-ne-size0size0-4-sizez-1-sizey-120/m-p/1599472#M11728</link>
      <description>&lt;P&gt;We have an additional request re: hyperthreading.&lt;/P&gt;&lt;P&gt;"It also appears that my code is not getting much from hyperthreading; I was expecting a factor-of-two drop in seconds per timestep going from:&lt;/P&gt;&lt;PRE&gt;#SBATCH --exclusive&lt;BR /&gt;#SBATCH -n 80&lt;/PRE&gt;&lt;P&gt;to:&lt;/P&gt;&lt;PRE&gt;#SBATCH --exclusive&lt;BR /&gt;#SBATCH -n 160&lt;/PRE&gt;&lt;P&gt;(both cases run on a single node). Is there a flag I should activate to leverage hyperthreading?"&lt;/P&gt;&lt;P&gt;I've found a few suggestions. For example, are there differences when setting I_MPI_PIN_ORDER and/or I_MPI_PIN_PROCESSOR_LIST?&lt;/P&gt;&lt;P&gt;This doc:&lt;BR /&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-8/environment-variables-for-process-pinning.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-reference-linux/2021-8/environment-variables-for-process-pinning.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;suggests: "I_MPI_PIN_PROCESSOR_LIST with &amp;lt;procset&amp;gt; specifies a processor subset based on the topological numeration. The default value is allcores."&lt;/P&gt;&lt;P&gt;It also mentions using 'all' rather than 'allcores': "all: all logical processors. Specify this subset to define the number of CPUs on a node."&lt;/P&gt;&lt;P&gt;The full example from the other suggestion was this:&lt;/P&gt;&lt;P&gt;"1. To place the processes exclusively on physical cores regardless of Hyper-Threading mode:&lt;/P&gt;&lt;PRE&gt;$ mpirun -genv I_MPI_PIN_PROCESSOR_LIST allcores -n &amp;lt;# total processes&amp;gt; ./app&lt;/PRE&gt;&lt;P&gt;2. To avoid sharing of common resources by adjacent MPI processes, use the map=scatter setting:"&lt;/P&gt;&lt;PRE&gt;$ mpirun -genv I_MPI_PIN_PROCESSOR_LIST map=scatter -n &amp;lt;# total processes&amp;gt; ./app&lt;/PRE&gt;</description>
      <pubDate>Wed, 22 May 2024 15:03:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Re-mpirun-Error-sizex-sizey-ne-size0size0-4-sizez-1-sizey-120/m-p/1599472#M11728</guid>
      <dc:creator>RobbieTheK</dc:creator>
      <dc:date>2024-05-22T15:03:59Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun: Error: sizex*sizey ne size0size0=           4 sizez=           1   sizey=         120</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Re-mpirun-Error-sizex-sizey-ne-size0size0-4-sizez-1-sizey-120/m-p/1601540#M11729</link>
      <description>&lt;P class="sub_section_element_selectors"&gt;&lt;A class="sub_section_element_selectors" href="https://community.intel.com/t5/user/viewprofilepage/user-id/298242" target="_blank"&gt;@RobbieTheK&lt;/A&gt;&amp;nbsp;&lt;BR /&gt;If you see a drop in performance, it's very likely that hyperthreading is not beneficial for you at all. Do you have any evidence for why you expect an improvement in performance from hyperthreading?&lt;BR /&gt;&lt;BR /&gt;Please provide the output with I_MPI_DEBUG=10 set, which displays the affinity. For Slurm, it's usually best to enable CPU affinity in Slurm and let Slurm handle the pinning; for that, use srun instead of mpirun to launch your application.&lt;/P&gt;
&lt;P class="sub_section_element_selectors"&gt;&lt;A class="sub_section_element_selectors" href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-12/job-schedulers-support.html#GUID-D3EF3D59-99C3-4529-B2A1-52F3009F5880" target="_blank" rel="nofollow noopener noreferrer"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-12/job-schedulers-support.html#GUID-D3EF3D59-99C3-4529-B2A1-52F3009F5880&lt;/A&gt;&lt;/P&gt;
</description>
      <pubDate>Tue, 28 May 2024 11:46:53 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Re-mpirun-Error-sizex-sizey-ne-size0size0-4-sizez-1-sizey-120/m-p/1601540#M11729</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-05-28T11:46:53Z</dc:date>
    </item>
  </channel>
</rss>
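Editor's note: the advice in this thread (let Slurm own the CPU affinity, launch with srun, and inspect the resulting pinning via I_MPI_DEBUG=10) can be sketched as a single batch script. This is a hypothetical configuration, not from the thread: the partition setup, the 80-core single node, and the binary name `./app` are assumptions, and it presumes a recent Intel MPI (2021.x) with Slurm's task/affinity plugin enabled.

```shell
#!/bin/bash
# Hypothetical Slurm job script, assuming Intel MPI 2021.x and a node
# with 80 physical cores. Slurm handles the pinning; srun is the launcher.
#SBATCH --exclusive
#SBATCH --nodes=1
#SBATCH --ntasks=80          # one rank per physical core, no hyperthreads
#SBATCH --hint=nomultithread # ask Slurm to skip the second hardware thread

# Print the rank-to-core pinning Intel MPI actually applies, so a
# performance drop can be traced to bad affinity.
export I_MPI_DEBUG=10

# srun (not mpirun) so Slurm's cpu-bind settings take effect; bind each
# rank to cores. ./app is a placeholder for the real binary.
srun --cpu-bind=cores ./app
```

If hyperthreading were genuinely beneficial for the code, the equivalent test would be `--ntasks=160` without `--hint=nomultithread`; comparing seconds per timestep between the two runs answers the original question directly.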

