<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Configuration for Intel impi in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1558889#M11344</link>
    <description>&lt;P&gt;This is one of the ways to use PMI2:&lt;/P&gt;&lt;PRE&gt;$ salloc -N10 --exclusive
$ export I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so
$ mpirun -np &amp;lt;num_procs&amp;gt; user_app.bin&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please follow this link for more information:&lt;/P&gt;&lt;P&gt;&lt;A href="https://slurm.schedmd.com/mpi_guide.html#intel_mpi" target="_blank"&gt;https://slurm.schedmd.com/mpi_guide.html#intel_mpi&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do let me know if you face any issues.&lt;/P&gt;</description>
    <pubDate>Mon, 01 Jan 2024 18:22:17 GMT</pubDate>
    <dc:creator>Mahan</dc:creator>
    <dc:date>2024-01-01T18:22:17Z</dc:date>
    <item>
      <title>Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553070#M11253</link>
      <description>&lt;DIV&gt;On a supercomputer using slurm/srun I am seeing irreproducible crashes: sometimes a SIGSEGV in program A, sometimes a bus error in program B. Both seem to be linked to MPI operation. These are large calculations using hybrid OMP/MPI (2 OMP threads x 128 MPI ranks), as hybrid is more memory efficient, with Intel impi. The crashes occur 5-10% of the time and are not in the base code.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;According to&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A href="https://slurm.schedmd.com/mpi_guide.html" target="_blank" rel="noopener noreferrer"&gt;https://slurm.schedmd.com/mpi_guide.html&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;I should use PMI2 with&lt;/DIV&gt;&lt;DIV&gt;I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so.&amp;nbsp; (Currently I_MPI_PMI_LIBRARY is not set.) Apparently PMI1 is not very thread safe. Has anyone come across anything similar?&lt;/DIV&gt;
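&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;For concreteness, each launch is roughly of this shape (a minimal sketch; the program name is a placeholder):&lt;/DIV&gt;&lt;PRE&gt;# hybrid OMP/MPI job step: 2 OpenMP threads per rank, 128 MPI ranks (name is a placeholder)
export OMP_NUM_THREADS=2
mpirun -np 128 ./program_a&lt;/PRE&gt;</description>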
      <pubDate>Tue, 12 Dec 2023 13:34:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553070#M11253</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2023-12-12T13:34:00Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553382#M11260</link>
      <description>&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Hi,&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Thanks for posting in Intel communities!&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;To assist you more effectively, could you kindly provide the following details:&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Operating System (OS) Details&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Intel MPI version&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Output of the "lscpu" command&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Hardware Details&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Detailed Steps for Recreating the Scenario&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Interconnect Details&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Your cooperation in furnishing this information will greatly aid in addressing your concerns. Thank you in advance!&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Veena&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 13 Dec 2023 06:05:49 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553382#M11260</guid>
      <dc:creator>VeenaJ_Intel</dc:creator>
      <dc:date>2023-12-13T06:05:49Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553534#M11266</link>
      <description>Sorry, but please read my posting. I was asking about PMI1 versus PMI2 with impi. Your response is not relevant.</description>
      <pubDate>Wed, 13 Dec 2023 15:49:50 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1553534#M11266</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2023-12-13T15:49:50Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1554246#M11282</link>
      <description>&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Hi,&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Sorry for the inconvenience caused. &lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Intel® MPI currently supports only PMI-1 and PMI-2, without support for PMIx. For optimal scalability, it is strongly recommended to configure this MPI implementation to use Slurm's PMI-2, as it offers superior scalability compared to PMI-1. While PMI-1 is still available, it is advised to transition to PMI-2, considering that PMI-1 may be deprecated in the near future. Your consideration of this recommendation is highly appreciated.&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT size="4"&gt;&lt;SPAN&gt;Veena&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 15 Dec 2023 06:08:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1554246#M11282</guid>
      <dc:creator>VeenaJ_Intel</dc:creator>
      <dc:date>2023-12-15T06:08:39Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1554373#M11287</link>
      <description>&lt;P&gt;Thank you for the response. It is somewhat an answer, but there are some points you do not mention. A key one is that the SLURM documents I mentioned (and there are many similar) all say to use:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;However, with slurm as default, in many cases this leads to&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;There are two other environment variables which might be relevant:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;SLURM_MPI_TYPE=pmi2&lt;BR /&gt;I_MPI_PMI=pmi2&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;To date I see no difference using these. Can you please clarify what is appropriate with Intel impi, since currently I cannot find anything about how to use PMI2 in the available Intel documentation, and the information in the slurm documentation out there appears to be incorrect.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;--&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;N.B., this is for Wien2k, which is the standard benchmark code for density functional theory calculations, e.g.&amp;nbsp;&lt;A href="https://doi.org/10.1038/s42254-023-00655-3" target="_blank"&gt;https://doi.org/10.1038/s42254-023-00655-3&lt;/A&gt;. This code&amp;nbsp;&lt;STRONG&gt;does not&lt;/STRONG&gt; just use a single mpirun (or srun); it is more intelligent (faster) and dispatches multiple mpi tasks to different nodes/cores. Therefore oversimplified answers are, alas, less useful.&lt;/P&gt;
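&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;One way to see what is actually picked up at startup is to raise the Intel MPI debug level, which prints the warning above and any I_MPI_PMI_LIBRARY setting; a minimal sketch (the library path is the usual placeholder):&lt;/P&gt;&lt;PRE&gt;# print the startup configuration, including which PMI library (if any) is in effect
export I_MPI_DEBUG=10
export I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so
mpirun -np 4 ./a.out 2&amp;gt;&amp;amp;1 | grep -i pmi&lt;/PRE&gt;</description>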
      <pubDate>Fri, 15 Dec 2023 14:49:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1554373#M11287</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2023-12-15T14:49:31Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1558889#M11344</link>
      <description>&lt;P&gt;This is one of the ways to use PMI2:&lt;/P&gt;&lt;PRE&gt;$ salloc -N10 --exclusive
$ export I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so
$ mpirun -np &amp;lt;num_procs&amp;gt; user_app.bin&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please follow this link for more information:&lt;/P&gt;&lt;P&gt;&lt;A href="https://slurm.schedmd.com/mpi_guide.html#intel_mpi" target="_blank"&gt;https://slurm.schedmd.com/mpi_guide.html#intel_mpi&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do let me know if you face any issues.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Jan 2024 18:22:17 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1558889#M11344</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-01T18:22:17Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1558957#M11345</link>
      <description>&lt;P&gt;Sorry, but this is very incorrect; please see the prior information about the argument being ignored. The slurm info is not correct.&lt;BR /&gt;&lt;BR /&gt;Also, --exclusive is not an appropriate suggestion; it has too many other consequences.&lt;/P&gt;</description>
      <pubDate>Mon, 01 Jan 2024 23:59:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1558957#M11345</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2024-01-01T23:59:36Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559010#M11346</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;, you are right that the slurm information in the Intel documentation is not correct.&lt;/P&gt;&lt;P&gt;The environment variable seems to be correct.&lt;/P&gt;&lt;P&gt;Would it be possible for you to use "srun" instead of "mpirun/mpiexec"?&lt;/P&gt;</description>
      <pubDate>Tue, 02 Jan 2024 04:35:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559010#M11346</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-02T04:35:34Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559058#M11348</link>
      <description>&lt;P&gt;Unfortunately I have not found any way of using srun directly. The code runs a sequence of (what slurm calls) job steps. Some are serial and quick; the others are multiple parallel mpi tasks using different nodes. A schematic example would be&lt;/P&gt;&lt;PRE&gt;mpirun -np 8 -machinefile host1 &amp;amp;
mpirun -np 8 -machinefile host2 &amp;amp;&lt;/PRE&gt;&lt;P&gt;...wait for completion, then do the next job step.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Similar to&amp;nbsp;&lt;A href="https://bugs.schedmd.com/show_bug.cgi?id=11863" target="_blank" rel="noopener"&gt;https://bugs.schedmd.com/show_bug.cgi?id=11863&lt;/A&gt; it seems that "export SLURM_OVERLAP=1" matters; this appears to be common. (This is passed down through mpiexec.hydra, which uses srun to launch.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It is not clear to me whether I_MPI_PMI, SLURM_MPI_TYPE or even SLURM_OVERCOMMIT matter.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Unfortunately, currently I cannot switch to the ssh launcher due to some form of misconfiguration where ssh is blocked on some of the nodes. Some sys_admins are trying to sort that out. Hence at the moment I can only test with the srun launcher in mpirun.&lt;/P&gt;
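&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Put together, one such job step looks roughly like this (a sketch only; the program name is a placeholder):&lt;/P&gt;&lt;PRE&gt;# one parallel job step: two concurrent 8-rank mpi tasks on different hosts
export SLURM_OVERLAP=1   # allow the concurrent steps to share the allocation
mpirun -np 8 -machinefile host1 ./prog &amp;amp;
mpirun -np 8 -machinefile host2 ./prog &amp;amp;
wait                     # block until both finish, then run the next job step&lt;/PRE&gt;</description>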
      <pubDate>Tue, 02 Jan 2024 13:01:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559058#M11348</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2024-01-02T13:01:38Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559342#M11349</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I understand your difficulty in running the application using 'srun'.&lt;/P&gt;&lt;P&gt;Please allow me a few days as I need to discuss this with the development teams to see whether there is any workaround where one could use mpirun with PMI2 instead of 'srun'.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Jan 2024 02:45:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1559342#M11349</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-03T02:45:54Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563886#M11430</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It appears that the only way to use PMI2 with slurm is using srun.&lt;/P&gt;&lt;P&gt;I have the following output from an Intel MPI benchmarking program for your reference:&lt;/P&gt;&lt;PRE&gt;MPI startup(): Copyright (C) 2003-2023 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
&lt;FONT color="#FF0000"&gt;&lt;STRONG&gt;MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found&lt;/STRONG&gt;&lt;/FONT&gt;
[0] MPI startup(): libfabric loaded: libfabric.so.1
[0] MPI startup(): libfabric version: 1.18.1-impi
[0] MPI startup(): max number of MPI_Request per vci: 67108864 (pools: 1)
[0] MPI startup(): libfabric provider: tcp
[0] MPI startup(): File "" not found
[0] MPI startup(): Load tuning file: "/opt/intel/oneapi/mpi/2021.11/opt/mpi/etc/tuning_spr_shm-ofi.dat"
[0] MPI startup(): threading: mode: direct
[0] MPI startup(): threading: vcis: 1
[0] MPI startup(): threading: app_threads: -1
[0] MPI startup(): threading: runtime: generic
[0] MPI startup(): threading: progress_threads: 0
[0] MPI startup(): threading: async_progress: 0
[0] MPI startup(): threading: lock_level: global
[0] MPI startup(): tag bits available: 19 (TAG_UB value: 524287)
[0] MPI startup(): source bits available: 20 (Maximal number of rank: 1048575)
[0] MPI startup(): ===== Nic pinning on sdp4578 =====
[0] MPI startup(): Rank Pin nic
[0] MPI startup(): 0    enp1s0
[0] MPI startup(): Rank    Pid      Node name  Pin cpu
[0] MPI startup(): 0       1539366  sdp4578    {0-223}
[0] MPI startup(): 1       184901   sdp5259    {0-223}
[0] MPI startup(): I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.11
[0] MPI startup(): ONEAPI_ROOT=/opt/intel/oneapi
[0] MPI startup(): I_MPI_BIND_WIN_ALLOCATE=localalloc
[0] MPI startup(): I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=--external-launcher
[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc
[0] MPI startup(): I_MPI_RETURN_WIN_MEM_NUMA=0
[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default
[0] MPI startup(): I_MPI_DEBUG=10
[0] &lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;MPI startup(): I_MPI_PMI_LIBRARY=/usr/local/lib/libpmi2.so&lt;/FONT&gt;&lt;/STRONG&gt;
#----------------------------------------------------------------
#    Intel(R) MPI Benchmarks 2021.7, MPI-1 part
#----------------------------------------------------------------
# Date                  : Wed Jan 17 22:32:47 2024
# Machine               : x86_64
# System                : Linux
# Release               : 5.15.0-86-generic
# Version               : #96-Ubuntu SMP Wed Sep 20 08:23:49 UTC 2023
# MPI Version           : 3.1
# MPI Thread Environment:

# Calling sequence was:
# IMB-MPI1 allreduce -msglog 2:3

# Minimum message length in bytes:   0
# Maximum message length in bytes:   8
#
# MPI_Datatype                   :   MPI_BYTE
# MPI_Datatype for reductions    :   MPI_FLOAT
# MPI_Op                         :   MPI_SUM
#
#
# List of Benchmarks to run:
# Allreduce

#----------------------------------------------------------------
# Benchmarking Allreduce
# #processes = 2
#----------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]
            0         1000         0.02         0.03         0.03
            4         1000        48.90        49.16        49.03
            8         1000        48.87        48.90        48.88

# All processes entering MPI_Finalize&lt;/PRE&gt;</description>
      <pubDate>Thu, 18 Jan 2024 07:03:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563886#M11430</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-18T07:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563887#M11431</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Could you please also let me know the reason for using pmi2? You mentioned briefly in your initial post that there is some crash/performance drop while using Intel MPI.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jan 2024 07:05:06 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563887#M11431</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-18T07:05:06Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563964#M11434</link>
      <description>&lt;P&gt;The reason to try PMI2 is that all documentation (Intel's included) says PMI1 is inferior (obsolete). Your printout just confirms what I and others have reported. Some more details, please:&lt;/P&gt;&lt;P&gt;1) Are you running under slurm?&lt;/P&gt;&lt;P&gt;2) What launcher are you using?&lt;/P&gt;&lt;P&gt;3) Does your test program report the protocol it is using? Would the line&lt;/P&gt;&lt;P&gt;# IMB-MPI1 allreduce -msglog 2:3&lt;/P&gt;&lt;P&gt;change if PMI2 is being used?&lt;/P&gt;&lt;P&gt;4) Did you set the relevant environment variables:&lt;/P&gt;&lt;P&gt;export SLURM_MPI_TYPE=pmi2&lt;BR /&gt;export I_MPI_PMI=pmi2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sorry, but your last messages don't answer the question. What information did the development team provide? Maybe they should respond (escalation).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;N.B., PMIx may also be relevant.&lt;/P&gt;
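&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As an aside on 1) and 4): slurm itself can report which PMI plugin types it supports, which is a quick sanity check (output varies by site):&lt;/P&gt;&lt;PRE&gt;# list the MPI/PMI plugin types this slurm installation supports
$ srun --mpi=list&lt;/PRE&gt;</description>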
      <pubDate>Thu, 18 Jan 2024 12:42:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563964#M11434</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2024-01-18T12:42:12Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563970#M11435</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am running this on a cluster which has&lt;/P&gt;&lt;P&gt;slurm 23.11&lt;/P&gt;&lt;P&gt;oneAPI 2024, which comes with Intel MPI 2021.11.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;"IMB-MPI1 allreduce -msglog 2:3" is a benchmark problem available in the Intel oneAPI suite to test MPI. You could choose your own MPI program. The environment variables in use are shown in the previous reply:&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;[0] MPI startup(): I_MPI_ROOT=/opt/intel/oneapi/mpi/2021.11&lt;BR /&gt;[0] MPI startup(): ONEAPI_ROOT=/opt/intel/oneapi&lt;BR /&gt;[0] MPI startup(): I_MPI_BIND_WIN_ALLOCATE=localalloc&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_BOOTSTRAP_EXEC_EXTRA_ARGS=--external-launcher&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_RETURN_WIN_MEM_NUMA=0&lt;BR /&gt;[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=10&lt;BR /&gt;[0]&amp;nbsp;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;MPI startup(): I_MPI_PMI_LIBRARY=/usr/local/lib/libpmi2.so&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The important point I wanted to make here is that if you want to run using &lt;STRONG&gt;pmi2&lt;/STRONG&gt;, then currently the only option is to use &lt;STRONG&gt;srun&lt;/STRONG&gt;:&lt;/P&gt;&lt;PRE&gt;# Run your application using srun with the PMI-2 interface.
I_MPI_PMI_LIBRARY=&amp;lt;path-to-libpmi2.so&amp;gt;/libpmi2.so srun --mpi=pmi2 ./myprog&lt;/PRE&gt;&lt;P&gt;For more information please check the following:&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-11/job-schedulers-support.html" target="_blank" rel="noopener"&gt;https://www.intel.com/content/www/us/en/docs/mpi-library/developer-guide-linux/2021-11/job-schedulers-support.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jan 2024 13:16:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1563970#M11435</guid>
      <dc:creator>Mahan</dc:creator>
      <dc:date>2024-01-18T13:16:00Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1564040#M11436</link>
      <description>&lt;P&gt;Please read the prior posts, and do not respond with trivial answers.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;&lt;STRONG&gt;Just using srun ./myprog is a novice response, inappropriate for professional hard-core supercomputing.&lt;/STRONG&gt;&amp;nbsp;&lt;/EM&gt;I pointed out that this is inappropriate weeks ago.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please escalate this to someone who is an expert, who will read the prior information (including the fact that the page you suggest I read is wrong) and who is knowledgeable. Hopefully they can construct a code which will show what interface is being used.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Escalate please.&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Jan 2024 17:15:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1564040#M11436</guid>
      <dc:creator>L__D__Marks</dc:creator>
      <dc:date>2024-01-18T17:15:30Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1566352#M11470</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/68967"&gt;@L__D__Marks&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;This forum is a community forum, not a support forum.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;mpirun will always use our internal PMI library. If you want to use a different PMI library, you have to provide the full path and use srun instead of mpirun.&lt;/P&gt;
&lt;PRE&gt; I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so&lt;/PRE&gt;
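&lt;P&gt;Put together with the srun form shown earlier in this thread, a minimal sketch (library path and program name are placeholders):&lt;/P&gt;
&lt;PRE&gt;# point Intel MPI at slurm's PMI-2 library and launch with srun, not mpirun
export I_MPI_PMI_LIBRARY=/path/to/slurm/lib/libpmi2.so
srun --mpi=pmi2 ./myprog&lt;/PRE&gt;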
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 26 Jan 2024 13:11:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1566352#M11470</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-01-26T13:11:30Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1594469#M11692</link>
      <description>&lt;P&gt;I have the same "problem" with SLURM on Intel MPI (needing pmi2 with mpirun), and it is good to find out that others have the same srun problem here.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;MPI startup(): Warning: I_MPI_PMI_LIBRARY will be ignored since the hydra process manager was found&lt;/STRONG&gt;&lt;BR /&gt;&amp;nbsp;&lt;BR /&gt;My workaround to make my mpi-intel/2021.u11's mpirun work is to build my own UCX 1.15.0, set LD_LIBRARY_PATH to it, and use&amp;nbsp;-env UCX_TLS rc,sm,self.&lt;/P&gt;&lt;P&gt;In a test run with np=192, srun with pmi2 takes about 26 min wall time and mpirun + UCX 1.15.0 about 23 min, so they are close enough.&lt;/P&gt;&lt;P&gt;For me, this works for np up to around 400 for my CFD type of simulation. After that, Intel MPI dies. For anything above np=400, I use an MPI+threads hybrid approach to work around this problem. Our system is AMD Genoa from Cray. On the other hand, OpenMPI seems to do just fine with mpirun. Our code on Genoa actually runs faster with MPI+threads. So, at a very early stage, we were using mpirun to perform the core binding / thread pinning. HPC systems on our different sites run different types of job schedulers. Maybe srun can do it, but the same mpirun command is used by both SLURM and LSF (which is easier for me).&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
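&lt;P&gt;As a sketch, the workaround amounts to something like this (the UCX install path and application name are placeholders):&lt;/P&gt;&lt;PRE&gt;# put a self-built UCX 1.15.0 ahead of the bundled one (path is a placeholder)
export LD_LIBRARY_PATH=/opt/ucx-1.15.0/lib:$LD_LIBRARY_PATH
# restrict UCX to the rc, sm and self transports
mpirun -np 192 -env UCX_TLS rc,sm,self ./cfd_app&lt;/PRE&gt;</description>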
      <pubDate>Thu, 02 May 2024 14:52:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1594469#M11692</guid>
      <dc:creator>Terrence_at_Houston</dc:creator>
      <dc:date>2024-05-02T14:52:43Z</dc:date>
    </item>
    <item>
      <title>Re: Configuration for Intel impi</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1594474#M11693</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/355290"&gt;@Terrence_at_Houston&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;Sorry, but I really do not understand what your question is.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;The initial question was whether PMI2 is more thread-safe than PMI1.&lt;BR /&gt;To use PMI2 or PMIx, srun has to be used together with setting the path. mpirun will just ignore this, hence it prints the warning.&lt;BR /&gt;&lt;BR /&gt;If you are building your own UCX and setting some UCX env variables, that has nothing to do with PMI/srun/mpirun.&lt;/P&gt;
&lt;P&gt;UCX is always used with InfiniBand networks / the mlx provider, and UCX environment variables are always used by UCX.&lt;/P&gt;</description>
      <pubDate>Thu, 02 May 2024 15:11:07 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Configuration-for-Intel-impi/m-p/1594474#M11693</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2024-05-02T15:11:07Z</dc:date>
    </item>
  </channel>
</rss>

