<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: mpirun corrupts SLURM_NNODES environment variable when run on more than 16 nodes in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1744179#M12271</link>
    <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/351397"&gt;@nickw1&lt;/a&gt;&lt;/P&gt;&lt;P&gt;The root cause of this issue lies in srun. For the upcoming Intel MPI 2021.18 release, included in oneAPI 2026.0, we decided to disable hydra branching when Slurm is used (I_MPI_HYDRA_BRANCH_COUNT=0 is always set if Slurm is detected and the environment variable is not set by the user). This effectively fixes the issue.&lt;/P&gt;&lt;BR /&gt;</description>
    <pubDate>Mon, 13 Apr 2026 13:24:34 GMT</pubDate>
    <dc:creator>TobiasK</dc:creator>
    <dc:date>2026-04-13T13:24:34Z</dc:date>
    <item>
      <title>mpirun corrupts SLURM_NNODES environment variable when run on more than 16 nodes</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1691252#M12155</link>
      <description>&lt;P&gt;When you submit a job to run on more than 16 nodes of a Slurm cluster, the value of the SLURM_NNODES environment variable in the MPI processes becomes corrupted:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;#!/bin/sh
#SBATCH --nodes=18 --ntasks-per-node=1
mpirun -prepend-rank /usr/bin/env | grep SLURM_NNODES&lt;/LI-CODE&gt;&lt;P&gt;gives:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;[17] SLURM_NNODES: 16
[8] SLURM_NNODES: 16
[9] SLURM_NNODES: 16
[6] SLURM_NNODES: 16
[13] SLURM_NNODES: 16
[7] SLURM_NNODES: 16
[15] SLURM_NNODES: 16
[12] SLURM_NNODES: 16
[16] SLURM_NNODES: 16
[0] SLURM_NNODES: 16
[1] SLURM_NNODES: 1
[4] SLURM_NNODES: 16
[14] SLURM_NNODES: 16
[10] SLURM_NNODES: 16
[11] SLURM_NNODES: 16
[3] SLURM_NNODES: 1
[5] SLURM_NNODES: 16
[2] SLURM_NNODES: 16&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The SLURM_JOB_NUM_NODES environment variable gives the correct value, and setting:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;export I_MPI_HYDRA_BRANCH_COUNT=0&lt;/LI-CODE&gt;&lt;P&gt;works around the issue.&lt;/P&gt;</description>
      <pubDate>Tue, 20 May 2025 11:48:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1691252#M12155</guid>
      <dc:creator>nickw1</dc:creator>
      <dc:date>2025-05-20T11:48:08Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun corrupts SLURM_NNODES environment variable when run on more than 16 nodes</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1691264#M12156</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/351397"&gt;@nickw1&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;Can you please give more information about your environment? Please also add the output with I_MPI_DEBUG=10 set.&lt;/P&gt;</description>
      <pubDate>Tue, 20 May 2025 12:40:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1691264#M12156</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2025-05-20T12:40:24Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun corrupts SLURM_NNODES environment variable when run on more than 16 nodes</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1744179#M12271</link>
      <description>&lt;P&gt;&lt;a href="https://community.intel.com/t5/user/viewprofilepage/user-id/351397"&gt;@nickw1&lt;/a&gt;&lt;/P&gt;&lt;P&gt;The root cause of this issue lies in srun. For the upcoming Intel MPI 2021.18 release, included in oneAPI 2026.0, we decided to disable hydra branching when Slurm is used (I_MPI_HYDRA_BRANCH_COUNT=0 is always set if Slurm is detected and the environment variable is not set by the user). This effectively fixes the issue.&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 13 Apr 2026 13:24:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-corrupts-SLURM-NNODES-environment-variable-when-run-on/m-p/1744179#M12271</guid>
      <dc:creator>TobiasK</dc:creator>
      <dc:date>2026-04-13T13:24:34Z</dc:date>
    </item>
  </channel>
</rss>

