<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>SLURM and I_MPI_JOB_RESPECT_PROCESS_PLACEMENT in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/SLURM-and-I-MPI-JOB-RESPECT-PROCESS-PLACEMENT/m-p/1075787#M4776</link>
    <description>Forum question: Intel MPI under SLURM ignores the -ppn/-perhost option unless I_MPI_JOB_RESPECT_PROCESS_PLACEMENT is disabled; full post and debug output in the thread below.</description>
    <pubDate>Fri, 04 Nov 2016 17:23:25 GMT</pubDate>
    <dc:creator>Ronald_G_2</dc:creator>
    <dc:date>2016-11-04T17:23:25Z</dc:date>
    <item>
      <title>SLURM and I_MPI_JOB_RESPECT_PROCESS_PLACEMENT</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/SLURM-and-I-MPI-JOB-RESPECT-PROCESS-PLACEMENT/m-p/1075787#M4776</link>
      <description>&lt;P&gt;I was having issues with Intel MPI 5.x (5.2.1 and older) not respecting -ppn or -perhost.&amp;nbsp; Searching this forum, I found this post:&lt;/P&gt;

&lt;P&gt;&lt;A href="https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/557016" target="_blank"&gt;https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/557016&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;So the default behavior is to ignore -ppn.&amp;nbsp; I have two nodes, ml036 and ml311.&amp;nbsp; My SLURM_JOB_NODELIST is:&lt;/P&gt;

&lt;P&gt;SLURM_JOB_NODELIST=ml[036,311]&lt;/P&gt;
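
As an aside, the bracketed form is SLURM's compressed hostlist notation; scontrol can expand it into one hostname per line:

```shell
# Expand SLURM's compressed hostlist notation
scontrol show hostnames 'ml[036,311]'
# ml036
# ml311
```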

&lt;P&gt;Without setting I_MPI_JOB_RESPECT_PROCESS_PLACEMENT, I see -ppn ignored (both ranks land on ml036):&lt;/P&gt;

&lt;P&gt;[green@ml036 ~]$ mpirun -n 2 -ppn 1 ./hello_mpi&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 0 My name=ml036.localdomain&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 1 My name=ml036.localdomain&lt;/P&gt;

&lt;P&gt;&lt;BR /&gt;
	Following that previous post, I set&lt;/P&gt;

&lt;P&gt;setenv I_MPI_JOB_RESPECT_PROCESS_PLACEMENT disable&lt;/P&gt;

&lt;P&gt;after which -ppn works as expected:&lt;/P&gt;

&lt;P&gt;[green@ml036 ~]$ setenv I_MPI_JOB_RESPECT_PROCESS_PLACEMENT disable&lt;BR /&gt;
	[green@ml036 ~]$ mpirun -n 2 -ppn 1 ./hello_mpi&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 0 My name=ml036.localdomain&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 1 My name=ml311.localdomain&lt;/P&gt;

&lt;P&gt;So is this a local configuration issue?&amp;nbsp; It's easy enough to set the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT environment variable, but I'm curious what it does and why I have to set it manually.&amp;nbsp; Shouldn't Intel MPI detect that I'm on a SLURM system and automatically do the right thing without this variable?&lt;/P&gt;

&lt;P&gt;With I_MPI_DEBUG=6 and without I_MPI_JOB_RESPECT_PROCESS_PLACEMENT, I got this:&lt;/P&gt;

&lt;P&gt;$ mpirun -n 2 -ppn 1 ./hello_mpi&lt;BR /&gt;
	[0] MPI startup(): Intel(R) MPI Library, Version 2017 Update 1&amp;nbsp; Build 20161016 (id: 16418)&lt;BR /&gt;
	[0] MPI startup(): Copyright (C) 2003-2016 Intel Corporation.&amp;nbsp; All rights reserved.&lt;BR /&gt;
	[0] MPI startup(): Multi-threaded optimized library&lt;BR /&gt;
	[0] MPI startup(): shm data transfer mode&lt;BR /&gt;
	[1] MPI startup(): shm data transfer mode&lt;BR /&gt;
	[0] MPI startup(): Device_reset_idx=8&lt;BR /&gt;
	[0] MPI startup(): Allgather: 2: 0-0 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 3: 1-256 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 1: 257-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 3: 257-5851 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 1: 5852-57344 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 3: 57345-388846 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 1: 388847-1453707 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgather: 3: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allgatherv: 3: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 1: 0-1901 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 7: 1902-2071 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 1: 2072-32768 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 8: 32769-65536 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 1: 65537-131072 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 2: 131073-524288 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 7: 524289-1048576 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Allreduce: 2: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoall: 3: 0-131072 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoall: 4: 131073-529941 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoall: 2: 529942-1756892 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoall: 4: 1756893-2097152 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoall: 3: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoallv: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Alltoallw: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Barrier: 2: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 1: 0-0 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 8: 1-3938 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 1: 3939-4274 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 8: 4275-12288 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 3: 12289-36805 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 7: 36806-95325 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 1: 95326-158190 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 7: 158191-2393015 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Bcast: 1: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Exscan: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 3: 0-874 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 1: 875-2048 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 3: 2049-4096 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 1: 4097-65536 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 3: 65537-297096 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 1: 297097-524288 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gather: 3: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Gatherv: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Reduce_scatter: 1: 0-6 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Reduce_scatter: 2: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Reduce: 1: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scan: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 0-0 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 1: 1-48 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 49-91 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 0: 92-201 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 202-2048 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 1: 2049-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 2049-4751 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 0: 4752-12719 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 12720-20604 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 0: 20605-32768 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatter: 3: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Scatterv: 0: 0-2147483647 &amp;amp; 0-2147483647&lt;BR /&gt;
	[0] MPI startup(): Rank&amp;nbsp;&amp;nbsp;&amp;nbsp; Pid&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Node name&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Pin cpu&lt;BR /&gt;
	[0] MPI startup(): 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99166&amp;nbsp;&amp;nbsp;&amp;nbsp; ml036.localdomain&amp;nbsp; {0,1,2,3,4,5,6,7}&lt;BR /&gt;
	[0] MPI startup(): 1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99167&amp;nbsp;&amp;nbsp;&amp;nbsp; ml036.localdomain&amp;nbsp; {8,9,10,11,12,13,14,15}&lt;BR /&gt;
	[0] MPI startup(): Recognition=2 Platform(code=8 ippn=1 dev=1) Fabric(intra=1 inter=1 flags=0x0)&lt;BR /&gt;
	[1] MPI startup(): Recognition=2 Platform(code=8 ippn=1 dev=1) Fabric(intra=1 inter=1 flags=0x0)&lt;BR /&gt;
	[0] MPI startup(): I_MPI_DEBUG=6&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_NUMA_NODE_MAP=qib0:0&lt;BR /&gt;
	[0] MPI startup(): I_MPI_INFO_NUMA_NODE_NUM=2&lt;BR /&gt;
	[0] MPI startup(): I_MPI_PIN_MAPPING=2:0 0,1 8&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 1 My name=ml036.localdomain&lt;BR /&gt;
	hello_parallel.f: Number of tasks=&amp;nbsp; 2 My rank=&amp;nbsp; 0 My name=ml036.localdomain&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;

&lt;P&gt;SLURM variables:&lt;/P&gt;

&lt;P&gt;[green@ml015 ~]$ env | grep SLURM&lt;BR /&gt;
	SLURM_NTASKS_PER_NODE=16&lt;BR /&gt;
	SLURM_SUBMIT_DIR=/users/green&lt;BR /&gt;
	SLURM_JOB_ID=534349&lt;BR /&gt;
	SLURM_JOB_NUM_NODES=2&lt;BR /&gt;
	SLURM_JOB_NODELIST=ml[015,017]&lt;BR /&gt;
	SLURM_JOB_CPUS_PER_NODE=16(x2)&lt;BR /&gt;
	SLURM_JOBID=534349&lt;BR /&gt;
	SLURM_NNODES=2&lt;BR /&gt;
	SLURM_NODELIST=ml[015,017]&lt;BR /&gt;
	SLURM_TASKS_PER_NODE=16(x2)&lt;BR /&gt;
	SLURM_NTASKS=32&lt;BR /&gt;
	SLURM_NPROCS=32&lt;BR /&gt;
	SLURM_PRIO_PROCESS=0&lt;BR /&gt;
	SLURM_DISTRIBUTION=cyclic&lt;BR /&gt;
	SLURM_STEPID=0&lt;BR /&gt;
	SLURM_SRUN_COMM_PORT=41294&lt;BR /&gt;
	SLURM_PTY_PORT=43155&lt;BR /&gt;
	SLURM_PTY_WIN_COL=143&lt;BR /&gt;
	SLURM_PTY_WIN_ROW=33&lt;BR /&gt;
	SLURM_STEP_ID=0&lt;BR /&gt;
	SLURM_STEP_NODELIST=ml015&lt;BR /&gt;
	SLURM_STEP_NUM_NODES=1&lt;BR /&gt;
	SLURM_STEP_NUM_TASKS=1&lt;BR /&gt;
	SLURM_STEP_TASKS_PER_NODE=1&lt;BR /&gt;
	SLURM_STEP_LAUNCHER_PORT=41294&lt;BR /&gt;
	SLURM_SRUN_COMM_HOST=192.168.0.153&lt;BR /&gt;
	SLURM_TOPOLOGY_ADDR=ml015&lt;BR /&gt;
	SLURM_TOPOLOGY_ADDR_PATTERN=node&lt;BR /&gt;
	SLURM_TASK_PID=129118&lt;BR /&gt;
	SLURM_CPUS_ON_NODE=16&lt;BR /&gt;
	SLURM_NODEID=0&lt;BR /&gt;
	SLURM_PROCID=0&lt;BR /&gt;
	SLURM_LOCALID=0&lt;BR /&gt;
	SLURM_LAUNCH_NODE_IPADDR=192.168.0.153&lt;BR /&gt;
	SLURM_GTIDS=0&lt;BR /&gt;
	SLURM_CHECKPOINT_IMAGE_DIR=/users/green&lt;BR /&gt;
	SLURMD_NODENAME=ml015&lt;/P&gt;</description>
      <pubDate>Fri, 04 Nov 2016 17:23:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/SLURM-and-I-MPI-JOB-RESPECT-PROCESS-PLACEMENT/m-p/1075787#M4776</guid>
      <dc:creator>Ronald_G_2</dc:creator>
      <dc:date>2016-11-04T17:23:25Z</dc:date>
    </item>
    <item>
      <title>Re: SLURM and I_MPI_JOB_RESPECT_PROCESS_PLACEMENT</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/SLURM-and-I-MPI-JOB-RESPECT-PROCESS-PLACEMENT/m-p/1075788#M4777</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;The behavior of Intel MPI is as expected: it respects the job scheduler, which is SLURM in your case. In your SLURM job script you requested 16 MPI ranks per node (SLURM_NTASKS_PER_NODE=16). Although you only run 2 MPI ranks, both are placed on the first node, since that is the layout you requested from the job scheduler. Intel MPI therefore ignores your -ppn parameter and sticks with the SLURM configuration, unless you override it by setting I_MPI_JOB_RESPECT_PROCESS_PLACEMENT to 0 (or "disable").&lt;/P&gt;
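
In other words, there are two ways to get one rank per node here (a minimal sketch; the salloc line is an assumed interactive allocation, not the original batch script):

```shell
# Option 1: tell Intel MPI to ignore the scheduler-supplied placement
# (bash/sh syntax; use setenv in csh/tcsh as in the original post)
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=disable
mpirun -n 2 -ppn 1 ./hello_mpi

# Option 2: request a matching layout from SLURM up front, so the
# scheduler placement and -ppn already agree
salloc --nodes=2 --ntasks-per-node=1
mpirun -n 2 -ppn 1 ./hello_mpi
```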

&lt;P&gt;The reason for this change relative to older Intel MPI versions is that we observed cases where job schedulers terminated user jobs when those jobs began claiming resources that had not been requested.&lt;/P&gt;

&lt;P&gt;Best regards,&lt;/P&gt;

&lt;P&gt;Michael&lt;/P&gt;

</description>
      <pubDate>Wed, 09 Nov 2016 16:15:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/SLURM-and-I-MPI-JOB-RESPECT-PROCESS-PLACEMENT/m-p/1075788#M4777</guid>
      <dc:creator>Michael_Intel</dc:creator>
      <dc:date>2016-11-09T16:15:54Z</dc:date>
    </item>
  </channel>
</rss>

