<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Dear Prasanth in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184962#M6813</link>
    <description>mpirun error running in a cpuset - discussion thread from the Intel® MPI Library community forum.</description>
    <pubDate>Fri, 29 May 2020 17:45:11 GMT</pubDate>
    <dc:creator>lombardi__emanuele</dc:creator>
    <dc:date>2020-05-29T17:45:11Z</dc:date>
    <item>
      <title>mpirun error running in a cpuset</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184959#M6810</link>
      <description>I have errors using mpirun within a cpuset (regardless of whether the cset shield is activated or not).

cset set -lr
cset: 
         Name       CPUs-X    MEMs-X Tasks Subs Path
 ------------ ---------- - ------- - ----- ---- ----------
         root      0-431 y    0-11 y  4956    2 /
         user     24-431 n    1-11 n     0    0 /user
       system       0-23 n       0 n     0    0 /system

which mpirun
/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun

cset proc --move -p $$ /
mpirun -np 10  ./wrf.exe               #PROPERLY WORKS

cset proc --move -p $$ /system
mpirun -np 10  ./wrf.exe               #PROPERLY WORKS

cset proc --move -p $$ /user
mpirun -np 10  ./wrf.exe               #ERROR!!!!
/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun: line 103: 343504 Segmentation fault      (core dumped) mpiexec.hydra "$@" 0&amp;lt;&amp;amp;0

The error happens also in this way:
cset proc --exec -s /user  mpirun -- -np 10 ./wrf.exe

The fact that the error happens only in the /user cpuset is quite strange, isn't it?
After all, cpuset /user doesn't differ much from cpuset /system, where mpirun works properly!

The error happens regardless of the -np value, and also without the -np flag.

Can anybody help me?
thanks from Italy,

Emanuele Lombardi


ifort (IFORT) 19.1.0.166 20191121
Intel(R) MPI Library for Linux* OS, Version 2019 Update 6 Build 20191024 (id: 082ae5608)
SLES15SP1 
HP Superdome Flex (ex SGI UV)  

topology
System type: Superdome Flex
System name: tiziano
Serial number: CZ20040JWV
      12 Blades
     432 CPUs (online: 0-431)
      12 Nodes
    2230 GB Memory Total
       1 Co-processor
       2 Fibre Channel Controllers
       4 Network Controllers
       1 SATA Storage Controller
       1 USB Controller
       1 VGA GPU
       2 RAID Controllers

BTW, I had the same error in 2013, as you can see from 
&lt;A href="https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/392814#" target="_blank"&gt;https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/392814#&lt;/A&gt;</description>
      <pubDate>Thu, 21 May 2020 15:16:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184959#M6810</guid>
      <dc:creator>lombardi__emanuele</dc:creator>
      <dc:date>2020-05-21T15:16:04Z</dc:date>
    </item>
    <item>
      <title>Hi Emanuele,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184960#M6811</link>
      <description>&lt;P&gt;Hi Emanuele,&lt;/P&gt;&lt;P&gt;We have tried and were able to run mpirun in the cpuset successfully.&lt;/P&gt;&lt;P&gt;Could you provide the exact commands you have used for creating the cpusets? This will help us in replicating the scenario at our end.&lt;/P&gt;&lt;P&gt;Please provide us the log report after setting I_MPI_DEBUG=5.&lt;/P&gt;&lt;P&gt;If&amp;nbsp;there is any other information you have please share.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Fri, 22 May 2020 11:46:48 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184960#M6811</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-05-22T11:46:48Z</dc:date>
    </item>
    <item>
      <title>Hi Emanuele,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184961#M6812</link>
      <description>&lt;P&gt;Hi Emanuele,&lt;/P&gt;&lt;P&gt;Could you provide us the log report after&amp;nbsp;setting I_MPI_DEBUG=5?&lt;/P&gt;&lt;P&gt;This will help us in understanding the error better.&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Thu, 28 May 2020 11:52:58 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184961#M6812</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-05-28T11:52:58Z</dc:date>
    </item>
    <item>
      <title>Dear Prasanth</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184962#M6813</link>
      <description>&lt;P&gt;Dear&amp;nbsp;Prasanth&lt;/P&gt;&lt;P&gt;I didn't mention that the system was booted with hyperthreading, but to verify this wasn't the problem I rebooted without HT.&lt;BR /&gt;Unfortunately, nothing changed.&lt;BR /&gt;&lt;BR /&gt;To simplify the problem, I compiled and used a very small program taken from&lt;BR /&gt;&lt;A href="https://people.sc.fsu.edu/~jburkardt/f_src/hello_mpi/hello_mpi.f90" target="_blank"&gt;https://people.sc.fsu.edu/~jburkardt/f_src/hello_mpi/hello_mpi.f90&lt;/A&gt;&lt;BR /&gt;mpiifort hello_mpi.f90&lt;BR /&gt;&lt;BR /&gt;The following are the commands I use to create my cpusets.&lt;BR /&gt;I also tried them without the --mem_exclusive and --cpu_exclusive switches, but nothing changed.&lt;BR /&gt;&lt;BR /&gt;cset set -lr&lt;BR /&gt;cset:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; CPUs-X &amp;nbsp; &amp;nbsp;MEMs-X Tasks Subs Path&lt;BR /&gt;&amp;nbsp;------------ ---------- - ------- - ----- ---- ----------&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;root &amp;nbsp; &amp;nbsp; &amp;nbsp;0-431 y &amp;nbsp; &amp;nbsp;0-11 y &amp;nbsp;4965 &amp;nbsp; &amp;nbsp;0 /&lt;BR /&gt;&lt;BR /&gt;cset set -c 0-35 -m 0 --mem_exclusive --cpu_exclusive -s system&lt;BR /&gt;cset set -c 36-431 -m 1-11 --mem_exclusive --cpu_exclusive -s user&lt;BR /&gt;&lt;BR /&gt;cset set -lr&lt;BR /&gt;cset:&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; CPUs-X &amp;nbsp; &amp;nbsp;MEMs-X Tasks Subs Path&lt;BR /&gt;&amp;nbsp;------------ ---------- - ------- - ----- ---- ----------&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;root &amp;nbsp; &amp;nbsp; &amp;nbsp;0-431 y &amp;nbsp; &amp;nbsp;0-11 y &amp;nbsp;4921 &amp;nbsp; &amp;nbsp;2 /&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;user &amp;nbsp; &amp;nbsp; 36-431 y &amp;nbsp; &amp;nbsp;1-11 y &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;0 /user&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;system &amp;nbsp; &amp;nbsp; &amp;nbsp; 0-35 y &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 y &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;0 /system&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;export FI_PROVIDER=sockets&lt;BR /&gt;export I_MPI_DEBUG=5&lt;BR /&gt;&lt;BR /&gt;cset proc --move -p $$ /system&lt;BR /&gt;mpirun -np 2 ./a.out&lt;BR /&gt;[0] MPI startup(): libfabric version: 1.9.0a1-impi&lt;BR /&gt;[0] MPI startup(): libfabric provider: sockets&lt;BR /&gt;P1 &amp;nbsp;"Hello, world!"&lt;BR /&gt;[0] MPI startup(): Rank &amp;nbsp; &amp;nbsp;Pid &amp;nbsp; &amp;nbsp; &amp;nbsp;Node name &amp;nbsp;Pin cpu&lt;BR /&gt;[0] MPI startup(): 0 &amp;nbsp; &amp;nbsp; &amp;nbsp; 145005 &amp;nbsp; tiziano &amp;nbsp; {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17}&lt;BR /&gt;[0] MPI startup(): 1 &amp;nbsp; &amp;nbsp; &amp;nbsp; 145006 &amp;nbsp; tiziano &amp;nbsp; {18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35}&lt;BR /&gt;[0] MPI startup(): I_MPI_ROOT=/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi&lt;BR /&gt;[0] MPI startup(): I_MPI_MPIRUN=mpirun&lt;BR /&gt;[0] MPI startup(): I_MPI_HYDRA_TOPOLIB=hwloc&lt;BR /&gt;[0] MPI startup(): I_MPI_INTERNAL_MEM_POLICY=default&lt;BR /&gt;[0] MPI startup(): I_MPI_DEBUG=5&lt;BR /&gt;26 May 2020 &amp;nbsp; 9:42:01.341 AM&lt;BR /&gt;&lt;BR /&gt;P0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P0 &amp;nbsp; &amp;nbsp;FORTRAN90/MPI version&lt;BR /&gt;P0 &amp;nbsp; &amp;nbsp;An MPI test program.&lt;BR /&gt;P0 &amp;nbsp; &amp;nbsp;The number of MPI processes is &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2&lt;BR /&gt;P0 &amp;nbsp;"Hello, world!"&lt;BR /&gt;&lt;BR /&gt;P0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P0 &amp;nbsp; 
&amp;nbsp;Normal end of execution: "Goodbye, world!".&lt;BR /&gt;&lt;BR /&gt;P0 &amp;nbsp; &amp;nbsp;Elapsed wall clock time = &amp;nbsp; 0.253995E-03 seconds.&lt;BR /&gt;&lt;BR /&gt;P0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P0 &amp;nbsp; &amp;nbsp;Normal end of execution.&lt;BR /&gt;&lt;BR /&gt;26 May 2020 &amp;nbsp; 9:42:01.342 AM&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;cset proc --move -p $$ /user&lt;BR /&gt;mpirun -np 2 ./a.out&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun: line 103: 145410 Segmentation fault &amp;nbsp; &amp;nbsp; &amp;nbsp;(core dumped) mpiexec.hydra "$@" 0&amp;lt;&amp;amp;0&lt;BR /&gt;&lt;BR /&gt;which mpirun&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun&lt;BR /&gt;&lt;BR /&gt;bash -x /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun -np 2 ./a.out&lt;BR /&gt;+ tempdir=/tmp&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ np_boot=&lt;BR /&gt;++ whoami&lt;BR /&gt;+ username=root&lt;BR /&gt;+ rc=0&lt;BR /&gt;++ uname -m&lt;BR /&gt;++ grep 1om&lt;BR /&gt;+ '[' -z /opt/intel/compilers_and_libraries_2020.0.166/linux/mpi -a -z '' ']'&lt;BR /&gt;+ export I_MPI_MPIRUN=mpirun&lt;BR /&gt;+ I_MPI_MPIRUN=mpirun&lt;BR /&gt;+ '[' -n '' -a -z '' ']'&lt;BR /&gt;+ '[' -n '' -a -z '' ']'&lt;BR /&gt;+ '[' -z '' -a -z '' ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ '[' -n '' -a -n '' -a -n '' ']'&lt;BR /&gt;+ '[' -n '' -o -n '' ']'&lt;BR /&gt;+ '[' x = xyes -o x = xenable -o x = xon -o x = x1 ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ mpiexec.hydra -np 2 ./a.out&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpirun: line 103: 145555 Segmentation fault &amp;nbsp; &amp;nbsp; &amp;nbsp;(core dumped) mpiexec.hydra "$@" 0&amp;lt;&amp;amp;0&lt;BR /&gt;+ rc=139&lt;BR /&gt;+ cleanup=0&lt;BR /&gt;+ echo -np 2 ./a.out&lt;BR /&gt;+ grep 
'\-cleanup'&lt;BR /&gt;+ '[' 1 -eq 0 ']'&lt;BR /&gt;+ '[' -n '' ']'&lt;BR /&gt;+ '[' 0 -eq 1 ']'&lt;BR /&gt;+ exit 139&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;which mpiexec.hydra&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/bin/mpiexec.hydra&lt;BR /&gt;&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 29 May 2020 17:45:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184962#M6813</guid>
      <dc:creator>lombardi__emanuele</dc:creator>
      <dc:date>2020-05-29T17:45:11Z</dc:date>
    </item>
    <item>
      <title>Hi Emanuele,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184963#M6814</link>
      <description>&lt;P&gt;Hi Emanuele,&lt;/P&gt;&lt;P&gt;We have tried reproducing the same at our end but haven't faced any such errors.&lt;/P&gt;&lt;P&gt;We are transferring this issue to the concerned team.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Prasanth&lt;/P&gt;</description>
      <pubDate>Wed, 03 Jun 2020 07:40:56 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184963#M6814</guid>
      <dc:creator>PrasanthD_intel</dc:creator>
      <dc:date>2020-06-03T07:40:56Z</dc:date>
    </item>
    <item>
      <title>In the mean time, consider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184964#M6815</link>
      <description>&lt;P&gt;In the meantime, consider writing a script that pipes "cset set -lr" to grep to obtain the set of interest (e.g. user), extracts the logical CPU number range(s)/list(s), and then sets the environment variable I_MPI_PIN_PROCESSOR_LIST to conform to the selected cset.&lt;/P&gt;&lt;P&gt;See: &lt;A href="https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/process-pinning/environment-variables-for-process-pinning.html" target="_blank"&gt;https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/process-pinning/environment-variables-for-process-pinning.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
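A minimal sketch of the script suggested in this post, assuming the "cset set -lr" column layout shown earlier in the thread (cpuset path in the last field, CPU list in the second). The cset output is stubbed with a literal string so the parsing can be run without cset installed; in practice you would pipe the real `cset set -lr` output instead.

```shell
#!/bin/sh
# Stubbed "cset set -lr" output (assumption: real output keeps the
# same columns, with the cpuset path in the last field).
cset_output='         root      0-431 y    0-11 y  4965    0 /
         user     36-431 y    1-11 y     0    0 /user
       system       0-35 y       0 y     0    0 /system'

# Pick the row whose path is /user and take its CPUs column (2nd field).
cpus=$(printf '%s\n' "$cset_output" | awk '$NF == "/user" { print $2 }')

# Pin Intel MPI ranks to the cpuset's CPUs, then launch mpirun as usual
# from within the cpuset.
export I_MPI_PIN_PROCESSOR_LIST="$cpus"
echo "I_MPI_PIN_PROCESSOR_LIST=$I_MPI_PIN_PROCESSOR_LIST"
```

With the stubbed listing, the final echo prints I_MPI_PIN_PROCESSOR_LIST=36-431.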
      <pubDate>Wed, 03 Jun 2020 13:06:15 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184964#M6815</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2020-06-03T13:06:15Z</dc:date>
    </item>
    <item>
      <title>Hi Emanuele,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184965#M6816</link>
      <description>&lt;P&gt;Hi&amp;nbsp;Emanuele,&lt;/P&gt;&lt;P&gt;You may try to use our internal system topology recognition via&amp;nbsp;I_MPI_HYDRA_TOPOLIB=ipl .&lt;/P&gt;&lt;P&gt;Otherwise I'd recommend you to wait for IMPI 2019 update 8 which is scheduled for mid July, since we addressed several issues regarding the topology recognition within that build.&lt;/P&gt;&lt;P&gt;Therefore please let me know if update 8 addressed your issue, once you had a chance to give it a try.&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Michael&lt;/P&gt;</description>
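Michael's suggestion amounts to a one-line environment change before relaunching; a sketch (the mpirun step afterwards is the same hello_mpi binary used earlier in the thread):

```shell
# Switch Hydra's topology detection from the default hwloc backend to
# Intel MPI's internal "ipl" backend, as suggested in the post above,
# then rerun mpirun (e.g. "mpirun -np 2 ./a.out") from inside the cpuset.
export I_MPI_HYDRA_TOPOLIB=ipl
echo "I_MPI_HYDRA_TOPOLIB=$I_MPI_HYDRA_TOPOLIB"
```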
      <pubDate>Fri, 05 Jun 2020 16:03:03 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184965#M6816</guid>
      <dc:creator>Michael_Intel</dc:creator>
      <dc:date>2020-06-05T16:03:03Z</dc:date>
    </item>
    <item>
      <title>Quote:jimdempseyatthecove</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184966#M6817</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;jimdempseyatthecove (Blackbelt) wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In the mean time, consider writing a script that uses "cset set -lr" piped to grep to obtain the set of interest (e.g. user), or lack thereof, extract the logical CPU number range(s)/list(s), then set environment variable I_MPI_PIN_PROCESSOR_LIST to conform to the selected cset.&lt;/P&gt;&lt;P&gt;See: &lt;A href="https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-linux/top/environment-variable-reference/process-pinning/environment-variables-for-process-pinning.html"&gt;https://software.intel.com/content/www/us/en/develop/documentation/mpi-d...&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Jim Dempsey&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you Jim, it works!&lt;BR /&gt;cset set -lr&lt;BR /&gt;cset:&amp;nbsp;&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;Name &amp;nbsp; &amp;nbsp; &amp;nbsp; CPUs-X &amp;nbsp; &amp;nbsp;MEMs-X Tasks Subs Path&lt;BR /&gt;&amp;nbsp;------------ ---------- - ------- - ----- ---- ----------&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;root &amp;nbsp; &amp;nbsp; &amp;nbsp;0-215 y &amp;nbsp; &amp;nbsp;0-11 y &amp;nbsp;2688 &amp;nbsp; &amp;nbsp;2 /&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;user &amp;nbsp; &amp;nbsp; 36-215 y &amp;nbsp; &amp;nbsp;1-11 y &amp;nbsp; &amp;nbsp; 2 &amp;nbsp; &amp;nbsp;0 /user&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;system &amp;nbsp; &amp;nbsp; &amp;nbsp; 0-35 y &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 y &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;0 /system&lt;/P&gt;&lt;P&gt;cset proc --move -p $$ /user&lt;/P&gt;&lt;P&gt;export I_MPI_PIN_PROCESSOR_LIST=36-179&lt;BR /&gt;mpirun -np 2 ./a.out&amp;nbsp;&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 1 &amp;nbsp;"Hello, world!"&lt;BR /&gt;&amp;nbsp;6 June 2020 &amp;nbsp;11:10:31.607 
AM&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;FORTRAN90/MPI version&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;An MPI test program.&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;The number of MPI processes is &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;"Hello, world!"&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Normal end of execution: "Goodbye, world!".&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Elapsed wall clock time = &amp;nbsp; 0.202304E-03 seconds.&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Normal end of execution.&lt;/P&gt;&lt;P&gt;&amp;nbsp;6 June 2020 &amp;nbsp;11:10:31.608 AM&lt;/P&gt;&lt;P&gt;It seems that when cpus in&amp;nbsp;I_MPI_PIN_PROCESSOR_LIST are more than 144&amp;nbsp; a warning is issued. 
(144 is a strange number for my topology)&lt;/P&gt;&lt;P&gt;IPL WARN&amp;gt; ipl_pin_list_direct syntax error, 36-180 list member should be -1, single CPU number, or CPU number range&lt;/P&gt;&lt;P&gt;export I_MPI_PIN_PROCESSOR_LIST=36-180&lt;BR /&gt;mpirun -np 2 ./a.out&amp;nbsp;&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 1 &amp;nbsp;"Hello, world!"&lt;BR /&gt;&amp;nbsp;6 June 2020 &amp;nbsp;11:12:26.181 AM&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;FORTRAN90/MPI version&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;An MPI test program.&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;The number of MPI processes is &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;"Hello, world!"&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Normal end of execution: "Goodbye, world!".&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Elapsed wall clock time = &amp;nbsp; 0.244736E-03 seconds.&lt;/P&gt;&lt;P&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp;HELLO_MPI - Master process:&lt;BR /&gt;P &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0 &amp;nbsp; &amp;nbsp;Normal end of execution.&lt;/P&gt;&lt;P&gt;&amp;nbsp;6 June 2020 &amp;nbsp;11:12:26.181 AM&lt;BR /&gt;IPL WARN&amp;gt; ipl_pin_list_direct syntax error, 36-180 list member should be -1, single CPU number, or CPU number range&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 06 Jun 2020 09:20:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184966#M6817</guid>
      <dc:creator>lombardi__emanuele</dc:creator>
      <dc:date>2020-06-06T09:20:46Z</dc:date>
    </item>
    <item>
      <title>Quote:Michael (Intel) wrote:</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184967#M6818</link>
      <description>&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;Michael (Intel) wrote:&lt;BR /&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Hi&amp;nbsp;Emanuele,&lt;/P&gt;&lt;P&gt;You may try to use our internal system topology recognition via&amp;nbsp;I_MPI_HYDRA_TOPOLIB=ipl .&lt;/P&gt;&lt;P&gt;Otherwise I'd recommend you to wait for IMPI 2019 update 8 which is scheduled for mid July, since we addressed several issues regarding the topology recognition within that build.&lt;/P&gt;&lt;P&gt;Therefore please let me know if update 8 addressed your issue, once you had a chance to give it a try.&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Michael&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you Michael,&lt;BR /&gt;export&amp;nbsp;&amp;nbsp;I_MPI_HYDRA_TOPOLIB=ipl&lt;BR /&gt;works up to 9 cores, from 10 on it results in the errors listed below.&lt;BR /&gt;I'll wait for update 8 and I'll let you know.&lt;/P&gt;&lt;P&gt;mpirun -np 10 ./a.out&amp;nbsp;&lt;BR /&gt;Assertion failed in file ../../src/util/intel/shm_heap/impi_shm_heap.c at line 917: group_id &amp;lt; group_num&lt;BR /&gt;Assertion failed in file ../../src/util/intel/shm_heap/impi_shm_heap.c at line 917: group_id &amp;lt; group_num&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPL_backtrace_show+0x34) [0x7f0605da11d4]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x7f0605529031]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x44c505) [0x7f060586a505]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x8ed86a) [0x7f0605d0b86a]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x64cd70) [0x7f0605a6ad70]&lt;BR 
/&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x1fe5fa) [0x7f060561c5fa]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x4664b4) [0x7f06058844b4]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPI_Init+0x11b) [0x7f060587fc7b]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi&lt;BR /&gt;Abort(1) on node 9: Internal error&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPL_backtrace_show+0x34) [0x7f39a5e391d4]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPIR_Assert_fail+0x21) [0x7f39a55c1031]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x44c505) [0x7f39a5902505]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x8ed86a) [0x7f39a5da386a]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x64cd70) [0x7f39a5b02d70]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x1fe5fa) [0x7f39a56b45fa]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(+0x4664b4) [0x7f39a591c4b4]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi/intel64/lib/release/libmpi.so.12(MPI_Init+0x11b) [0x7f39a5917c7b]&lt;BR /&gt;/opt/intel/compilers_and_libraries_2020.0.166/linux/mpi&lt;BR /&gt;Abort(1) on node 8: Internal error&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 0 PID 212868 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR 
/&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 1 PID 212869 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 2 PID 212870 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 3 PID 212871 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 4 PID 212872 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 5 PID 212873 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR 
/&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 6 PID 212874 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= &amp;nbsp; BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= &amp;nbsp; RANK 7 PID 212875 RUNNING AT tiziano&lt;BR /&gt;= &amp;nbsp; KILLED BY SIGNAL: 9 (Killed)&lt;BR /&gt;===================================================================================&lt;BR /&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 06 Jun 2020 09:28:38 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error-running-in-a-cpuset/m-p/1184967#M6818</guid>
      <dc:creator>lombardi__emanuele</dc:creator>
      <dc:date>2020-06-06T09:28:38Z</dc:date>
    </item>
  </channel>
</rss>

