<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3) in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826412#M1227</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;- You mean --verbose, right? Here is the output with --verbose directly on n13:&lt;BR /&gt;[bash][16:20:41] denayer@n13 ~ $ mpirun --verbose -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname
WARNING: Unable to read mpd.hosts or list of hosts isn't provided. MPI job will be run on the current machine only.
running mpdallexit on n13
LAUNCHED mpd on n13  via
RUNNING: mpd on n13
mpiexec: unable to start all procs; may have invalid machine names
    remaining specified hosts:
        192.168.0.14 (n14.marvin)
[/bash] Here is the output with --verbose from master:&lt;BR /&gt;&lt;BR /&gt;[bash][16:20:17] denayer@master ~ $ mpirun --verbose -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname
WARNING: Unable to read mpd.hosts or list of hosts isn't provided. MPI job will be run on the current machine only.
running mpdallexit on master
LAUNCHED mpd on master  via
RUNNING: mpd on master
mpiexec: unable to start all procs; may have invalid machine names
    remaining specified hosts:
        192.168.0.13 (n13.marvin)
        192.168.0.14 (n14.marvin)
[/bash] &lt;BR /&gt;&lt;BR /&gt;- For your ssh question:&lt;BR /&gt;from master to n13: ok&lt;BR /&gt;from master to n14: ok&lt;BR /&gt;from n13 to master: ok&lt;BR /&gt;from n14 to master: ok&lt;BR /&gt;from n13 to n14: ok&lt;BR /&gt;from n14 to n13: ok&lt;BR /&gt;&lt;BR /&gt;Regards</description>
    <pubDate>Tue, 22 May 2012 14:25:40 GMT</pubDate>
    <dc:creator>Guillaume_De_Nayer</dc:creator>
    <dc:date>2012-05-22T14:25:40Z</dc:date>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826408#M1223</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;We got new nodes on our cluster. On the first 12 old nodes, Intel MPI (Intel Cluster Studio 2010, 2011, 2012) works without any problems. The 4 new nodes run exactly the same OS as the 12 old ones, from the same installation (node image), on the same hardware.&lt;BR /&gt;&lt;BR /&gt;We have I_MPI_FABRICS=shm:ofa.&lt;BR /&gt;&lt;BR /&gt;If I start mpirun on the 12 old nodes, it works without problems.&lt;BR /&gt;If I try to start a parallel job on one of the new nodes, I get:&lt;BR /&gt;&lt;BR /&gt;[bash]send desc error [1] Abort: Got FATAL event 3 at line 861 in file ../../ofa_utility.c [/bash]&lt;BR /&gt;If I start a local job on one of the new nodes, it works, so the problem is linked to InfiniBand.&lt;BR /&gt;&lt;BR /&gt;Strangely, an Open MPI run over InfiniBand works on the new nodes.&lt;BR /&gt;&lt;BR /&gt;If I use I_MPI_FABRICS=shm:dapl, the new nodes work too.&lt;BR /&gt;&lt;BR /&gt;Any ideas?&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Guillaume&lt;/P&gt;</description>
      <pubDate>Fri, 11 May 2012 09:45:46 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826408#M1223</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-11T09:45:46Z</dc:date>
    </item>
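    <!-- Editor's note: I_MPI_FABRICS takes the form <intra-node>:<inter-node>, so shm:ofa and
         shm:dapl differ only in the inter-node path (OFA/verbs vs. DAPL). A minimal bash sketch
         for isolating the failing fabric, reusing this thread's node names n13/n14:

           # OFA (verbs) inter-node path: the combination that aborts on the new nodes
           mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:ofa -n 1 -host n13 hostname : -n 1 -host n14 hostname
           # DAPL inter-node path: reported to work on the same nodes
           mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:dapl -n 1 -host n13 hostname : -n 1 -host n14 hostname

         If ofa fails where dapl succeeds over the same HCAs, the suspect is the OFA layer of
         this Intel MPI build rather than the InfiniBand hardware itself. -->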
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826409#M1224</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;What happens if you try to run with a non-parallel command?&lt;BR /&gt;&lt;BR /&gt;[bash]mpirun -genv I_MPI_FABRICS shm:ofa -n 1 -host old_node hostname : -n 1 -host new_node hostname[/bash]&lt;BR /&gt;Also, on the parallel job, what is the output with -verbose and I_MPI_DEBUG=5?&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Fri, 11 May 2012 13:47:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826409#M1224</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-11T13:47:29Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826410#M1225</link>
      <description>Sorry for the delay...&lt;BR /&gt;&lt;BR /&gt;I tried your test. It does not work:&lt;BR /&gt;- under PBS/Torque I get: "-host (or -ghost) and -machinefile are incompatible"&lt;BR /&gt;&lt;BR /&gt;- in a terminal I get:&lt;BR /&gt;mpiexec: unable to start all procs; may have invalid machine names&lt;BR /&gt; remaining specified hosts:&lt;BR /&gt; 192.168.0.13 (n13.blabla)&lt;BR /&gt; 192.168.0.14 (n14.blabla)&lt;BR /&gt;&lt;BR /&gt;It does that on all the nodes, but the machine names are correct, so I don't understand.&lt;BR /&gt;&lt;BR /&gt;Best regards</description>
      <pubDate>Tue, 22 May 2012 07:20:31 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826410#M1225</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-22T07:20:31Z</dc:date>
    </item>
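    <!-- Editor's note: under PBS/Torque the Intel MPI mpirun wrapper derives the host list from
         the scheduler and passes it as -machinefile, which cannot be combined with an explicit
         -host; that is the likely source of the first error above. A hedged check from inside a
         batch job ($PBS_NODEFILE is the standard PBS variable):

           cat $PBS_NODEFILE   # the host list the wrapper implicitly turns into -machinefile

         Running the test from an interactive shell, as done above, sidesteps the conflict. -->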
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826411#M1226</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;The first error message is likely due to a lack of tight integration with Torque*. Could you please send me the output from running the same command with -verbose added? Are you able to ssh from an old node to a new node, or the reverse?&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Tue, 22 May 2012 14:18:25 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826411#M1226</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-22T14:18:25Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826412#M1227</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;- You mean --verbose, right? Here is the output with --verbose directly on n13:&lt;BR /&gt;[bash][16:20:41] denayer@n13 ~ $ mpirun --verbose -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname
WARNING: Unable to read mpd.hosts or list of hosts isn't provided. MPI job will be run on the current machine only.
running mpdallexit on n13
LAUNCHED mpd on n13  via
RUNNING: mpd on n13
mpiexec: unable to start all procs; may have invalid machine names
    remaining specified hosts:
        192.168.0.14 (n14.marvin)
[/bash] Here is the output with --verbose from master:&lt;BR /&gt;&lt;BR /&gt;[bash][16:20:17] denayer@master ~ $ mpirun --verbose -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname
WARNING: Unable to read mpd.hosts or list of hosts isn't provided. MPI job will be run on the current machine only.
running mpdallexit on master
LAUNCHED mpd on master  via
RUNNING: mpd on master
mpiexec: unable to start all procs; may have invalid machine names
    remaining specified hosts:
        192.168.0.13 (n13.marvin)
        192.168.0.14 (n14.marvin)
[/bash] &lt;BR /&gt;&lt;BR /&gt;- For your ssh question:&lt;BR /&gt;from master to n13: ok&lt;BR /&gt;from master to n14: ok&lt;BR /&gt;from n13 to master: ok&lt;BR /&gt;from n14 to master: ok&lt;BR /&gt;from n13 to n14: ok&lt;BR /&gt;from n14 to n13: ok&lt;BR /&gt;&lt;BR /&gt;Regards</description>
      <pubDate>Tue, 22 May 2012 14:25:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826412#M1227</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-22T14:25:40Z</dc:date>
    </item>
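    <!-- Editor's note: the "running mpdallexit" / "LAUNCHED mpd" lines above show the MPD
         process manager in use; without a host list, MPD only knows the local machine, which
         matches the "MPI job will be run on the current machine only" warning. A hedged sketch
         of one remedy in Intel MPI 4.0.x, anticipating James's I_MPI_PROCESS_MANAGER question
         below:

           export I_MPI_PROCESS_MANAGER=hydra   # bypass the MPD ring; Hydra launches over ssh
           mpirun -n 1 -host n13 hostname : -n 1 -host n14 hostname -->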
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826413#M1228</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;What is the value of I_MPI_PROCESS_MANAGER? Which version of the Intel MPI Library are you using?&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Tue, 22 May 2012 14:50:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826413#M1228</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-22T14:50:18Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826414#M1229</link>
      <description>We have 3 different ones:&lt;BR /&gt;Intel Cluster Toolkit 2010&lt;BR /&gt;Intel Cluster Studio 2011&lt;BR /&gt;Intel Cluster Studio 2012&lt;BR /&gt;&lt;BR /&gt;The errors above are with Intel Cluster Studio 2011:&lt;BR /&gt;&lt;BR /&gt;[13:44:33] denayer@master ~ $ mpirun -version&lt;BR /&gt;Intel MPI Library for Linux Version 4.0 Update 1&lt;BR /&gt;Build 20100910 Platform Intel 64 64-bit applications&lt;BR /&gt;Copyright (C) 2003-2010 Intel Corporation. All rights reserved&lt;BR /&gt;&lt;BR /&gt;I_MPI_PROCESS_MANAGER has no value in my shell.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;</description>
      <pubDate>Wed, 23 May 2012 11:46:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826414#M1229</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-23T11:46:41Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826415#M1230</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;What happens with Intel Cluster Studio 2012 (which contains Intel MPI Library 4.0 Update 3)?&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Wed, 23 May 2012 13:51:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826415#M1230</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-23T13:51:04Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826416#M1231</link>
      <description>With Intel Cluster Studio 2012:&lt;BR /&gt;[15:52:56] denayer@master ~ $ mpirun -version&lt;BR /&gt;Intel MPI Library for Linux* OS, Version 4.0 Update 3 Build 20110824&lt;BR /&gt;Copyright (C) 2003-2011, Intel Corporation. All rights reserved.&lt;BR /&gt;&lt;BR /&gt;Your command works:&lt;BR /&gt;[bash][15:53:48] denayer@master ~ $ mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname
n14
n13
[/bash] &lt;BR /&gt;with --verbose:&lt;BR /&gt;&lt;BR /&gt;[bash][15:52:58] denayer@master ~ $ mpirun --verbose -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm -n 1 -host n13 hostname : -n 1 -host n14 hostname

==================================================================================================
mpiexec options:
----------------
  Base path: /opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/
  Bootstrap server: ssh
  Debug level: 1
  Enable X: -1

  Global environment:
  -------------------
    I_MPI_PERHOST=allcores
    MODULE_VERSION_STACK=3.2.5
    MKLROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl
    MANPATH=/opt/intel/ics_2012/itac/8.0.3.007/man:/opt/intel/ics_2012/impi/4.0.3.008/man:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/man/en_US:/opt/intel/ics_2012/vtune_amplifier_xe_2011/man:/opt/modules/Modules/default/share/man:/opt/pbs/man:/opt/env-switcher/man:/usr/man:/usr/share/man:/usr/local/man:/usr/local/share/man:/usr/X11R6/man:/opt/c3-4/man
    HOSTNAME=master
    VT_MPI=impi4
    I_MPI_PIN=0
    INTEL_LICENSE_FILE=/opt/intel/licenses
    IPPROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp
    I_MPI_F77=ifort
    SHELL=/bin/bash
    TERM=xterm
    HISTSIZE=200000
    I_MPI_FABRICS=shm:dapl
    SSH_CLIENT=139.11.215.121 5290 22
    LIBRARY_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21
    CVSROOT=:ext:fhpout@laplace.lstm.uni-erlangen.de:/data/linux/proj_tape/LSTM/fhpdev
    MODULE_SHELL=sh
    FPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include
    SSH_TTY=/dev/pts/5
    USER=denayer
    MODULE_OSCAR_USER=denayer
    LD_LIBRARY_PATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4:/opt/intel/ics_2012/impi/4.0.3.008/intel64/lib:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21:/home/denayer/FSI_new/FSI/Software/carat20/libraries/rlog-1.4/lib/:/home/denayer/FSI_new/FSI/Software/carat20/libraries/atlas/lib/:/opt/maui/lib:/opt/tecplot/tec360_2010/lib
    LS_COLORS=no=00:fi=00:di=01;35:ln=01;36:pi=40;33:so=01;33:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:
    ENV=/home/denayer/.bashrc
    CPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/include
    TMOUT=36000
    MSM_PRODUCT=MSM
    NLSPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64/locale/en_US
    PATH=/opt/intel/ics_2012/itac/8.0.3.007/bin:/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/bin/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/bin/intel64:/opt/intel/ics_2012/vtune_amplifier_xe_2011/bin64:/usr/kerberos/bin:/opt/maui/bin:/opt/tecplot/tec360_2010/bin:/usr/local/bin:/bin:/usr/bin:/opt/pbs/bin:/opt/pbs/lib/xpbs/bin:/opt/env-switcher/bin:/opt/ansys_inc/shared_files/licensing/lic_admin:/opt/ansys_inc/v130/icemcfd/linux64_amd/bin:/opt/ansys_inc/v130/Framework/bin/Linux64:/opt/ansys_inc/v130/CFX/bin:/opt/c3-4/:/home/denayer/bin:.:/opt/gid/gid_9:/opt/matlab/r2011a/bin
    MAIL=/var/spool/mail/denayer
    MODULE_VERSION=3.2.5
    VT_ADD_LIBS=-ldwarf -lelf -lvtunwind -lnsl -lm -ldl -lpthread
    I_MPI_TUNER_DATA_DIR=/opt/intel/ics_2012/impi/4.0.3.008/etc64/
    TBBROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb
    PWD=/home/denayer
    _LMFILES_=/opt/modules/oscar-modulefiles/torque-oscar/2.1.10:/opt/env-switcher/share/env-switcher/ansys/ansys-13.0:/opt/env-switcher/share/env-switcher/tecplot/tec360-2010:/opt/modules/oscar-modulefiles/switcher/1.0.13:/opt/modules/oscar-modulefiles/default-manpath/1.0.1:/opt/modules/oscar-modulefiles/maui/3.2.6:/opt/modules/modulefiles/oscar-modules/1.0.5:/opt/modules/Modules/3.2.5/modulefiles/dot:/opt/env-switcher/share/env-switcher/tools/intel-vtune-2011:/opt/env-switcher/share/env-switcher/gid/gid-9.0.6:/opt/env-switcher/share/env-switcher/matlab/matlab-r2011a:/opt/env-switcher/share/env-switcher/compiler/intel-compiler-12.1:/opt/env-switcher/share/env-switcher/mpi/intel-cluster-toolkit-2012.0.032
    CARAT_LIC_PATH=/home/denayer/FSI_new/FSI/Software/carat20/exe
    EDITOR=/usr/bin/emacs
    LANG=en_US.UTF-8
    MODULEPATH=/opt/env-switcher/share/env-switcher:/opt/modules/oscar-modulefiles:/opt/modules/version:/opt/modules/Modules/$MODULE_VERSION/modulefiles:/opt/modules/modulefiles:
    LOADEDMODULES=torque-oscar/2.1.10:ansys/ansys-13.0:tecplot/tec360-2010:switcher/1.0.13:default-manpath/1.0.1:maui/3.2.6:oscar-modules/1.0.5:dot:tools/intel-vtune-2011:gid/gid-9.0.6:matlab/matlab-r2011a:compiler/intel-compiler-12.1:mpi/intel-cluster-toolkit-2012.0.032
    VT_LIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4
    I_MPI_F90=ifort
    MPIROOTDIR=/opt/intel/impi/4.0.1/intel64/lib
    I_MPI_CC=icc
    VT_ROOT=/opt/intel/ics_2012/itac/8.0.3.007
    SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
    HOME=/home/denayer
    SHLVL=2
    I_MPI_HYDRA_BOOTSTRAP_EXEC=ssh
    I_MPI_CXX=icpc
    I_MPI_MPD_RSH=ssh
    MSM_HOME=/usr/local/MegaRAID Storage Manager
    FHPSYSTEM=INTEL64
    VT_SLIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4
    I_MPI_FC=ifort
    LOGNAME=denayer
    CVS_RSH=ssh
    SSH_CONNECTION=139.11.215.121 5290 139.11.215.117 22
    CLASSPATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4
    MODULESHOME=/opt/modules/Modules/3.2.5
    CPRO_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233
    LESSOPEN=|/usr/bin/lesspipe.sh %s
    CVSEDITOR=emacs
    FHPTARGET=parallel
    INCLUDE=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/include
    G_BROKEN_FILENAMES=1
    I_MPI_ROOT=/opt/intel/ics_2012/impi/4.0.3.008
    _=/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/mpiexec.hydra

  User set environment:
  ---------------------
    I_MPI_DEBUG=5
    I_MPI_FABRICS=shm


    Proxy information:
    *********************
      Proxy ID:  1
      -----------------
        Proxy name: n13
        Process count: 1
        Start PID: 0

        Proxy exec list:
        ....................
          Exec: hostname; Process count: 1
      Proxy ID:  2
      -----------------
        Proxy name: n14
        Process count: 1
        Start PID: 1

        Proxy exec list:
        ....................
          Exec: hostname; Process count: 1

==================================================================================================

[mpiexec@master] Timeout set to -1 (-1 means infinite)
[mpiexec@master] Got a control port string of master:47174

Proxy launch args: /opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/pmi_proxy --control-port master:47174 --debug --pmi-connect lazy-cache --pmi-aggregate -s 0 --bootstrap ssh --bootstrap-exec ssh --demux poll --pgid 0 --enable-stdin 1 --proxy-id

[mpiexec@master] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 0:
--version 1.3 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname n13 --global-core-count 2 --global-process-count 2 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_21039_0 --pmi-process-mapping (vector,(0,2,1)) --binding mode=off --bindlib ipl --ckpoint-num -1 --global-inherited-env 70 'I_MPI_PERHOST=allcores' 'MODULE_VERSION_STACK=3.2.5' 'MKLROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl' 'MANPATH=/opt/intel/ics_2012/itac/8.0.3.007/man:/opt/intel/ics_2012/impi/4.0.3.008/man:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/man/en_US:/opt/intel/ics_2012/vtune_amplifier_xe_2011/man:/opt/modules/Modules/default/share/man:/opt/pbs/man:/opt/env-switcher/man:/usr/man:/usr/share/man:/usr/local/man:/usr/local/share/man:/usr/X11R6/man:/opt/c3-4/man' 'HOSTNAME=master' 'VT_MPI=impi4' 'I_MPI_PIN=0' 'INTEL_LICENSE_FILE=/opt/intel/licenses' 'IPPROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp' 'I_MPI_F77=ifort' 'SHELL=/bin/bash' 'TERM=xterm' 'HISTSIZE=200000' 'I_MPI_FABRICS=shm:dapl' 'SSH_CLIENT=139.11.215.121 5290 22' 'LIBRARY_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21' 'CVSROOT=:ext:fhpout@laplace.lstm.uni-erlangen.de:/data/linux/proj_tape/LSTM/fhpdev' 'MODULE_SHELL=sh' 'FPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include' 'SSH_TTY=/dev/pts/5' 'USER=denayer' 'MODULE_OSCAR_USER=denayer' 'LD_LIBRARY_PATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4:/opt/intel/ics_2012/impi/4.0.3.008/intel64/lib:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21:/home/denayer/FSI_new/FSI/Software/carat20/libraries/rlog-1.4/lib/:/home/denayer/FSI_new/FSI/Software/carat20/libraries/atlas/lib/:/opt/maui/lib:/opt/tecplot/tec360_2010/lib' 'LS_COLORS=no=00:fi=00:di=01;35:ln=01;36:pi=40;33:so=01;33:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:' 'ENV=/home/denayer/.bashrc' 'CPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/include' 'TMOUT=36000' 'MSM_PRODUCT=MSM' 'NLSPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64/locale/en_US' 
'PATH=/opt/intel/ics_2012/itac/8.0.3.007/bin:/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/bin/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/bin/intel64:/opt/intel/ics_2012/vtune_amplifier_xe_2011/bin64:/usr/kerberos/bin:/opt/maui/bin:/opt/tecplot/tec360_2010/bin:/usr/local/bin:/bin:/usr/bin:/opt/pbs/bin:/opt/pbs/lib/xpbs/bin:/opt/env-switcher/bin:/opt/ansys_inc/shared_files/licensing/lic_admin:/opt/ansys_inc/v130/icemcfd/linux64_amd/bin:/opt/ansys_inc/v130/Framework/bin/Linux64:/opt/ansys_inc/v130/CFX/bin:/opt/c3-4/:/home/denayer/bin:.:/opt/gid/gid_9:/opt/matlab/r2011a/bin' 'MAIL=/var/spool/mail/denayer' 'MODULE_VERSION=3.2.5' 'VT_ADD_LIBS=-ldwarf -lelf -lvtunwind -lnsl -lm -ldl -lpthread' 'I_MPI_TUNER_DATA_DIR=/opt/intel/ics_2012/impi/4.0.3.008/etc64/' 'TBBROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb' 'PWD=/home/denayer' '_LMFILES_=/opt/modules/oscar-modulefiles/torque-oscar/2.1.10:/opt/env-switcher/share/env-switcher/ansys/ansys-13.0:/opt/env-switcher/share/env-switcher/tecplot/tec360-2010:/opt/modules/oscar-modulefiles/switcher/1.0.13:/opt/modules/oscar-modulefiles/default-manpath/1.0.1:/opt/modules/oscar-modulefiles/maui/3.2.6:/opt/modules/modulefiles/oscar-modules/1.0.5:/opt/modules/Modules/3.2.5/modulefiles/dot:/opt/env-switcher/share/env-switcher/tools/intel-vtune-2011:/opt/env-switcher/share/env-switcher/gid/gid-9.0.6:/opt/env-switcher/share/env-switcher/matlab/matlab-r2011a:/opt/env-switcher/share/env-switcher/compiler/intel-compiler-12.1:/opt/env-switcher/share/env-switcher/mpi/intel-cluster-toolkit-2012.0.032' 'CARAT_LIC_PATH=/home/denayer/FSI_new/FSI/Software/carat20/exe' 'EDITOR=/usr/bin/emacs' 'LANG=en_US.UTF-8' 'MODULEPATH=/opt/env-switcher/share/env-switcher:/opt/modules/oscar-modulefiles:/opt/modules/version:/opt/modules/Modules/$MODULE_VERSION/modulefiles:/opt/modules/modulefiles:' 'LOADEDMODULES=torque-oscar/2.1.10:ansys/ansys-13.0:tecplot/tec360-2010:switcher/1.0.13:default-manpath/1.0.1:maui/3.2.6:oscar-modules/1.0.5:dot:tools/intel-vtune-2011:gid/gid-9.0.6:matlab/matlab-r2011a:compiler/intel-compiler-12.1:mpi/intel-cluster-toolkit-2012.0.032' 'VT_LIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4' 'I_MPI_F90=ifort' 'MPIROOTDIR=/opt/intel/impi/4.0.1/intel64/lib' 'I_MPI_CC=icc' 'VT_ROOT=/opt/intel/ics_2012/itac/8.0.3.007' 'SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass' 'HOME=/home/denayer' 'SHLVL=2' 'I_MPI_HYDRA_BOOTSTRAP_EXEC=ssh' 'I_MPI_CXX=icpc' 'I_MPI_MPD_RSH=ssh' 'MSM_HOME=/usr/local/MegaRAID Storage Manager' 'FHPSYSTEM=INTEL64' 'VT_SLIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4' 'I_MPI_FC=ifort' 'LOGNAME=denayer' 'CVS_RSH=ssh' 'SSH_CONNECTION=139.11.215.121 5290 139.11.215.117 22' 'CLASSPATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4' 'MODULESHOME=/opt/modules/Modules/3.2.5' 'CPRO_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'CVSEDITOR=emacs' 'FHPTARGET=parallel' 'INCLUDE=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/include' 'G_BROKEN_FILENAMES=1' 'I_MPI_ROOT=/opt/intel/ics_2012/impi/4.0.3.008' '_=/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/mpiexec.hydra' --global-user-env 2 'I_MPI_DEBUG=5' 'I_MPI_FABRICS=shm' --global-system-env 0 --start-pid 0 --proxy-core-count 1 --exec --exec-appnum 0 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/denayer --exec-args 1 hostname

[mpiexec@master] PMI FD: (null); PMI PORT: (null); PMI ID/RANK: -1
Arguments being passed to proxy 1:
--version 1.3 --interface-env-name MPICH_INTERFACE_HOSTNAME --hostname n14 --global-core-count 2 --global-process-count 2 --auto-cleanup 1 --pmi-rank -1 --pmi-kvsname kvs_21039_0 --pmi-process-mapping (vector,(0,2,1)) --binding mode=off --bindlib ipl --ckpoint-num -1 --global-inherited-env 70 'I_MPI_PERHOST=allcores' 'MODULE_VERSION_STACK=3.2.5' 'MKLROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl' 'MANPATH=/opt/intel/ics_2012/itac/8.0.3.007/man:/opt/intel/ics_2012/impi/4.0.3.008/man:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/man/en_US:/opt/intel/ics_2012/vtune_amplifier_xe_2011/man:/opt/modules/Modules/default/share/man:/opt/pbs/man:/opt/env-switcher/man:/usr/man:/usr/share/man:/usr/local/man:/usr/local/share/man:/usr/X11R6/man:/opt/c3-4/man' 'HOSTNAME=master' 'VT_MPI=impi4' 'I_MPI_PIN=0' 'INTEL_LICENSE_FILE=/opt/intel/licenses' 'IPPROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp' 'I_MPI_F77=ifort' 'SHELL=/bin/bash' 'TERM=xterm' 'HISTSIZE=200000' 'I_MPI_FABRICS=shm:dapl' 'SSH_CLIENT=139.11.215.121 5290 22' 'LIBRARY_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21' 'CVSROOT=:ext:fhpout@laplace.lstm.uni-erlangen.de:/data/linux/proj_tape/LSTM/fhpdev' 'MODULE_SHELL=sh' 'FPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include' 'SSH_TTY=/dev/pts/5' 'USER=denayer' 'MODULE_OSCAR_USER=denayer' 'LD_LIBRARY_PATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4:/opt/intel/ics_2012/impi/4.0.3.008/intel64/lib:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/../compiler/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/lib/intel64/cc4.1.0_libc2.4_kernel2.6.16.21:/home/denayer/FSI_new/FSI/Software/carat20/libraries/rlog-1.4/lib/:/home/denayer/FSI_new/FSI/Software/carat20/libraries/atlas/lib/:/opt/maui/lib:/opt/tecplot/tec360_2010/lib' 'LS_COLORS=no=00:fi=00:di=01;35:ln=01;36:pi=40;33:so=01;33:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=01;32:*.cmd=01;32:*.exe=01;32:*.com=01;32:*.btm=01;32:*.bat=01;32:*.sh=01;32:*.csh=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tz=01;31:*.rpm=01;31:*.cpio=01;31:*.jpg=01;35:*.gif=01;35:*.bmp=01;35:*.xbm=01;35:*.xpm=01;35:*.png=01;35:*.tif=01;35:' 'ENV=/home/denayer/.bashrc' 'CPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb/include' 'TMOUT=36000' 'MSM_PRODUCT=MSM' 'NLSPATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/debugger/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/compiler/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/lib/intel64/locale/en_US:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/lib/intel64/locale/en_US' 
'PATH=/opt/intel/ics_2012/itac/8.0.3.007/bin:/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/bin/intel64:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mpirt/bin/intel64:/opt/intel/ics_2012/vtune_amplifier_xe_2011/bin64:/usr/kerberos/bin:/opt/maui/bin:/opt/tecplot/tec360_2010/bin:/usr/local/bin:/bin:/usr/bin:/opt/pbs/bin:/opt/pbs/lib/xpbs/bin:/opt/env-switcher/bin:/opt/ansys_inc/shared_files/licensing/lic_admin:/opt/ansys_inc/v130/icemcfd/linux64_amd/bin:/opt/ansys_inc/v130/Framework/bin/Linux64:/opt/ansys_inc/v130/CFX/bin:/opt/c3-4/:/home/denayer/bin:.:/opt/gid/gid_9:/opt/matlab/r2011a/bin' 'MAIL=/var/spool/mail/denayer' 'MODULE_VERSION=3.2.5' 'VT_ADD_LIBS=-ldwarf -lelf -lvtunwind -lnsl -lm -ldl -lpthread' 'I_MPI_TUNER_DATA_DIR=/opt/intel/ics_2012/impi/4.0.3.008/etc64/' 'TBBROOT=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/tbb' 'PWD=/home/denayer' '_LMFILES_=/opt/modules/oscar-modulefiles/torque-oscar/2.1.10:/opt/env-switcher/share/env-switcher/ansys/ansys-13.0:/opt/env-switcher/share/env-switcher/tecplot/tec360-2010:/opt/modules/oscar-modulefiles/switcher/1.0.13:/opt/modules/oscar-modulefiles/default-manpath/1.0.1:/opt/modules/oscar-modulefiles/maui/3.2.6:/opt/modules/modulefiles/oscar-modules/1.0.5:/opt/modules/Modules/3.2.5/modulefiles/dot:/opt/env-switcher/share/env-switcher/tools/intel-vtune-2011:/opt/env-switcher/share/env-switcher/gid/gid-9.0.6:/opt/env-switcher/share/env-switcher/matlab/matlab-r2011a:/opt/env-switcher/share/env-switcher/compiler/intel-compiler-12.1:/opt/env-switcher/share/env-switcher/mpi/intel-cluster-toolkit-2012.0.032' 'CARAT_LIC_PATH=/home/denayer/FSI_new/FSI/Software/carat20/exe' 'EDITOR=/usr/bin/emacs' 'LANG=en_US.UTF-8' 'MODULEPATH=/opt/env-switcher/share/env-switcher:/opt/modules/oscar-modulefiles:/opt/modules/version:/opt/modules/Modules/$MODULE_VERSION/modulefiles:/opt/modules/modulefiles:' 'LOADEDMODULES=torque-oscar/2.1.10:ansys/ansys-13.0:tecplot/tec360-2010:switcher/1.0.13:default-manpath/1.0.1:maui/3.2.6:oscar-modules/1.0.5:dot:tools/intel-vtune-2011:gid/gid-9.0.6:matlab/matlab-r2011a:compiler/intel-compiler-12.1:mpi/intel-cluster-toolkit-2012.0.032' 'VT_LIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4' 'I_MPI_F90=ifort' 'MPIROOTDIR=/opt/intel/impi/4.0.1/intel64/lib' 'I_MPI_CC=icc' 'VT_ROOT=/opt/intel/ics_2012/itac/8.0.3.007' 'SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass' 'HOME=/home/denayer' 'SHLVL=2' 'I_MPI_HYDRA_BOOTSTRAP_EXEC=ssh' 'I_MPI_CXX=icpc' 'I_MPI_MPD_RSH=ssh' 'MSM_HOME=/usr/local/MegaRAID Storage Manager' 'FHPSYSTEM=INTEL64' 'VT_SLIB_DIR=/opt/intel/ics_2012/itac/8.0.3.007/itac/slib_impi4' 'I_MPI_FC=ifort' 'LOGNAME=denayer' 'CVS_RSH=ssh' 'SSH_CONNECTION=139.11.215.121 5290 139.11.215.117 22' 'CLASSPATH=/opt/intel/ics_2012/itac/8.0.3.007/itac/lib_impi4' 'MODULESHOME=/opt/modules/Modules/3.2.5' 'CPRO_PATH=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233' 'LESSOPEN=|/usr/bin/lesspipe.sh %s' 'CVSEDITOR=emacs' 'FHPTARGET=parallel' 'INCLUDE=/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/mkl/include:/opt/intel/ics_2012/composer_xe_2011_sp1.6.233/ipp/include' 'G_BROKEN_FILENAMES=1' 'I_MPI_ROOT=/opt/intel/ics_2012/impi/4.0.3.008' '_=/opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/mpiexec.hydra' --global-user-env 2 'I_MPI_DEBUG=5' 'I_MPI_FABRICS=shm' --global-system-env 0 --start-pid 1 --proxy-core-count 1 --exec --exec-appnum 1 --exec-proc-count 1 --exec-local-env 0 --exec-wdir /home/denayer --exec-args 1 hostname

[mpiexec@master] Launch arguments: ssh -x -q n13 /opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/pmi_proxy --control-port master:47174 --debug --pmi-connect lazy-cache --pmi-aggregate -s 0 --bootstrap ssh --bootstrap-exec ssh --demux poll --pgid 0 --enable-stdin 1 --proxy-id 0
[mpiexec@master] Launch arguments: ssh -x -q n14 /opt/intel/ics_2012/impi/4.0.3.008/intel64/bin/pmi_proxy --control-port master:47174 --debug --pmi-connect lazy-cache --pmi-aggregate -s 0 --bootstrap ssh --bootstrap-exec ssh --demux poll --pgid 0 --enable-stdin 1 --proxy-id 1
[mpiexec@master] STDIN will be redirected to 1 fd(s): 7
[proxy:0:0@n13] Start PMI_proxy 0
[proxy:0:0@n13] STDIN will be redirected to 1 fd(s): 7
[proxy:0:1@n14] Start PMI_proxy 1
[proxy:0:0@n13] got crush from 4, 0
n13
[proxy:0:1@n14] got crush from 4, 0
n14
[/bash] &lt;BR /&gt;I did the tests with -genv I_MPI_FABRICS shm:ofa, and it works too.&lt;BR /&gt;&lt;BR /&gt;Do you see any useful information here to solve our original problem?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot&lt;BR /&gt;</description>
      <pubDate>Wed, 23 May 2012 13:55:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826416#M1231</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-23T13:55:42Z</dc:date>
    </item>
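    <!-- Editor's note: the verbose trace above comes from Hydra (mpiexec.hydra), which starts a
         pmi_proxy on each node over ssh instead of relying on an MPD ring; that is why the same
         two-node test that failed under 4.0 Update 1 now runs. Since shm:ofa passes here too,
         the original OFA abort looks specific to the 4.0 Update 1 stack. A hedged regression
         check against the original failure (the application name is a placeholder):

           mpirun -genv I_MPI_DEBUG 5 -genv I_MPI_FABRICS shm:ofa -n 1 -host n13 ./your_app : -n 1 -host n14 ./your_app -->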
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826417#M1232</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;Do you have a systemwide mpd.hosts file? Make certain it contains the old nodes and the new nodes.&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Wed, 23 May 2012 13:59:11 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826417#M1232</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-23T13:59:11Z</dc:date>
    </item>
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826418#M1233</link>
      <description>No, there is no mpd.hosts file; find and locate return no entries.&lt;BR /&gt;&lt;BR /&gt;Where does this file normally live?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;</description>
      <pubDate>Wed, 23 May 2012 14:02:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826418#M1233</guid>
      <dc:creator>Guillaume_De_Nayer</dc:creator>
      <dc:date>2012-05-23T14:02:37Z</dc:date>
    </item>
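    <!-- Editor's note: mpd.hosts is not installed with the library; the MPD manager looks for
         it in the directory where mpirun/mpdboot is invoked (commonly $HOME), one hostname per
         line. A minimal sketch using this cluster's node names:

           printf 'master\nn13\nn14\n' > ~/mpd.hosts -->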
    <item>
      <title>intel mpi failed with infiniband on new nodes of our cluster (Got FATAL event 3)</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826419#M1234</link>
      <description>Hi Guillaume,&lt;BR /&gt;&lt;BR /&gt;Generally there wouldn't be one; I just wanted to make certain. Back to the original error: did you get it with all versions of the Intel MPI Library?&lt;BR /&gt;&lt;BR /&gt;Sincerely,&lt;BR /&gt;James Tullos&lt;BR /&gt;Technical Consulting Engineer&lt;BR /&gt;Intel Cluster Tools</description>
      <pubDate>Wed, 23 May 2012 14:15:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/intel-mpi-failed-with-infiniband-on-new-nodes-of-our-cluster-Got/m-p/826419#M1234</guid>
      <dc:creator>James_T_Intel</dc:creator>
      <dc:date>2012-05-23T14:15:20Z</dc:date>
    </item>
  </channel>
</rss>

