<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Hello Rashawn, in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052573#M4423</link>
    <description>&lt;P&gt;Hello Rashawn,&lt;/P&gt;

&lt;P&gt;Could you please try to reproduce the failure with mpirun's '-v' option and provide the output?&lt;/P&gt;</description>
    <pubDate>Fri, 14 Aug 2015 08:07:41 GMT</pubDate>
    <dc:creator>Artem_R_Intel1</dc:creator>
    <dc:date>2015-08-14T08:07:41Z</dc:date>
    <item>
      <title>Seg Fault when using US NFS install of MPI 5.1.0.038 from site in Russia</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052572#M4422</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;

&lt;P&gt;One of my team members from Russia is accessing an NFS installation of MPI 5.1.0.038 located at a US site. When this team member runs the simple ring application test.c, she encounters a segmentation fault when running with four processes and one process per node. This does not happen for the team members based at US sites, and the seg fault also does not occur when the application is executed on only a single node, the login node.&lt;/P&gt;
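
&lt;P&gt;The actual test.c source is not included here; for context, a minimal sketch of the kind of hello/ring program that would produce the output shown below might look like this:&lt;/P&gt;

&lt;PRE class="brush:cpp;"&gt;/* Hypothetical sketch only; the actual test.c was not posted.
   Each rank prints a greeting, then passes a token around the ring. */
#include &amp;lt;mpi.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;

int main(int argc, char **argv)
{
    int rank, size, len, token;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&amp;amp;argc, &amp;amp;argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;size);
    MPI_Get_processor_name(host, &amp;amp;len);
    printf("Hello world: rank %d of %d running on %s\n", rank, size, host);

    /* Rank 0 starts the token; every rank forwards it to (rank + 1) % size. */
    if (rank == 0) {
        token = 42;
        MPI_Send(&amp;amp;token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        MPI_Recv(&amp;amp;token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&amp;amp;token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&amp;amp;token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}&lt;/PRE&gt;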

&lt;P&gt;The test.c application was compiled by each team member in this way (in a user-specific scratch space in the US NFS allocation):&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	mpiicc -g -o testc-intelMPI test.c&lt;/PRE&gt;

&lt;P&gt;To run the executable, we use:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	mpirun -n 4 -perhost 1 -env I_MPI_FABRICS tcp -hostfile /nfs/&amp;lt;pathTo&amp;gt;/machines.LINUX ./testc-intelMPI&lt;/PRE&gt;
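
&lt;P&gt;For reference, the machines.LINUX hostfile referenced above is assumed to be a plain list of node hostnames, one per line (the names below are the same placeholders used in the output further down):&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	&amp;lt;hostname1&amp;gt;
	&amp;lt;hostname2&amp;gt;
	&amp;lt;hostname3&amp;gt;
	&amp;lt;hostname4&amp;gt;&lt;/PRE&gt;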

&lt;P&gt;For the U.S.-based team members, the output is as follows:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	Hello world: rank 0 of 4 running on &amp;lt;hostname1&amp;gt;
	Hello world: rank 1 of 4 running on &amp;lt;hostname2&amp;gt;
	Hello world: rank 2 of 4 running on &amp;lt;hostname3&amp;gt;
	Hello world: rank 3 of 4 running on &amp;lt;hostname4&amp;gt;&lt;/PRE&gt;

&lt;P&gt;When my Russian team member executes this in the same manner, the segmentation fault message states:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	/nfs/&amp;lt;pathTo&amp;gt;/intel-5.1.0.038/compilers_and_libraries_2016.0.079/linux/mpi/intel64/bin/mpirun: line 241:  7902 Segmentation fault      (core dumped) mpiexec.hydra "$@" 0&amp;lt;&amp;amp;0&lt;/PRE&gt;

&lt;P&gt;When using gdb, we learn the following:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	Program received signal SIGSEGV, Segmentation fault.
	mfile_fn (arg=0x0, argv=0x49cdc8) at ../../ui/mpich/utils.c:448
&lt;/PRE&gt;
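
&lt;P&gt;For context, the backtrace above was presumably obtained by loading the dumped core into gdb against the mpiexec.hydra binary from the same installation, roughly as follows (the core file name is illustrative and depends on the system's core pattern):&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	# core file name below is illustrative
	gdb /nfs/&amp;lt;pathTo&amp;gt;/intel-5.1.0.038/compilers_and_libraries_2016.0.079/linux/mpi/intel64/bin/mpiexec.hydra core
	(gdb) bt&lt;/PRE&gt;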

&lt;P&gt;We do not have the source files with this installation and are unable to inspect utils.c.&lt;/P&gt;

&lt;P&gt;Conversely, to run on just the login node, we use:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	mpirun -n 4 -perhost 1 ./testc-intelMPI&lt;/PRE&gt;

&lt;P&gt;In this case, no segmentation fault occurs:&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	Hello world: rank 0 of 4 running on &amp;lt;loginHostname&amp;gt;
	Hello world: rank 1 of 4 running on &amp;lt;loginHostname&amp;gt;
	Hello world: rank 2 of 4 running on &amp;lt;loginHostname&amp;gt;
	Hello world: rank 3 of 4 running on &amp;lt;loginHostname&amp;gt;
&lt;/PRE&gt;

&lt;P&gt;Please let me know of any suggestions for how I can change the environment so that my Russian team member can run this code correctly.&lt;/P&gt;

&lt;P&gt;Thank you,&lt;/P&gt;

&lt;P&gt;Rashawn Knapp&lt;/P&gt;</description>
      <pubDate>Thu, 13 Aug 2015 18:01:55 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052572#M4422</guid>
      <dc:creator>Rashawn_K_Intel1</dc:creator>
      <dc:date>2015-08-13T18:01:55Z</dc:date>
    </item>
    <item>
      <title>Hello Rashawn,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052573#M4423</link>
      <description>&lt;P&gt;Hello Rashawn,&lt;/P&gt;

&lt;P&gt;Could you please try to reproduce the failure with mpirun's '-v' option and provide the output?&lt;/P&gt;</description>
      <pubDate>Fri, 14 Aug 2015 08:07:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052573#M4423</guid>
      <dc:creator>Artem_R_Intel1</dc:creator>
      <dc:date>2015-08-14T08:07:41Z</dc:date>
    </item>
    <item>
      <title>Hello Artem and others,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052574#M4424</link>
      <description>&lt;P&gt;Hello Artem and others,&lt;/P&gt;

&lt;P&gt;Thank you for your suggestion. We resolved the issue earlier today. The original execution by the team member had a typo; when repeated today with the '-v' option and the correct mpirun parameters, it ran as expected.&lt;/P&gt;
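
&lt;P&gt;For completeness, the corrected invocation with mpirun's '-v' verbose option would look roughly like this (placeholders as in the original post):&lt;/P&gt;

&lt;PRE class="brush:bash;"&gt;	mpirun -v -n 4 -perhost 1 -env I_MPI_FABRICS tcp -hostfile /nfs/&amp;lt;pathTo&amp;gt;/machines.LINUX ./testc-intelMPI&lt;/PRE&gt;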

&lt;P&gt;Regards,&lt;/P&gt;

&lt;P&gt;Rashawn&lt;/P&gt;</description>
      <pubDate>Fri, 14 Aug 2015 16:10:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Seg-Fault-when-using-US-NFS-install-of-MPI-5-1-0-038-from-site/m-p/1052574#M4424</guid>
      <dc:creator>Rashawn_K_Intel1</dc:creator>
      <dc:date>2015-08-14T16:10:37Z</dc:date>
    </item>
  </channel>
</rss>

