<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>mpirun init fatal error for hello world example in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186051#M6845</link>
    <description>Fatal error in PMPI_Init ("MPIR_NODEMAP_build_nodemap: PMI_KVS_Get returned 4") when running the Intel® MPI Library Hello World test with mpirun -np 2 or more; a single process works fine.</description>
    <pubDate>Tue, 09 Jun 2020 17:20:21 GMT</pubDate>
    <dc:creator>Lion__Konstantin</dc:creator>
    <dc:date>2020-06-09T17:20:21Z</dc:date>
    <item>
      <title>mpirun init fatal error for hello world example</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186051#M6845</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I'm running into a fatal error when trying to run the simple Hello World test with mpirun -np 2 or more processes. It works fine with a single process. See the output below. Do you have any idea what the problem is?&lt;/P&gt;&lt;BLOCKQUOTE&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpiifort -v
mpiifort for the Intel(R) MPI Library 2019 for Linux*
Copyright 2003-2018, Intel Corporation.
ifort version 19.0.0.117

lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ cat test.f90
!
! Copyright 2003-2018 Intel Corporation.
! 
! This software and the related documents are Intel copyrighted materials, and
! your use of them is governed by the express license under which they were
! provided to you (License). Unless the License provides otherwise, you may
! not use, modify, copy, publish, distribute, disclose or transmit this
! software or the related documents without Intel's prior written permission.
! 
! This software and the related documents are provided as is, with no express
! or implied warranties, other than those that are expressly stated in the
! License.
!
        program main
        use mpi
        implicit none

        integer i, size, rank, namelen, ierr
        character (len=MPI_MAX_PROCESSOR_NAME) :: name
        integer stat(MPI_STATUS_SIZE)

        call MPI_INIT (ierr)

        call MPI_COMM_SIZE (MPI_COMM_WORLD, size, ierr)
        call MPI_COMM_RANK (MPI_COMM_WORLD, rank, ierr)
        call MPI_GET_PROCESSOR_NAME (name, namelen, ierr)

        if (rank.eq.0) then

            print *, 'Hello world: rank ', rank, ' of ', size, ' running on ', name

            do i = 1, size - 1
                call MPI_RECV (rank, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
                call MPI_RECV (size, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
                call MPI_RECV (namelen, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
                name = ''
                call MPI_RECV (name, namelen, MPI_CHARACTER, i, 1, MPI_COMM_WORLD, stat, ierr)
                print *, 'Hello world: rank ', rank, ' of ', size, ' running on ', name
            enddo

        else

            call MPI_SEND (rank, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
            call MPI_SEND (size, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
            call MPI_SEND (namelen, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
            call MPI_SEND (name, namelen, MPI_CHARACTER, 0, 1, MPI_COMM_WORLD, ierr)

        endif

        call MPI_FINALIZE (ierr)

        end
lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpiifort test.f90
lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpirun -np 1 ./a.out
 Hello world: rank            0  of            1  running on 
 sol48                                                                          
                                                 
lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpirun -np 2 ./a.out
Abort(1093903) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........: 
MPID_Init(731).................: 
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4
In: PMI_Abort(1093903, Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........: 
MPID_Init(731).................: 
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4)
Abort(1093903) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........: 
MPID_Init(731).................: 
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4
In: PMI_Abort(1093903, Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........: 
MPID_Init(731).................: 
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4)&lt;/PRE&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;</description>
      <pubDate>Tue, 09 Jun 2020 17:20:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186051#M6845</guid>
      <dc:creator>Lion__Konstantin</dc:creator>
      <dc:date>2020-06-09T17:20:21Z</dc:date>
    </item>
    <item>
      <title>Hi Konstantin,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186052#M6846</link>
      <description>&lt;P&gt;Hi&amp;nbsp;Konstantin,&lt;/P&gt;&lt;P&gt;Thanks for reaching out to us!&lt;/P&gt;&lt;P&gt;We tried to reproduce the error you are facing, but were unable to.&lt;/P&gt;&lt;P&gt;We got the following output when we ran your code:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;u30009@s001-n179:~/goutham/forums/856899_mpirun_initfatal/test$ mpirun -np 1 ./a.out
 Hello world: rank            0  of            1  running on
 s001-n179

u30009@s001-n179:~/goutham/forums/856899_mpirun_initfatal/test$ mpirun -np 2 ./a.out
 Hello world: rank            0  of            2  running on
 s001-n179

 Hello world: rank            1  of            2  running on
 s001-n179

&lt;/PRE&gt;

&lt;P&gt;Please refer to the thread below, which discusses an issue similar to yours:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/799716"&gt;https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/799716&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Please let us know if the above link helps you.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards&lt;/P&gt;
&lt;P&gt;Goutham&lt;/P&gt;</description>
      <pubDate>Wed, 10 Jun 2020 12:16:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186052#M6846</guid>
      <dc:creator>GouthamK_Intel</dc:creator>
      <dc:date>2020-06-10T12:16:29Z</dc:date>
    </item>
    <item>
      <title>PMI_KVS_Get failure 4:</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186053#M6847</link>
      <description>&lt;P&gt;PMI_KVS_Get failure 4:&lt;/P&gt;&lt;P&gt;See:&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/hpcng/singularity/issues/5118" target="_blank"&gt;https://github.com/hpcng/singularity/issues/5118&lt;/A&gt;&lt;/P&gt;&lt;P&gt;In particular, locate the comment by mmiesch from Mar 16.&lt;/P&gt;&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Fri, 12 Jun 2020 11:59:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186053#M6847</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2020-06-12T11:59:40Z</dc:date>
    </item>
    <item>
      <title>Hi Konstantin,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186054#M6848</link>
      <description>&lt;P&gt;Hi Konstantin,&lt;/P&gt;&lt;P&gt;Could you please let us know the status of the issue you are facing?&lt;/P&gt;&lt;P&gt;If the issue still persists, please provide the output of the checks below.&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Please verify that you are able to access the nodes listed in the nodefile. The nodefile may vary depending on your environment (job scheduler).&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Example:&amp;nbsp;&lt;/P&gt;&lt;P&gt;In our environment, mpirun uses $PBS_NODEFILE as the machine file.&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can find the appropriate nodefile for your environment at the link below:&lt;/P&gt;&lt;P&gt;&lt;A href="https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-guide-linux/top/running-applications/job-schedulers-support.html"&gt;https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-guide-linux/top/running-applications/job-schedulers-support.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;2. Run&amp;nbsp;the Intel® Cluster Checker:&lt;/P&gt;&lt;P&gt;•&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;source clckvars.sh&amp;nbsp;&lt;BR /&gt;•&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;clck -f &amp;lt;nodefile&amp;gt;&lt;/P&gt;&lt;P&gt;Please check the link below for more details.&lt;BR /&gt;&lt;A href="https://software.intel.com/content/www/us/en/develop/documentation/cluster-checker-user-guide/top/getting-started.html"&gt;https://software.intel.com/content/www/us/en/develop/documentation/cluster-checker-user-guide/top/getting-started.html&lt;/A&gt;&lt;/P&gt;
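&lt;P&gt;Putting both checks together, a minimal session might look like this (a sketch assuming a PBS scheduler; substitute the nodefile that your own scheduler provides):&lt;/P&gt;
&lt;PRE class="brush:bash; class-name:dark;"&gt;# 1. Confirm that every node in the nodefile is reachable without a password
cat $PBS_NODEFILE                        # nodes assigned to this job
for host in $(sort -u $PBS_NODEFILE); do
    ssh "$host" hostname                 # should print each node name without prompting
done

# 2. Run the Intel(R) Cluster Checker on the same nodes
source clckvars.sh
clck -f $PBS_NODEFILE&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards&lt;BR /&gt;Goutham&lt;/P&gt;</description>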
      <pubDate>Tue, 16 Jun 2020 11:04:01 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186054#M6848</guid>
      <dc:creator>GouthamK_Intel</dc:creator>
      <dc:date>2020-06-16T11:04:01Z</dc:date>
    </item>
    <item>
      <title>Hi Konstantin,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186055#M6849</link>
      <description>&lt;P&gt;Hi&amp;nbsp;Konstantin,&lt;/P&gt;&lt;P&gt;Could you please let us know the status of the issue you are facing?&lt;/P&gt;&lt;P&gt;If your issue still persists, please let us know so that we can help you resolve it.&amp;nbsp;&lt;/P&gt;&lt;P&gt;If your issue is resolved, please let us know whether we can close this thread.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards&lt;/P&gt;&lt;P&gt;Goutham&lt;/P&gt;</description>
      <pubDate>Mon, 22 Jun 2020 07:52:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1186055#M6849</guid>
      <dc:creator>GouthamK_Intel</dc:creator>
      <dc:date>2020-06-22T07:52:51Z</dc:date>
    </item>
    <item>
      <title>Re:mpirun init fatal error for hello world example</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1188405#M6888</link>
      <description>&lt;P&gt;Hi&amp;nbsp;Konstantin,&lt;/P&gt;&lt;P&gt;Could you please let us know the status of the issue you are facing?&lt;/P&gt;&lt;P&gt;If your issue still persists, please let us know so that we can help you resolve it.&amp;nbsp;&lt;/P&gt;&lt;P&gt;If your issue is resolved, please let us know whether we can close this thread.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Goutham&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Jun 2020 13:22:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1188405#M6888</guid>
      <dc:creator>GouthamK_Intel</dc:creator>
      <dc:date>2020-06-30T13:22:59Z</dc:date>
    </item>
    <item>
      <title>Re:mpirun init fatal error for hello world example</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1190997#M6908</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;We have not heard back from you, so we are closing this inquiry now. If you need further assistance, please post a new question.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks &amp;amp; Regards&lt;/P&gt;&lt;P&gt;Goutham&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Jul 2020 12:17:43 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-init-fatal-error-for-hello-world-example/m-p/1190997#M6908</guid>
      <dc:creator>GouthamK_Intel</dc:creator>
      <dc:date>2020-07-09T12:17:43Z</dc:date>
    </item>
  </channel>
</rss>