<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Intel MPI issues: MPI_COMM_SIZE returns zero processes in Intel® Moderncode for Parallel Architectures</title>
    <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Intel-MPI-issues-MPI-COMM-SIZE-returns-zero-processes/m-p/1125075#M7606</link>
    <description>Intel MPI issues: MPI_COMM_SIZE returns zero processes in Intel® Moderncode for Parallel Architectures</description>
    <pubDate>Fri, 07 Sep 2018 15:16:35 GMT</pubDate>
    <dc:creator>Balasubramanian__Aru</dc:creator>
    <dc:date>2018-09-07T15:16:35Z</dc:date>
    <item>
      <title>Intel MPI issues: MPI_COMM_SIZE returns zero processes</title>
      <link>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Intel-MPI-issues-MPI-COMM-SIZE-returns-zero-processes/m-p/1125075#M7606</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;

&lt;P&gt;I tried to build a simple MPI application using the Intel MPI library and ran into issues at the execution stage. The subroutine MPI_COMM_SIZE returns zero for the number of processes, while the rank returned by MPI_COMM_RANK looks fine. Furthermore, the message passing through MPI_SEND and MPI_RECV does not seem to happen, although no error is produced. I am currently using the Intel MPI Library 2018 Update 3. On the other hand, the program gives correct results when I link against the MS-MPI library components. Since the build itself succeeds, the MPI calls themselves do not appear to be the problem; my guess is that the difference in the launch tool (mpiexec) and process manager (hydra) is causing the issue. Here is the sample code I was trying to execute.&lt;/P&gt;

&lt;PRE&gt;      program array
      include 'mpif.h'

      integer  nprocs, MASTER
      parameter (nprocs = 5)
      parameter (MASTER = 0)

      integer  numtasks, taskid, ierr, dest, offset, i, tag2, source
      real*8   data(nprocs-1)
      integer  status(MPI_STATUS_SIZE)

! ***** Initializations *****
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, taskid, ierr)

      tag2 = 1
      write(*,*)'numtasks',numtasks

!***** Master task only ******
      if (taskid .eq. MASTER) then

!       Initialize array
        do i=1, nprocs-1
          data(i) = i * 1.0
        end do

!       Send each task an element of the array
        do dest=1, numtasks-1
          call MPI_SEND(data(dest), 1, MPI_DOUBLE_PRECISION, dest, &amp;amp;
                        tag2, MPI_COMM_WORLD, ierr)
        end do
!
!       Wait to receive results from each task
        do i=1, numtasks-1
          source = i
          call MPI_RECV(data(i), 1, MPI_DOUBLE_PRECISION, &amp;amp;
                        source, tag2, MPI_COMM_WORLD, status, ierr)
        end do
        write(*,*)'Data received at Master:',data

      end if

!***** Non-master tasks only *****
      if (taskid .gt. MASTER) then

!       Receive array elements from the master task
        call MPI_RECV(data(taskid), 1, MPI_DOUBLE_PRECISION, MASTER, &amp;amp;
                      tag2, MPI_COMM_WORLD, status, ierr)

        write(*,*)'Data received at worker:',data(taskid)

        data(taskid) = data(taskid)*10

!       Send my results back to the master
        call MPI_SEND(data(taskid), 1, MPI_DOUBLE_PRECISION, MASTER, &amp;amp;
                      tag2, MPI_COMM_WORLD, ierr)

      endif

      call MPI_FINALIZE(ierr)

      end&lt;/PRE&gt;

&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;Run it as: &amp;gt;mpiexec -n 5 MPI.exe&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;
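
&lt;P&gt;One thing the listing above does not check is the ierr status returned by the MPI calls. As a sketch only (written in the same style as the code above, not something I have verified against this setup), a guard like the following would at least show whether MPI_COMM_SIZE reports an error instead of silently returning zero:&lt;/P&gt;

&lt;PRE&gt;      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      if (ierr .ne. MPI_SUCCESS) then
         write(*,*) 'MPI_COMM_SIZE returned error code', ierr
      end if&lt;/PRE&gt;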

&lt;P&gt;&lt;STRONG&gt;Results when linked against the MS-MPI library:&lt;/STRONG&gt;&lt;/P&gt;

&lt;PRE&gt; numtasks           5
 numtasks           5
 numtasks           5
 numtasks           5
 numtasks           5
 Data received at worker:   1.00000000000000
 Data received at worker:   2.00000000000000
 Data received at worker:   3.00000000000000
 Data received at worker:   4.00000000000000
 Data received at Master:   10.0000000000000        20.0000000000000
   30.0000000000000        40.0000000000000&lt;/PRE&gt;

&lt;P&gt;&lt;SPAN style="font-weight: 700; font-size: 13.008px;"&gt;Results through link of Intel MPI library:&lt;/SPAN&gt;&lt;/P&gt;

&lt;PRE&gt; numtasks           0
 numtasks           0
 numtasks           0
 numtasks           0
 numtasks           0
 Data received at Master:   1.00000000000000        2.00000000000000
   3.00000000000000        4.00000000000000&lt;/PRE&gt;

&lt;P&gt;I also tried the other steps, such as registering mpiexec and installing the hydra service, as pointed out in the reference manual, and even reverted to the previous release of the Intel MPI library, but to my dismay found the same issue. Any pointers in this regard would be greatly appreciated.&lt;/P&gt;
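
&lt;P&gt;To narrow down whether the executable is really picking up the Intel MPI runtime it was built against (rather than another MPI installation found first on the PATH), a minimal check along the following lines should print which library actually gets initialized. This is only a sketch I have not validated here; MPI_GET_LIBRARY_VERSION and MPI_MAX_LIBRARY_VERSION_STRING are the standard MPI-3 names, so it assumes the mpif.h being included provides them:&lt;/P&gt;

&lt;PRE&gt;      program mpicheck
      include 'mpif.h'
      integer ierr, numtasks, taskid, verlen
      character (len=MPI_MAX_LIBRARY_VERSION_STRING) :: libver

      call MPI_INIT(ierr)

! Report which MPI implementation this binary actually initialized
      call MPI_GET_LIBRARY_VERSION(libver, verlen, ierr)
      write(*,*) 'MPI library: ', libver(1:verlen)

      call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
      if (ierr .ne. MPI_SUCCESS) write(*,*) 'MPI_COMM_SIZE error:', ierr
      call MPI_COMM_RANK(MPI_COMM_WORLD, taskid, ierr)

      write(*,*) 'rank', taskid, 'of', numtasks

      call MPI_FINALIZE(ierr)
      end&lt;/PRE&gt;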

&lt;P&gt;Kind regards,&lt;/P&gt;

&lt;P&gt;Arun&lt;/P&gt;</description>
      <pubDate>Fri, 07 Sep 2018 15:16:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Intel-MPI-issues-MPI-COMM-SIZE-returns-zero-processes/m-p/1125075#M7606</guid>
      <dc:creator>Balasubramanian__Aru</dc:creator>
      <dc:date>2018-09-07T15:16:35Z</dc:date>
    </item>
  </channel>
</rss>

