Intel® Moderncode for Parallel Architectures
Support for developing parallel programming applications on Intel® Architecture.

problem with mpich on windows HPC cluster

parsa_banihashemi

Hi,

I have a cluster running Windows HPC Server 2007 with Service Pack 1, and I cannot run MPI calls that involve communication between ranks.

When I run the following simple code on 2 nodes, using one core on each:
--------------------
program barrier_test
use mpi
implicit none
integer :: ierr, my_id, num_procs  ! rank/size arguments must be integers

call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, my_id, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, num_procs, ierr)

print *, my_id, num_procs

call MPI_Barrier(MPI_COMM_WORLD, ierr)

print *, my_id, num_procs

call MPI_Finalize(ierr)
end program barrier_test
--------------------

It performs the first print, but at MPI_Barrier it returns the error:

rank 0 unable to connect to rank 1 using business card (33709,....)

I have the same problem with MPI_Bcast, MPI_Reduce, and every other subroutine that needs communication between ranks. But if I only have it print the ranks, or have each CPU do some operations on its own, it works. I have run this code with the same version of MPICH on a PC with Windows XP communicating over Ethernet with a laptop running Windows Vista, and it worked there. Please help me out.
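(Editor's note, for others hitting this error: the "unable to connect to rank 1 using business card" message generally means rank 0 failed to open a TCP connection back to rank 1, which most often points to a firewall blocking the port between the nodes. A minimal, hypothetical diagnostic, using plain Python sockets rather than anything MPICH-specific, to check whether a TCP port on another node is reachable:)

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Self-contained demo: open a throwaway listener locally, then probe it.
    # In practice, run the listener on one cluster node and probe it from the
    # other, using the port MPICH reports in the "business card".
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # 0 = any free port
    server.listen(1)
    port = server.getsockname()[1]
    print(can_connect("127.0.0.1", port))  # should print True if reachable
    server.close()
```

If this probe fails between the two compute nodes while the MPI job is waiting at the barrier, the problem is network/firewall configuration rather than the MPI code itself.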

Regards
Parsa

Vladimir_T_Intel
Moderator
Hi Parsa,

I think it would be better to post your question to the specific forum.
