Intel® C++ Compiler

Unable to execute programs with Intel MPI under Windows Subsystem for Linux (WSL)

Li__Tianyi
The same issue has also been posted on the WSL repository: https://github.com/Microsoft/WSL/issues/3231
 
* Windows build number: Microsoft Windows [version 10.0.17134.48]
 
I used the simple test program provided along with Intel MPI:
#include "mpi.h"
#include <stdio.h>
#include <string.h>

int
main (int argc, char *argv[])
{
    int i, rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Status stat;

    MPI_Init (&argc, &argv);

    MPI_Comm_size (MPI_COMM_WORLD, &size);
    MPI_Comm_rank (MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name (name, &namelen);

    if (rank == 0) {

        printf ("Hello world: rank %d of %d running on %s\n", rank, size, name);

        for (i = 1; i < size; i++) {
            MPI_Recv (&rank, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &stat);
            MPI_Recv (&size, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &stat);
            MPI_Recv (&namelen, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &stat);
            MPI_Recv (name, namelen + 1, MPI_CHAR, i, 1, MPI_COMM_WORLD, &stat);
            printf ("Hello world: rank %d of %d running on %s\n", rank, size, name);
        }

    } else {

        MPI_Send (&rank, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        MPI_Send (&size, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        MPI_Send (&namelen, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
        MPI_Send (name, namelen + 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);

    }

    MPI_Finalize ();

    return (0);
}
Compiling and running the program with Open MPI (1.10.2, installed via `apt install`) raises no issues:
 
mpicc test.c; mpirun -np 4 ./a.out
Hello world: rank 0 of 4 running on tianyi
Hello world: rank 1 of 4 running on tianyi
Hello world: rank 2 of 4 running on tianyi
Hello world: rank 3 of 4 running on tianyi
 
However, using Intel MPI (`compilers_and_libraries_2017.7.259`; I uninstalled Open MPI beforehand to prevent possible include/library conflicts), I get the following error:
===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 1734 RUNNING AT tianyi
=   EXIT CODE: 139
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================

===================================================================================
=   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
=   PID 1734 RUNNING AT tianyi
=   EXIT CODE: 11
=   CLEANING UP REMAINING PROCESSES
=   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
   Intel(R) MPI Library troubleshooting guide:
      https://software.intel.com/node/561764
===================================================================================
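For reference, the environment setup and the build/run commands for the Intel MPI case are roughly the following (the install prefix is an assumption based on the default location for this package version):

# assumed default install prefix for compilers_and_libraries_2017.7.259
source /opt/intel/compilers_and_libraries_2017.7.259/linux/mpi/intel64/bin/mpivars.sh
mpicc test.c
mpirun -np 4 ./a.out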
Using `mpiicc` gives the same result. Occasionally I also get the following output instead:
[proxy:0:0@tianyi] HYDU_sock_write (../../utils/sock/sock.c:418): write error (Broken pipe)
[proxy:0:0@tianyi] send_cmd_downstream (../../pm/pmiserv/pmip_pmi_v1.c:146): error writing PMI line
[proxy:0:0@tianyi] fn_get_my_kvsname (../../pm/pmiserv/pmip_pmi_v1.c:491): error sending PMI response
[proxy:0:0@tianyi] pmi_cb (../../pm/pmiserv/pmip_cb.c:822): PMI handler returned error
[proxy:0:0@tianyi] HYDT_dmxu_poll_wait_for_event (../../tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0@tianyi] main (../../pm/pmiserv/pmip.c:558): demux engine error waiting for event
[mpiexec@tianyi] control_cb (../../pm/pmiserv/pmiserv_cb.c:798): connection to proxy 0 at host tianyi failed
[mpiexec@tianyi] HYDT_dmxu_poll_wait_for_event (../../tools/demux/demux_poll.c:76): callback returned error status
[mpiexec@tianyi] HYD_pmci_wait_for_completion (../../pm/pmiserv/pmiserv_pmci.c:501): error waiting for event
[mpiexec@tianyi] main (../../ui/mpich/mpiexec.c:1147): process manager error waiting for completion
Based on a similar test with a Fortran program, the failure appears to happen during `MPI_Init`.
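For completeness, the kind of minimal check I mean is sketched below in C (my actual test was a Fortran program doing the same thing): if the crash really is inside `MPI_Init`, the second message never appears.

#include "mpi.h"
#include <stdio.h>

int
main (int argc, char *argv[])
{
    printf ("before MPI_Init\n");
    fflush (stdout);

    /* Under WSL with Intel MPI, the segmentation fault would occur here. */
    MPI_Init (&argc, &argv);

    printf ("after MPI_Init\n");
    fflush (stdout);

    MPI_Finalize ();

    return (0);
}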