Intel® MPI Library

How to share an array between two processes running on different nodes (hosts).

ArthurRatz

Dear colleagues,

Please help me to solve the following problem:

How do I share an array between two processes running on different nodes (hosts)?

I have the following code:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <time.h>
#include <Windows.h>

#include <omp.h>
#include <mpi.h>

typedef unsigned long long ullong;

const ullong number_of_items = 100;

int main(int argc, char* argv[])
{
 //MPI_Init(&argc, &argv);
 int prov;
 MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &prov);

 ullong* array = NULL;
 double startwtime = 0.0, endwtime = 0.00;
 int namelen, numprocs, proc_rank, rank_sm, numprocs_sm;
 char processor_name[MPI_MAX_PROCESSOR_NAME];
 
 startwtime = MPI_Wtime();

 MPI_Win win_sm; MPI_Comm comm_sm;

 MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
 MPI_Comm_rank(MPI_COMM_WORLD, &proc_rank);

 MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &comm_sm);
 MPI_Comm_rank(comm_sm, &rank_sm);
 MPI_Comm_size(comm_sm, &numprocs_sm);

 MPI_Get_processor_name(processor_name, &namelen);

 int disp_size = sizeof(ullong);
 MPI_Aint array_size = number_of_items * disp_size;
 // Allocate a shared-memory window over comm_sm, then query rank 0's
 // segment so every local rank points at the same base address.
 MPI_Win_allocate_shared(array_size, disp_size, MPI_INFO_NULL, comm_sm, &array, &win_sm);
 MPI_Win_shared_query(win_sm, 0, &array_size, &disp_size, &array);

 MPI_Barrier(comm_sm);

 if (rank_sm == 0)
 {
  for (ullong index = 0; index < number_of_items; index++)
   array[index] = rand() % 100;
 }

 MPI_Barrier(comm_sm);

 if (proc_rank == 0)
 {
  array[5] = 9999;
 }

 else
 {
  array[4] = 5555;
 }

 MPI_Barrier(MPI_COMM_WORLD);

 fprintf(stdout, "[Unsorted]:");
 for (ullong i = 0; i < number_of_items; i++)
  fprintf(stdout, "%llu ", array[i]); /* print the element, not the pointer */
 fprintf(stdout, "\n\n");

 MPI_Barrier(MPI_COMM_WORLD);
 MPI_Finalize();

 return 0;
}

If I run this code on a single node, then it works correctly:

E:\>mpiexec -n 4 1.exe
[Unsorted]:41 67 34 0 5555 9999 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2
53 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68
47 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1
93 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

[Unsorted]:41 67 34 0 5555 9999 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2
53 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68
47 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1
93 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

[Unsorted]:41 67 34 0 5555 9999 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2
53 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68
47 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1
93 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

[Unsorted]:41 67 34 0 5555 9999 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2
53 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68
47 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1
93 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

However, if I run this code on two nodes (hosts), the updates to the array are not shared between the processes:

E:\>mpiexec -n 2 -ppn 1 -hosts 2 192.168.0.100 1 192.168.0.150 1 1.exe
[Unsorted]:41 67 34 0 69 9999 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2 53
 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68 47
 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1 93
 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

[Unsorted]:41 67 34 0 5555 24 78 58 62 64 5 45 81 27 61 91 95 42 27 36 91 4 2 53
 92 82 21 16 18 95 47 26 71 38 69 12 67 99 35 94 3 11 22 33 73 64 41 11 53 68 47
 44 62 57 37 59 23 41 29 78 16 35 90 42 88 6 40 42 64 48 46 5 90 29 70 50 6 1 93
 48 29 23 84 54 56 40 66 76 31 8 44 39 26 23 37 38 18 82 29 41

 

ArthurRatz

Actually, I can't figure out why the allocated memory cannot be shared across the nodes (hosts)?!
