Intel® oneAPI HPC Toolkit
Get help with building, analyzing, optimizing, and scaling high-performance computing (HPC) applications.

Linux lam/mpi to Windows OpenMP or MPI?

fort
Beginner

I wish to port a Linux Fortran application that uses LAM/MPI to Windows.

Restricting the application to a single PC with 10 cores is acceptable.

I'm looking for recommendations as to whether OpenMP or MPI is preferred.


Are there any references on converting LAM/MPI calls to OpenMP or MPI?


Thanks,

1 Solution
jimdempseyatthecove
Black Belt

LAM is the MPI cluster/network-topology shell around the MPI application itself. Your MPI application should run (or, with relatively little work, can be made to run) under mpirun/mpiexec.


Your first step would be to leave the MPI application with as few (if any) changes as possible and get it to run with mpirun or mpiexec (this could be 1 node with 10 ranks, e.g. `mpiexec -n 10 yourapp.exe`).

If that gives you acceptable results (performance-wise), then your work is done.


If you still want to go the OpenMP route....

Leave the MPI code alone. At some point in the future, you or your successor may need a distributed model.

Start by running your application from the command line (or from MS VS) without an mpirun launch. In other words, the program should run as a standalone app, with the MPI code seeing a single-rank world (a world of one).

Once your development system runs the MPI-aware program outside of an mpirun/mpiexec launch, you can then address incorporating OpenMP.
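To illustrate the standalone case, here is a minimal sketch (mine, not from the post) of an MPI-aware Fortran program that also behaves sensibly when launched without mpiexec; with Intel MPI, running the executable directly typically initializes a singleton MPI_COMM_WORLD of size 1:

```fortran
! Minimal sketch: an MPI-aware program that also runs standalone.
! Launched directly (no mpiexec), Intel MPI gives a singleton
! MPI_COMM_WORLD of size 1, so the same code path works either way.
program standalone_mpi
    use mpi
    implicit none
    integer :: ierr, nranks, myrank

    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)

    if (nranks == 1) then
        print *, 'Running standalone: a world of one rank'
    else
        print *, 'Rank ', myrank, ' of ', nranks
    end if

    call MPI_Finalize(ierr)
end program standalone_mpi
```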


Note that with this arrangement you can, if you so desire, run the application with both MPI and OpenMP within each rank, not only on your single PC but also on a cluster (yours or somewhere else).

You can experiment on your system with, say, two ranks, each occupying 5 cores, while you would generally run as one rank (a single process) with 10 cores (20 threads with hyperthreading?).
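As a hypothetical illustration of the hybrid arrangement, each MPI rank can thread its local work with OpenMP; the rank count comes from mpiexec and the thread count from OMP_NUM_THREADS (the program and array names here are illustrative, not from the post):

```fortran
! Sketch of hybrid MPI + OpenMP within a rank: each rank threads
! its own local loop with OpenMP.
program hybrid
    use mpi
    use omp_lib
    implicit none
    integer :: ierr, myrank, i
    real :: a(1000)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)

    !$omp parallel do
    do i = 1, 1000
        a(i) = sqrt(real(i))   ! per-rank work, threaded by OpenMP
    end do
    !$omp end parallel do

    print *, 'rank ', myrank, ' can use up to ', &
             omp_get_max_threads(), ' threads'
    call MPI_Finalize(ierr)
end program hybrid
```

For the two-ranks-of-5-cores experiment, this might be launched as `mpiexec -n 2 yourapp.exe` with OMP_NUM_THREADS set to 5.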


Then, after all this is working (again, with as few modifications as possible), and if you think it to your benefit, add conditional-compilation directives (e.g. !dir$ if defined(USE_MPI) and !dir$ endif) to surround the MPI statements. This way, at some later time, you can restore MPI capability with a simple define.
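A sketch of what those guards might look like around an MPI call (the USE_MPI symbol and the fallback line are illustrative; check the Intel Fortran documentation for how the symbol is set, e.g. via !dir$ define or a compiler option):

```fortran
! Fragment: MPI statements guarded by Intel Fortran conditional
! compilation. With USE_MPI undefined, the single-process fallback
! is compiled instead.
!dir$ if defined(USE_MPI)
    call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
!dir$ else
    global_sum = local_sum      ! single-process fallback
!dir$ endif
```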

Jim Dempsey


2 Replies

ShanmukhS_Intel
Moderator

Hi,


Thanks for accepting the solution. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Best Regards,

Shanmukh.SS


