<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Linux lam/mpi to Windows OpenMP or MPI? in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1364805#M9232</link>
    <description>&lt;P&gt;LAM is the MPI run-time environment (the cluster/network topology shell) that launches the MPI application. Your MPI application should run as-is, or with relatively little work, using mpirun/mpiexec.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your first step would be to get the application to run with mpirun or mpiexec while changing it as little as possible, ideally not at all (this could be 1 node with 10 ranks).&lt;/P&gt;
&lt;P&gt;If that gives you acceptable results (performance-wise), then your work is done.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you still want to go the OpenMP route....&lt;/P&gt;
&lt;P&gt;Leave the MPI code alone. At some point in the future, you or your successor may need a distributed model.&lt;/P&gt;
&lt;P&gt;Start by running your application from the command line (or from MS VS) without an mpirun launch. In other words, the program should run as a standalone app, with the MPI code seeing a world of one rank.&lt;/P&gt;
&lt;P&gt;Once your development system can run the MPI-aware program outside of an mpirun/mpiexec launch, you can then address incorporating OpenMP.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that with this arrangement you can, if you so desire, run the application with both MPI and OpenMP within each rank, not only on your single PC but also on a cluster (yours or elsewhere).&lt;/P&gt;
&lt;P&gt;You can experiment on your system, say with two ranks, each occupying 5 cores, whereas you would generally run as a single rank (one process) using all 10 cores (possibly 20 hardware threads).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then, after all of this is working (again, with as few modifications as possible), and if you think it to your benefit, add conditional compilation directives (e.g. !dir$ if defined(USE_MPI) and !dir$ endif) to surround the MPI statements. That way, at some later time, you can restore MPI capability with a simple define.&lt;/P&gt;
&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
    <pubDate>Tue, 01 Mar 2022 14:33:01 GMT</pubDate>
    <dc:creator>jimdempseyatthecove</dc:creator>
    <dc:date>2022-03-01T14:33:01Z</dc:date>
    <item>
      <title>Linux lam/mpi to Windows OpenMP or MPI?</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1364555#M9231</link>
      <description>&lt;P&gt;I wish to port a Linux fortran application using lam/mpi to Windows.&lt;/P&gt;
&lt;P&gt;Restricting the application to a single PC with 10 cores is acceptable.&lt;/P&gt;
&lt;P&gt;I'm looking for recommendations as to whether OpenMP or MPI is preferred.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any references to converting lam/mpi directives to OpenMP or MPI?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 28 Feb 2022 23:31:57 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1364555#M9231</guid>
      <dc:creator>fort</dc:creator>
      <dc:date>2022-02-28T23:31:57Z</dc:date>
    </item>
    <item>
      <title>Re: Linux lam/mpi to Windows OpenMP or MPI?</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1364805#M9232</link>
      <description>&lt;P&gt;LAM is the MPI run-time environment (the cluster/network topology shell) that launches the MPI application. Your MPI application should run as-is, or with relatively little work, using mpirun/mpiexec.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your first step would be to get the application to run with mpirun or mpiexec while changing it as little as possible, ideally not at all (this could be 1 node with 10 ranks).&lt;/P&gt;
&lt;P&gt;If that gives you acceptable results (performance-wise), then your work is done.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you still want to go the OpenMP route....&lt;/P&gt;
&lt;P&gt;Leave the MPI code alone. At some point in the future, you or your successor may need a distributed model.&lt;/P&gt;
&lt;P&gt;Start by running your application from the command line (or from MS VS) without an mpirun launch. In other words, the program should run as a standalone app, with the MPI code seeing a world of one rank.&lt;/P&gt;
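The standalone step above can be sketched as follows (a minimal sketch; the program and variable names are illustrative, and it assumes the MPI library supports singleton initialization, i.e. a launch without mpiexec yields a world of one rank):

```fortran
! Illustrative sketch: an MPI-aware program that also runs standalone.
! Launched without mpirun/mpiexec, MPI_Init performs a singleton init
! and MPI_COMM_WORLD contains exactly one rank.
program standalone_check
    use mpi
    implicit none
    integer :: ierr, nranks, myrank

    call MPI_Init(ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)

    if (nranks == 1) then
        print *, 'Running standalone: world of one rank'
    else
        print *, 'Rank ', myrank, ' of ', nranks
    end if

    call MPI_Finalize(ierr)
end program standalone_check
```

If this prints the standalone message when run directly from the command line, the program is ready for the OpenMP work described next.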
&lt;P&gt;Once your development system can run the MPI-aware program outside of an mpirun/mpiexec launch, you can then address incorporating OpenMP.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that with this arrangement you can, if you so desire, run the application with both MPI and OpenMP within each rank, not only on your single PC but also on a cluster (yours or elsewhere).&lt;/P&gt;
&lt;P&gt;You can experiment on your system, say with two ranks, each occupying 5 cores, whereas you would generally run as a single rank (one process) using all 10 cores (possibly 20 hardware threads).&lt;/P&gt;
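The two-ranks-by-five-cores experiment might look like this on a Windows command line (a sketch only; the executable name is hypothetical, and exact pinning flags vary by MPI library and version):

```
rem Hybrid experiment: 2 MPI ranks, 5 OpenMP threads per rank
set OMP_NUM_THREADS=5
mpiexec -n 2 myapp.exe

rem Typical production run: 1 rank using all cores
set OMP_NUM_THREADS=10
myapp.exe
```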
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Then, after all of this is working (again, with as few modifications as possible), and if you think it to your benefit, add conditional compilation directives (e.g. !dir$ if defined(USE_MPI) and !dir$ endif) to surround the MPI statements. That way, at some later time, you can restore MPI capability with a simple define.&lt;/P&gt;
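The guarding described above might look like this in Intel Fortran (a sketch; USE_MPI is a hypothetical define name, and the surrounding variables are illustrative):

```fortran
! Guard MPI statements with Intel Fortran conditional compilation
! directives; enable by defining USE_MPI on the compile line
! (e.g. -DUSE_MPI on Linux; the flag name is an assumption).
!dir$ if defined(USE_MPI)
    call MPI_Allreduce(local_sum, global_sum, 1, MPI_REAL8, MPI_SUM, MPI_COMM_WORLD, ierr)
!dir$ else
    global_sum = local_sum    ! single-process fallback
!dir$ endif
```

With the guards in place, the same source builds either as a standalone OpenMP program or, with one define, as the original distributed MPI program.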
&lt;P&gt;Jim Dempsey&lt;/P&gt;</description>
      <pubDate>Tue, 01 Mar 2022 14:33:01 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1364805#M9232</guid>
      <dc:creator>jimdempseyatthecove</dc:creator>
      <dc:date>2022-03-01T14:33:01Z</dc:date>
    </item>
    <item>
      <title>Re: Linux lam/mpi to Windows OpenMP or MPI?</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1365031#M9233</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thanks for accepting the solution. If you need any additional information, please post a new question as this thread will no longer be monitored by Intel.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Shanmukh.SS&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Mar 2022 06:40:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Linux-lam-mpi-to-Windows-OpenMP-or-MPI/m-p/1365031#M9233</guid>
      <dc:creator>ShanmukhS_Intel</dc:creator>
      <dc:date>2022-03-02T06:40:47Z</dc:date>
    </item>
  </channel>
</rss>

