<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic coarray fortran in Intel® Fortran Compiler</title>
    <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750536#M7115</link>
    <description>&lt;P&gt;See this &lt;A href="http://software.intel.com/en-us/articles/distributed-memory-coarray-programs-with-process-pinning/"&gt;article&lt;/A&gt; for a method to compile and run a distributed-memory coarray program with process pinning to specific nodes/node processors.&lt;BR /&gt;Patrick Kennedy&lt;BR /&gt;Intel Developer Support&lt;/P&gt;</description>
    <pubDate>Tue, 01 Mar 2011 23:57:26 GMT</pubDate>
    <dc:creator>pbkenned1</dc:creator>
    <dc:date>2011-03-01T23:57:26Z</dc:date>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750522#M7101</link>
      <description>Dear all,&lt;BR /&gt;&lt;BR /&gt;There seems to be Fortran coarray support in &lt;A href="https://community.intel.com/../articles/intel-composer-xe/"&gt;Intel Fortran Composer XE 2011&lt;/A&gt;. Can someone please confirm whether or not it supports clusters?&lt;BR /&gt;&lt;BR /&gt;Thank you very much.&lt;BR /&gt;</description>
      <pubDate>Tue, 09 Nov 2010 16:47:14 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750522#M7101</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-09T16:47:14Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750523#M7102</link>
      <description>Yes, it does (on Linux), if you have an Intel Cluster Toolkit license as well. Please read the release notes for details.</description>
      <pubDate>Tue, 09 Nov 2010 17:03:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750523#M7102</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2010-11-09T17:03:12Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750524#M7103</link>
      <description>great! thanks!</description>
      <pubDate>Tue, 09 Nov 2010 17:28:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750524#M7103</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-09T17:28:05Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750525#M7104</link>
      <description>I am trying to evaluate coarray Fortran on a cluster, and I am a little confused. Let's say I want to use a total of 64 cores. I compile the program with the command: ifort -coarray -coarray-num-images=64 test.f90 -o test&lt;BR /&gt;&lt;BR /&gt;Is running the program with just ./test enough, or should I invoke some MPI commands?&lt;BR /&gt;&lt;BR /&gt;Thank you.</description>
      <pubDate>Wed, 10 Nov 2010 14:56:26 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750525#M7104</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-10T14:56:26Z</dc:date>
    </item>
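    <!-- Editor's note: a minimal, hypothetical sketch of what a coarray test program
         (such as the test.f90 mentioned above) might look like. The program name,
         messages and variable names are assumptions, not taken from the thread.

         program caf_hello
           implicit none
           integer :: me, n
           me = this_image()   ! index of this image, 1 .. num_images()
           n  = num_images()   ! total number of images the runtime started
           if (me == 1) print *, 'running with', n, 'images'
           sync all            ! barrier across all images
           print *, 'hello from image', me, 'of', n
         end program caf_hello

         Built and run in the shared-memory model roughly as in the post above:
           ifort -coarray -coarray-num-images=64 test.f90 -o test
           ./test
    -->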
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750526#M7105</link>
      <description>If you want it distributed across a cluster, use -coarray=distributed and do not use -coarray-num-images. It will use your established MPI "ring", or you can specify an MPI configuration file as an option. Please read the documentation for information on use of these options.&lt;BR /&gt;&lt;BR /&gt;Yes, you just start the program with ./test.</description>
      <pubDate>Wed, 10 Nov 2010 19:25:08 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750526#M7105</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2010-11-10T19:25:08Z</dc:date>
    </item>
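    <!-- Editor's note: a hedged usage sketch of the two launch styles described in the
         reply above. The -coarray=distributed option also appears later in this thread;
         the -coarray-config-file option and the placeholder file name cafconfig.txt are
         taken from the compiler documentation of that era and should be checked against
         the release notes before use.

         Build for distributed memory (the image count is then decided at launch time):
           ifort -coarray=distributed test.f90 -o test
           ./test            (uses the already established MPI/mpd ring)

         Or point the runtime at a configuration file holding mpiexec-style options:
           ifort -coarray=distributed -coarray-config-file=cafconfig.txt test.f90 -o test
         where cafconfig.txt might contain a single line such as:
           -n 64 ./test
    -->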
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750527#M7106</link>
      <description>thank you so much!</description>
      <pubDate>Wed, 10 Nov 2010 19:29:29 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750527#M7106</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-10T19:29:29Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750528#M7107</link>
      <description>Please let us know how it works for you.</description>
      <pubDate>Wed, 10 Nov 2010 19:57:09 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750528#M7107</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2010-11-10T19:57:09Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750529#M7108</link>
      <description>Certainly. For the shared-memory case it worked very nicely. However, for the distributed-memory model I have not yet really figured out the right command (I think). I searched the documents but did not find the right information. I have access to a big cluster, and I have installed both Intel Fortran Composer XE and the Intel Cluster Toolkit.&lt;BR /&gt;&lt;BR /&gt;First I compile my program with the command: ifort -coarray=distributed sem3dcaf.f90 -o sem3dcaf&lt;BR /&gt;&lt;BR /&gt;Then I have the following lines in the batch file that I submit:&lt;BR /&gt;&lt;BR /&gt;# Number of cores:&lt;BR /&gt;#SBATCH --nodes=8 --ntasks-per-node=8&lt;BR /&gt;&lt;BR /&gt;## Set up job environment&lt;BR /&gt;source /site/bin/jobsetup&lt;BR /&gt;# Start mpd:&lt;BR /&gt;mpdboot&lt;BR /&gt;## Run the program:&lt;BR /&gt;../bin/sem3dcaf ../input/test_nproc64_sf2.psem&lt;BR /&gt;&lt;BR /&gt;I am trying to use 64 cores. The program runs fine, but it seems to be slower than I expected. Am I doing something wrong? I added the mpdboot command because otherwise I would get an error saying that mpd had not been started.&lt;BR /&gt;&lt;BR /&gt;I would be grateful for any suggestions.&lt;BR /&gt;&lt;BR /&gt;HNG</description>
      <pubDate>Thu, 11 Nov 2010 13:42:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750529#M7108</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-11T13:42:36Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750530#M7109</link>
      <description>Our expert on this is out of the office today - I'll ask her to help you here.</description>
      <pubDate>Thu, 11 Nov 2010 15:47:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750530#M7109</guid>
      <dc:creator>Steven_L_Intel1</dc:creator>
      <dc:date>2010-11-11T15:47:20Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750531#M7110</link>
      <description>That would be great! thank you.</description>
      <pubDate>Fri, 12 Nov 2010 11:04:18 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750531#M7110</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-12T11:04:18Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750532#M7111</link>
      <description>I went further and tested with 16 cores. First I compile the program with:&lt;BR /&gt;&lt;BR /&gt;ifort -coarray=distributed -coarray-num-images=16 -O3 test.f90 -o test&lt;BR /&gt;&lt;BR /&gt;Then I have a job script which contains the following lines:&lt;BR /&gt;&lt;BR /&gt;# Number of cores:&lt;BR /&gt;#SBATCH --nodes=2 --ntasks-per-node=8&lt;BR /&gt;&lt;BR /&gt;## Set up job environment&lt;BR /&gt;source /site/bin/jobsetup&lt;BR /&gt;source ~/intel/bin/compilervars.sh intel64&lt;BR /&gt;&lt;BR /&gt;mpdboot -n 2 -r ssh -f $PBS_NODEFILE&lt;BR /&gt;mpdtrace&lt;BR /&gt;## Do some work:&lt;BR /&gt;../bin/sem3dcaf ../input/slope3d_new_ngll5_leastt_nproc16.psem&lt;BR /&gt;&lt;BR /&gt;The mpdtrace command shows the 2 nodes correctly, but the program runs extremely slowly, even for the serial part where no communication is needed at all. I think something is wrong, but I don't know what it is. I also have an MPI version of the same code, which runs very fast without any problem under Open MPI. I could not run the program correctly without the option -coarray-num-images=16, because only 8 images were detected at run time! Maybe something is wrong there.&lt;BR /&gt;&lt;BR /&gt;Thanks.</description>
      <pubDate>Fri, 12 Nov 2010 14:25:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750532#M7111</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-11-12T14:25:42Z</dc:date>
    </item>
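    <!-- Editor's note: given the report above that only 8 images were detected at run
         time, a defensive startup check like the sketch below makes such a mismatch fail
         loudly instead of silently running on too few images. The expected count of 16
         simply mirrors this thread's example; the program name is hypothetical.

         program check_images
           implicit none
           integer, parameter :: expected_images = 16   ! what the decomposition assumes
           if (num_images() /= expected_images) then
             if (this_image() == 1) then
               print *, 'expected', expected_images, 'images, got', num_images()
             end if
             error stop 'wrong number of images'        ! terminates all images
           end if
           ! ... rest of the real program ...
         end program check_images
    -->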
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750533#M7112</link>
      <description>Well ... I was going to give you an MPI-based program to compare with, but you beat me to it!&lt;BR /&gt;&lt;BR /&gt;There are two things here.&lt;BR /&gt;&lt;BR /&gt;First, I believe you when you say you saw a problem with only 8 images being started up. I saw that myself on one of our internal clusters, but then the problem "went away" before I could provide a reproducer to our MPI developers. I was never able to reproduce it again, and assumed that we had fixed something internally in the Fortran runtime support. It's quite interesting to me that you saw it too. I'll look at that one again.&lt;BR /&gt;&lt;BR /&gt;Second, about the program being utterly slow. Would you be willing to share your program with us? The best way is through Premier Support, so that we can keep a good "paper" trail.&lt;BR /&gt;&lt;BR /&gt; thank you --&lt;BR /&gt;&lt;BR /&gt; -- Lorri</description>
      <pubDate>Fri, 12 Nov 2010 18:41:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750533#M7112</guid>
      <dc:creator>Lorri_M_Intel</dc:creator>
      <dc:date>2010-11-12T18:41:21Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750534#M7113</link>
      <description>Sorry for the delayed response. Thank you for the suggestion. I have submitted an issue via premier support.&lt;DIV&gt;&lt;/DIV&gt;&lt;DIV&gt;Thanks&lt;/DIV&gt;</description>
      <pubDate>Thu, 02 Dec 2010 11:58:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750534#M7113</guid>
      <dc:creator>homng</dc:creator>
      <dc:date>2010-12-02T11:58:02Z</dc:date>
    </item>
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750535#M7114</link>
      <description>I have been having the same problem with mpd. I could not get my distributed-memory coarray example program to run just by calling the executable (as the manual states), nor with mpiexec or mpiexec.hydra. It does work with mpirun, or after manually starting the ring with mpdboot as homng did above. A full listing of terminal commands and output is below.&lt;BR /&gt;&lt;BR /&gt;However, I cannot choose a custom placement of the images on the cores: the -perhost and -ppn flags are ignored by mpirun. Is there a way to control the placement of images on cores?&lt;BR /&gt;&lt;BR /&gt;Thank you.&lt;BR /&gt;&lt;BR /&gt;=================== TERMINAL OUTPUT FOLLOWS:&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; which ifort&lt;BR /&gt;/apps/eiger/Intel-CTK-2011/bin/ifort&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; ifort -coarray=distributed -coarray-num-images=12 -o hi_caf_12 hi_caf.f90&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; ifort -coarray=distributed -coarray-num-images=24 -o hi_caf_24 hi_caf.f90&lt;BR /&gt;&lt;BR /&gt;=== THIS IS INSIDE AN INTERACTIVE PBS SESSION WITH TWO 12-CORE NODES ALLOCATED, eiger201 and eiger202, HENCE 24 CORES IN TOTAL&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; cat $PBS_NODEFILE&lt;BR /&gt;eiger201&lt;BR /&gt;eiger202&lt;BR /&gt;&lt;BR /&gt;=== JUST CALLING THE EXECUTABLE (AS THE MANUAL SUGGESTS) GIVES THE mpd ERROR&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; ./hi_caf_12&lt;BR /&gt;mpiexec_eiger201: cannot connect to local mpd (/tmp/pbs.10805.eiger170/mpd2.console_eiger201_tadrian); possible causes:&lt;BR /&gt;1. no mpd is running on this host&lt;BR /&gt;2. an mpd is running but was started without a "console" (-n option)&lt;BR /&gt;&lt;BR /&gt;==== mpiexec DOES NOT WORK EITHER&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; which mpiexec&lt;BR /&gt;/apps/eiger/Intel-CTK-2011/impi/4.0.1.007/bin64/mpiexec&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; mpiexec ./hi_caf_12&lt;BR /&gt;mpiexec_eiger201: cannot connect to local mpd (/tmp/pbs.10805.eiger170/mpd2.console_eiger201_tadrian); possible causes:&lt;BR /&gt;1. no mpd is running on this host&lt;BR /&gt;2. an mpd is running but was started without a "console" (-n option)&lt;BR /&gt;&lt;BR /&gt;==== mpiexec.hydra DOES NOT WORK EITHER&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; which mpiexec.hydra&lt;BR /&gt;/apps/eiger/Intel-CTK-2011/impi/4.0.1.007/bin64/mpiexec.hydra&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; mpiexec.hydra ./hi_caf_12&lt;BR /&gt;./hi_caf_12: error while loading shared libraries: libicaf.so: cannot open shared object file: No such file or directory&lt;BR /&gt;(the same libicaf.so error is printed once per image, 12 times in total)&lt;BR /&gt;&lt;BR /&gt;=== mpirun WORKS FOR THE DEFAULT CONFIG: WITH 12 IMAGES IT FILLS THE FIRST NODE&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; mpirun ./hi_caf_12&lt;BR /&gt;Running with 12 num images&lt;BR /&gt;I am image 1 on host eiger201&lt;BR /&gt;I am image 10 on host eiger201&lt;BR /&gt;I am image 7 on host eiger201&lt;BR /&gt;I am image 6 on host eiger201&lt;BR /&gt;I am image 5 on host eiger201&lt;BR /&gt;I am image 3 on host eiger201&lt;BR /&gt;I am image 9 on host eiger201&lt;BR /&gt;I am image 11 on host eiger201&lt;BR /&gt;I am image 8 on host eiger201&lt;BR /&gt;I am image 12 on host eiger201&lt;BR /&gt;I am image 2 on host eiger201&lt;BR /&gt;I am image 4 on host eiger201&lt;BR /&gt;&lt;BR /&gt;=== mpirun WORKS FOR THE DEFAULT CONFIG: WITH 24 IMAGES IT FILLS THE TWO NODES&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; mpirun ./hi_caf_24&lt;BR /&gt;I am image 3 on host eiger201&lt;BR /&gt;I am image 4 on host eiger201&lt;BR /&gt;I am image 7 on host eiger201&lt;BR /&gt;I am image 5 on host eiger201&lt;BR /&gt;I am image 8 on host eiger201&lt;BR /&gt;I am image 6 on host eiger201&lt;BR /&gt;I am image 2 on host eiger201&lt;BR /&gt;I am image 9 on host eiger201&lt;BR /&gt;I am image 10 on host eiger201&lt;BR /&gt;I am image 11 on host eiger201&lt;BR /&gt;Running with 24 num images&lt;BR /&gt;I am image 1 on host eiger201&lt;BR /&gt;I am image 12 on host eiger201&lt;BR /&gt;I am image 24 on host eiger202&lt;BR /&gt;I am image 20 on host eiger202&lt;BR /&gt;I am image 23 on host eiger202&lt;BR /&gt;I am image 15 on host eiger202&lt;BR /&gt;I am image 22 on host eiger202&lt;BR /&gt;I am image 18 on host eiger202&lt;BR /&gt;I am image 13 on host eiger202&lt;BR /&gt;I am image 19 on host eiger202&lt;BR /&gt;I am image 21 on host eiger202&lt;BR /&gt;I am image 17 on host eiger202&lt;BR /&gt;I am image 16 on host eiger202&lt;BR /&gt;I am image 14 on host eiger202&lt;BR /&gt;&lt;BR /&gt;==== THE PLACEMENT FLAG -ppn IS IGNORED, THOUGH. HERE ALL IMAGES ARE BOUND TO THE FIRST NODE'S CORES&lt;BR /&gt;tadrian@eiger201:~/codes/CAF/tests_caf&amp;gt; mpirun -ppn 6 ./hi_caf_12&lt;BR /&gt;Running with 12 num images&lt;BR /&gt;I am image 1 on host eiger201&lt;BR /&gt;I am image 8 on host eiger201&lt;BR /&gt;I am image 4 on host eiger201&lt;BR /&gt;I am image 10 on host eiger201&lt;BR /&gt;I am image 5 on host eiger201&lt;BR /&gt;I am image 12 on host eiger201&lt;BR /&gt;I am image 7 on host eiger201&lt;BR /&gt;I am image 6 on host eiger201&lt;BR /&gt;I am image 2 on host eiger201&lt;BR /&gt;I am image 11 on host eiger201&lt;BR /&gt;I am image 9 on host eiger201&lt;BR /&gt;I am image 3 on host eiger201</description>
      <pubDate>Thu, 20 Jan 2011 17:20:24 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750535#M7114</guid>
      <dc:creator>Adrian_Tineo</dc:creator>
      <dc:date>2011-01-20T17:20:24Z</dc:date>
    </item>
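    <!-- Editor's note: the source of hi_caf.f90 is not included in the post above; the
         following is a sketch that would produce output of the same shape. The hostname
         lookup via HOSTNM from the IFPORT portability module is an Intel-specific
         extension and is an assumption here; the rest is standard coarray Fortran.

         program hi_caf
           use ifport, only: hostnm        ! Intel portability routine for the host name
           implicit none
           character(len=64) :: host
           integer :: ierr
           ierr = hostnm(host)             ! returns 0 on success
           if (this_image() == 1) then
             print *, 'Running with', num_images(), 'num images'
           end if
           print *, 'I am image', this_image(), 'on host', trim(host)
         end program hi_caf
    -->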
    <item>
      <title>coarray fortran</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750536#M7115</link>
      <description>&lt;P&gt;See this &lt;A href="http://software.intel.com/en-us/articles/distributed-memory-coarray-programs-with-process-pinning/"&gt;article&lt;/A&gt; for a method to compile and run a distributed-memory coarray program with process pinning to specific nodes/node processors.&lt;BR /&gt;Patrick Kennedy&lt;BR /&gt;Intel Developer Support&lt;/P&gt;</description>
      <pubDate>Tue, 01 Mar 2011 23:57:26 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/coarray-fortran/m-p/750536#M7115</guid>
      <dc:creator>pbkenned1</dc:creator>
      <dc:date>2011-03-01T23:57:26Z</dc:date>
    </item>
  </channel>
</rss>

