<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Can you help me to figure out why this program cannot be executed using more than 2 processors? in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107408#M5289</link>
    <description>&lt;P&gt;This problem is already solved. Thank you.&lt;/P&gt;

&lt;P&gt;Actually, I've got another question: for an unknown reason, my MPI program doesn't work (it hangs) when the processes are launched on different nodes (hosts). In my program I use the MPI_Win_allocate_shared function to allocate shared memory through an RMA window, and I'm wondering what could possibly cause the program to fail. Do I actually need to implement intercommunicators for that purpose?&lt;/P&gt;

&lt;P&gt;I'm sorry, but I can't provide any sources yet.&lt;/P&gt;

&lt;P&gt;Waiting for your reply.&lt;/P&gt;

&lt;P&gt;Cheers, Arthur.&lt;/P&gt;</description>
    <pubDate>Sat, 30 Jan 2016 09:06:02 GMT</pubDate>
    <dc:creator>ArthurRatz</dc:creator>
    <dc:date>2016-01-30T09:06:02Z</dc:date>
    <item>
      <title>Can you help me to figure out why this program cannot be executed using more than 2 processors?</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107406#M5287</link>
      <description>&lt;P&gt;Dear Colleagues,&lt;/P&gt;

&lt;P&gt;Recently, I developed an MPI program that sorts an array of N=10^6 data items of type __int64. The sorting work is shared among more than 10 processes. To share data between the processes, I created an MPI window using the MPI_Win_allocate_shared function. When sorting the array of N=10^6 data items with 10 or more processes, the program hangs (i.e., the sorting never completes). The program sorts correctly only when run with no more than 2 processes.&lt;/P&gt;
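
&lt;P&gt;For reference, the shared-window setup in my program looks roughly like this (a simplified sketch rather than the attached sources; names are illustrative):&lt;/P&gt;

&lt;PRE&gt;#include &amp;lt;mpi.h&amp;gt;

int main(int argc, char *argv[])
{
    MPI_Init(&amp;amp;argc, &amp;amp;argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;rank);

    const MPI_Aint N = 1000000;   /* 10^6 data items               */
    __int64 *data = NULL;         /* __int64: MSVC/Intel on Windows */
    MPI_Win win;

    /* Rank 0 allocates the whole array; the others attach with size 0. */
    MPI_Aint bytes = (rank == 0) ? N * sizeof(__int64) : 0;
    MPI_Win_allocate_shared(bytes, sizeof(__int64), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &amp;amp;data, &amp;amp;win);

    /* Every rank asks for a direct pointer to rank 0's segment. */
    MPI_Aint qbytes;
    int disp_unit;
    MPI_Win_shared_query(win, 0, &amp;amp;qbytes, &amp;amp;disp_unit, &amp;amp;data);

    /* ... each rank then sorts its share of data[] in place ... */

    MPI_Win_free(&amp;amp;win);
    MPI_Finalize();
    return 0;
}&lt;/PRE&gt;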

&lt;P&gt;Can you help me to figure out why this program cannot be executed using more than 2 processors (see the attachment)?&lt;/P&gt;

&lt;P&gt;I've compiled and run the program as follows:&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;mpiicpc -o sortmpi_shared.exe sortmpi_shared.cpp&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;&lt;STRONG&gt;mpiexec -np 10 sortmpi_shared.exe&lt;/STRONG&gt;&lt;/P&gt;

&lt;P&gt;Thanks a lot. Waiting for your reply.&lt;/P&gt;

&lt;P&gt;Cheers, Arthur.&lt;/P&gt;</description>
      <pubDate>Thu, 28 Jan 2016 05:18:39 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107406#M5287</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-01-28T05:18:39Z</dc:date>
    </item>
    <item>
      <title>Hello Arthur,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107407#M5288</link>
      <description>&lt;P&gt;Hello Arthur,&lt;/P&gt;

&lt;P&gt;Can you provide the source? I found only executable and cfg files in your zip file.&lt;/P&gt;

&lt;P&gt;Thanks,&lt;/P&gt;

&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Fri, 29 Jan 2016 22:01:35 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107407#M5288</guid>
      <dc:creator>Mark_L_Intel</dc:creator>
      <dc:date>2016-01-29T22:01:35Z</dc:date>
    </item>
    <item>
      <title>This problem is already</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107408#M5289</link>
      <description>&lt;P&gt;This problem is already solved. Thank you.&lt;/P&gt;

&lt;P&gt;Actually, I've got another question: for an unknown reason, my MPI program doesn't work (it hangs) when the processes are launched on different nodes (hosts). In my program I use the MPI_Win_allocate_shared function to allocate shared memory through an RMA window, and I'm wondering what could possibly cause the program to fail. Do I actually need to implement intercommunicators for that purpose?&lt;/P&gt;

&lt;P&gt;I'm sorry, but I can't provide any sources yet.&lt;/P&gt;

&lt;P&gt;Waiting for your reply.&lt;/P&gt;

&lt;P&gt;Cheers, Arthur.&lt;/P&gt;</description>
      <pubDate>Sat, 30 Jan 2016 09:06:02 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107408#M5289</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-01-30T09:06:02Z</dc:date>
    </item>
    <item>
      <title>You do not need to implement</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107409#M5290</link>
      <description>&lt;P&gt;You do not need to implement intercommunicators. This paper&lt;/P&gt;

&lt;P&gt;&lt;A href="http://goparallel.sourceforge.net/wp-content/uploads/2015/06/PUM21-2-An_Introduction_to_MPI-3.pdf"&gt;http://goparallel.sourceforge.net/wp-content/uploads/2015/06/PUM21-2-An_Introduction_to_MPI-3.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;contains links to downloadable sources illustrating the MPI-3 shared memory programming model in a multi-node setting, e.g.:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://tinyurl.com/MPI-SHM-example"&gt;http://tinyurl.com/MPI-SHM-example&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Could you&amp;nbsp;try to run&amp;nbsp;this&amp;nbsp;first example from the paper&amp;nbsp;on your cluster (and provide results)?&lt;/P&gt;

&lt;P&gt;Here is another quote from the paper that might help: "The function MPI_Comm_split_type enables programmers to determine the maximum groups of MPI ranks that allow such memory sharing. This function has a powerful capability to create “islands” of processes on each node that belong to the output communicator shmcomm". Do you use this function?&lt;/P&gt;

&lt;P&gt;You'd also need to distinguish between ranks on the node versus ranks belonging to different nodes. As you can see, we used MPI_Group_translate_ranks for this purpose.&lt;/P&gt;
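
&lt;P&gt;To illustrate, here is a minimal sketch along the lines of the paper's sample (my own simplified version, not a verbatim excerpt):&lt;/P&gt;

&lt;PRE&gt;#include &amp;lt;mpi.h&amp;gt;
#include &amp;lt;stdio.h&amp;gt;
#include &amp;lt;stdlib.h&amp;gt;

int main(int argc, char *argv[])
{
    MPI_Init(&amp;amp;argc, &amp;amp;argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &amp;amp;world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &amp;amp;world_size);

    /* Create an "island" communicator of the ranks that can share
       memory, i.e. the ranks residing on the same node. */
    MPI_Comm shmcomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &amp;amp;shmcomm);

    /* Translate world ranks into shmcomm ranks: MPI_UNDEFINED means
       that world rank belongs to a different node. */
    MPI_Group world_group, shm_group;
    MPI_Comm_group(MPI_COMM_WORLD, &amp;amp;world_group);
    MPI_Comm_group(shmcomm, &amp;amp;shm_group);

    int *wranks = malloc(world_size * sizeof(int));
    int *sranks = malloc(world_size * sizeof(int));
    for (int i = 0; i &amp;lt; world_size; i++) wranks[i] = i;
    MPI_Group_translate_ranks(world_group, world_size, wranks,
                              shm_group, sranks);

    if (world_rank == 0)
        for (int i = 0; i &amp;lt; world_size; i++)
            if (sranks[i] == MPI_UNDEFINED)
                printf("world rank %d is on a remote node\n", i);

    free(wranks); free(sranks);
    MPI_Group_free(&amp;amp;shm_group);
    MPI_Group_free(&amp;amp;world_group);
    MPI_Comm_free(&amp;amp;shmcomm);
    MPI_Finalize();
    return 0;
}&lt;/PRE&gt;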

&lt;P&gt;Cheers,&lt;/P&gt;

&lt;P&gt;Mark&lt;/P&gt;

&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 02 Feb 2016 02:29:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107409#M5290</guid>
      <dc:creator>Mark_L_Intel</dc:creator>
      <dc:date>2016-02-02T02:29:54Z</dc:date>
    </item>
    <item>
      <title>Hello, Mark.</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107410#M5291</link>
      <description>&lt;P&gt;Hello, Mark.&lt;/P&gt;

&lt;P&gt;I've already tested the example you provided on my cluster. Here are the results:&lt;/P&gt;

&lt;P&gt;E:\&amp;gt;mpiexec -n 4 -ppn 2 -hosts 2 192.168.0.100 1 192.168.0.150 1 1.exe&lt;BR /&gt;
	Fatal error in MPI_Win_lock_all: Invalid MPI_Win, error stack:&lt;BR /&gt;
	MPI_Win_lock_all(158): MPI_Win_lock_all(MPI_MODE_NOCHECK, win=0x0) failed&lt;BR /&gt;
	MPI_Win_lock_all(103): Invalid MPI_Win&lt;BR /&gt;
	Fatal error in MPI_Win_lock_all: Invalid MPI_Win, error stack:&lt;BR /&gt;
	MPI_Win_lock_all(158): MPI_Win_lock_all(MPI_MODE_NOCHECK, win=0x0) failed&lt;BR /&gt;
	MPI_Win_lock_all(103): Invalid MPI_Win&lt;BR /&gt;
	Fatal error in MPI_Win_lock_all: Invalid MPI_Win, error stack:&lt;BR /&gt;
	MPI_Win_lock_all(158): MPI_Win_lock_all(MPI_MODE_NOCHECK, win=0x5f) failed&lt;BR /&gt;
	MPI_Win_lock_all(103): Invalid MPI_Win&lt;BR /&gt;
	Fatal error in MPI_Win_lock_all: Invalid MPI_Win, error stack:&lt;BR /&gt;
	MPI_Win_lock_all(158): MPI_Win_lock_all(MPI_MODE_NOCHECK, win=0x98) failed&lt;BR /&gt;
	MPI_Win_lock_all(103): Invalid MPI_Win&lt;/P&gt;

&lt;P&gt;E:\&amp;gt;mpiexec -n 4 1.exe&lt;BR /&gt;
	i'm rank 2 with 2 intranode partners, 1 (1), 3 (3)&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 1, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 3, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	i'm rank 3 with 2 intranode partners, 2 (2), 0 (0)&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 2, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 0, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	i'm rank 1 with 2 intranode partners, 0 (0), 2 (2)&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 0, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 2, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	i'm rank 0 with 2 intranode partners, 3 (3), 1 (1)&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 3, numtasks 4 on COMP-PC.MYHOME.NET&lt;BR /&gt;
	load MPI/SHM values from neighbour: rank 1, numtasks 4 on COMP-PC.MYHOME.NET&lt;/P&gt;

&lt;P&gt;*BUT* I actually can't figure out how this sample can be used to solve the problem I stated.&lt;/P&gt;

&lt;P&gt;My goal in solving this problem is to avoid using MPI_Send/MPI_Recv between processes on different nodes.&lt;/P&gt;

&lt;P&gt;Normally, I need to use the MPI_Comm_split_type, MPI_Win_allocate_shared, and MPI_Win_shared_query functions.&lt;/P&gt;
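
&lt;P&gt;That is, I have roughly this call sequence in mind (just a sketch; sizes and ranks are placeholders):&lt;/P&gt;

&lt;PRE&gt;/* Sketch of the intended call sequence; local_items and peer_rank
   are illustrative placeholders, not my actual code. */
MPI_Comm shmcomm;
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &amp;amp;shmcomm);

MPI_Aint local_items = 100000;   /* items owned by this rank */
__int64 *base = NULL;
MPI_Win win;
MPI_Win_allocate_shared(local_items * sizeof(__int64), sizeof(__int64),
                        MPI_INFO_NULL, shmcomm, &amp;amp;base, &amp;amp;win);

/* Direct pointer to the segment of another rank in the same island. */
int peer_rank = 0;
MPI_Aint size;
int disp_unit;
__int64 *peer_base = NULL;
MPI_Win_shared_query(win, peer_rank, &amp;amp;size, &amp;amp;disp_unit,
                     &amp;amp;peer_base);&lt;/PRE&gt;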

&lt;P&gt;&lt;FONT face="Consolas" size="2"&gt;&lt;FONT face="Consolas" size="2"&gt;In your recent post, you've told me that &lt;/FONT&gt;&lt;/FONT&gt;MPI_Comm_split_type has a powerful capability to create process islands on&lt;/P&gt;

&lt;P&gt;different nodes (hosts). Can you tell me or provide a sample how to do it ?&lt;/P&gt;

&lt;P&gt;Thanks in advance. Waiting for your reply.&lt;/P&gt;

&lt;P&gt;Cheers, Arthur.&lt;/P&gt;</description>
      <pubDate>Tue, 02 Feb 2016 07:58:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107410#M5291</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-02T07:58:40Z</dc:date>
    </item>
    <item>
      <title>I'd need to reproduce this error</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107411#M5292</link>
      <description>&lt;P&gt;I'd need to reproduce this error.&lt;/P&gt;

&lt;P&gt;Quick comments regarding your questions.&lt;/P&gt;

&lt;P&gt;MPI-3 SHM should not be confused with PGAS (with its global address space) or with one-sided/RMA, even though it relies on the MPI-3 RMA framework. The MPI-3 SHM programming model enables MPI ranks &lt;EM&gt;within a shared memory domain (typically processes on the same node) &lt;/EM&gt;to allocate shared memory for &lt;EM&gt;direct &lt;/EM&gt;load/store access. In this sense, it is exactly like the hybrid MPI + OpenMP (or threads) model. So, when you say that you do not want to use MPI_Send/MPI_Recv between the nodes: what mechanism/functions do you want to use instead?&lt;/P&gt;
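
&lt;P&gt;To make the load/store analogy concrete, here is a minimal sketch (win, shmcomm, and peer_base are placeholders for the window, node-local communicator, and pointer obtained from MPI_Win_allocate_shared / MPI_Win_shared_query):&lt;/P&gt;

&lt;PRE&gt;MPI_Win_lock_all(MPI_MODE_NOCHECK, win);  /* open a passive epoch */

peer_base[0] = 42;      /* plain store: no MPI call is involved  */
MPI_Win_sync(win);      /* memory barrier for the shared window  */
MPI_Barrier(shmcomm);   /* order the accesses among the ranks    */
long long v = peer_base[0];   /* plain load                      */

MPI_Win_unlock_all(win);
printf("read back %lld\n", v);&lt;/PRE&gt;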

&lt;P&gt;The sample and paper (referenced in my previous post) already contain all of the API functions you mentioned, including the recommended usage model for MPI_Comm_split_type. Figure 2 in the paper will hopefully be helpful too. That said, please do not hesitate to ask additional questions.&lt;/P&gt;

&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 02:46:28 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107411#M5292</guid>
      <dc:creator>Mark_L_Intel</dc:creator>
      <dc:date>2016-02-03T02:46:28Z</dc:date>
    </item>
    <item>
      <title>Mark, Thanks a lot for your</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107412#M5293</link>
      <description>&lt;P&gt;Mark, thanks a lot for your answer. I appreciate it very much.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 02:57:00 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107412#M5293</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-03T02:57:00Z</dc:date>
    </item>
    <item>
      <title>And one more question: is it</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107413#M5294</link>
      <description>&lt;P&gt;And one more question: is it possible to implement a global address space shared between multiple nodes (hosts) using MPI, without using PGAS? Can you point me to a particular framework, such as MPI-3 RMA, that can be used for that purpose? Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 03:00:30 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107413#M5294</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-03T03:00:30Z</dc:date>
    </item>
    <item>
      <title>And the last question: how</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107414#M5295</link>
      <description>&lt;P&gt;And the last question: how can PGAS be used along with the MPI library? Can you post an example, if possible?&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 03:02:27 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107414#M5295</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-03T03:02:27Z</dc:date>
    </item>
    <item>
      <title>And one more thing, recently</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107415#M5296</link>
      <description>&lt;P&gt;And one more thing: recently I tried to allocate memory on multiple nodes through an RMA window, using the MPI_Win_create, MPI_Get, and MPI_Put functions, and it worked for me much as if I had used MPI_Send and MPI_Recv. Can you explain why it doesn't work only when I use the MPI_Win_allocate_shared and MPI_Comm_split_type functions?&lt;/P&gt;</description>
      <pubDate>Wed, 03 Feb 2016 04:32:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107415#M5296</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-03T04:32:33Z</dc:date>
    </item>
    <item>
      <title> Yes, PGAS can be implemented</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107416#M5297</link>
      <description>&lt;P&gt;Yes, PGAS can be implemented using MPI-3 RMA; e.g., please see the following (and the references therein):&lt;/P&gt;

&lt;P&gt;&lt;A href="http://arxiv.org/pdf/1507.01773.pdf"&gt;DART: http://arxiv.org/pdf/1507.01773.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;OpenSHMEM: &lt;A href="http://www.csm.ornl.gov/workshops/openshmem2013/documents/ImplementingOpenSHMEM%20UsingMPI-3.pdf"&gt;http://www.csm.ornl.gov/workshops/openshmem2013/documents/ImplementingOpenSHMEM%20UsingMPI-3.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;&lt;A href="http://mug.mvapich.cse.ohio-state.edu/static/media/mug/presentations/2014/hammond.pdf"&gt;http://mug.mvapich.cse.ohio-state.edu/static/media/mug/presentations/2014/hammond.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;These two preprints from ANL are also excellent:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://www.mcs.anl.gov/uploads/cels/papers/P4014-0113.pdf"&gt;http://www.mcs.anl.gov/uploads/cels/papers/P4014-0113.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;&lt;A href="http://www.mcs.anl.gov/papers/P4062-0413_1.pdf"&gt;http://www.mcs.anl.gov/papers/P4062-0413_1.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;Yes, PGAS can be used along with MPI; e.g., the MVAPICH team at OSU supports such an MPI/PGAS hybrid model through its MVAPICH2-X offering:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://mvapich.cse.ohio-state.edu/"&gt;http://mvapich.cse.ohio-state.edu/&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;This is a good presentation from this group on the subject:&lt;/P&gt;

&lt;P&gt;&lt;A href="http://mvapich.cse.ohio-state.edu/static/media/talks/slide/osc_theater-PGAS.pdf"&gt;http://mvapich.cse.ohio-state.edu/static/media/talks/slide/osc_theater-PGAS.pdf&lt;/A&gt;&lt;/P&gt;

&lt;P&gt;On your last question:&lt;/P&gt;

&lt;P&gt;"Recently I tried to allocate memory on multiple nodes through an RMA window, using the MPI_Win_create, MPI_Get, and MPI_Put functions, and it worked for me much as if I had used MPI_Send and MPI_Recv. Can you explain why it doesn't work only when I use the MPI_Win_allocate_shared and MPI_Comm_split_type functions?"&lt;/P&gt;

&lt;P&gt;As I said above, the MPI-3 SHM model (using MPI_Win_allocate_shared, MPI_Comm_split_type, etc.) is closer to the hybrid MPI + OpenMP model than to RMA, even though it relies on RMA. If you look under the hood, MPI-3 SHM provides direct load/store memory access, exactly as in the case of threads (btw, with all of its well-known pitfalls, such as data races, etc.).&lt;/P&gt;
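
&lt;P&gt;Schematically, the contrast looks like this (a sketch with placeholder names, not a complete program):&lt;/P&gt;

&lt;PRE&gt;/* One-sided RMA across nodes: every transfer passes through MPI
   (buf, n, and target are illustrative placeholders). */
MPI_Win win;
MPI_Win_create(buf, n * sizeof(long long), sizeof(long long),
               MPI_INFO_NULL, MPI_COMM_WORLD, &amp;amp;win);

MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
MPI_Put(buf, n, MPI_LONG_LONG, target, 0, n, MPI_LONG_LONG, win);
MPI_Win_unlock(target, win);   /* MPI moves the data to 'target' */

/* MPI-3 SHM, by contrast, is valid only within one node, where a
   plain store suffices and MPI never sees the access:
       peer_base[i] = value;                                      */&lt;/PRE&gt;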

&lt;P&gt;Citing &lt;A href="http://www.mcs.anl.gov/~thakur/papers/shmem-win.pdf"&gt;http://www.mcs.anl.gov/~thakur/papers/shmem-win.pdf&lt;/A&gt;,&lt;/P&gt;

&lt;P&gt;while in the&lt;/P&gt;

&lt;P&gt;"one-sided communication interface, the user allocates memory and then exposes it in a window. This model of window creation is not compatible with the inter-process shared-memory support provided by most operating systems",&lt;/P&gt;

&lt;P&gt;in MPI-3 SHM, through the mechanism described in that paper, we end up with a truly shared memory environment; so, for example:&lt;/P&gt;

&lt;P&gt;"Load/store operations do not pass through the MPI library; and, as a result, MPI is unaware of which locations were accessed and whether data was updated"&lt;/P&gt;

&lt;P&gt;Best,&lt;/P&gt;

&lt;P&gt;Mark&lt;/P&gt;</description>
      <pubDate>Fri, 05 Feb 2016 02:42:40 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107416#M5297</guid>
      <dc:creator>Mark_L_Intel</dc:creator>
      <dc:date>2016-02-05T02:42:40Z</dc:date>
    </item>
    <item>
      <title>Thanks for reference links,</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107417#M5298</link>
      <description>&lt;P&gt;Thanks for the reference links, Mark. I'm going to read this documentation.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Feb 2016 02:54:36 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107417#M5298</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-05T02:54:36Z</dc:date>
    </item>
    <item>
      <title>Can you give me an example of</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107418#M5299</link>
      <description>&lt;P&gt;Can you give me an example of using OpenSHMEM along with the MPI library?&lt;/P&gt;</description>
      <pubDate>Wed, 10 Feb 2016 03:20:19 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/Can-you-help-me-to-figure-out-why-this-program-cannot-be/m-p/1107418#M5299</guid>
      <dc:creator>ArthurRatz</dc:creator>
      <dc:date>2016-02-10T03:20:19Z</dc:date>
    </item>
  </channel>
</rss>

