<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: mpirun error in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427569#M9994</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for posting in the Intel forums.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A new window object can be created by calling the MPI.Win.Create() method on a communicator and specifying a memory buffer.&lt;/P&gt;
&lt;P&gt;When a window instance is no longer needed, the MPI.Win.Free() method should be called.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So MPI.Win.Free() should be called on every window object that was created; otherwise the window's resources are still in use when MPI_Finalize runs at interpreter exit. Please refer to the code below:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import numpy as np
from mpi4py import MPI

if __name__ == '__main__':
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    print("rank", rank)

    if rank == 0:
        # Rank 0 exposes a one-integer buffer through the window.
        mem = np.array([0], dtype='i')
        win = MPI.Win.Create(mem, comm=comm)
        # Free the window once it is no longer needed.
        MPI.Win.Free(win)
    else:
        # The other ranks join the collective call without exposing memory.
        win = MPI.Win.Create(None, comm=comm)
        MPI.Win.Free(win)
    print(rank, "end")
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Command to run the code:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;mpirun -n 2 python -u test.py&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observed output:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SantoshY_Intel_0-1667534872452.png" style="width: 534px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/34833i0F59BCDDAAB7BE78/image-dimensions/534x87?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="534" height="87" role="button" title="SantoshY_Intel_0-1667534872452.png" alt="SantoshY_Intel_0-1667534872452.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;
&lt;P&gt;Santosh&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Fri, 04 Nov 2022 04:08:21 GMT</pubDate>
    <dc:creator>SantoshY_Intel</dc:creator>
    <dc:date>2022-11-04T04:08:21Z</dc:date>
    <item>
      <title>mpirun error</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1426821#M9990</link>
      <description>&lt;P&gt;Code:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import mpi4py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
print("rank",rank)


if __name__ == '__main__':
    if rank == 0:
        mem = np.array([0], dtype='i')
        win = MPI.Win.Create(mem, comm=comm)
    else:
        win = MPI.Win.Create(None, comm=comm)
    print(rank, "end")&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;(py3.6.8) ➜ ~ mpirun -n 2 python -u test.py&lt;BR /&gt;rank 0&lt;BR /&gt;rank 1&lt;BR /&gt;0 end&lt;BR /&gt;1 end&lt;BR /&gt;Abort(806449679): Fatal error in internal_Finalize: Other MPI error, error stack:&lt;BR /&gt;internal_Finalize(50)...........: MPI_Finalize failed&lt;BR /&gt;MPII_Finalize(345)..............:&lt;BR /&gt;MPID_Finalize(511)..............:&lt;BR /&gt;MPIDI_OFI_mpi_finalize_hook(895):&lt;BR /&gt;destroy_vni_context(1137).......: OFI domain close failed (ofi_init.c:1137:destroy_vni_context:Device or resource busy)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Why is this happening, and how can I debug it? The error does not occur on another machine.&lt;/P&gt;</description>
      <pubDate>Tue, 01 Nov 2022 17:11:33 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1426821#M9990</guid>
      <dc:creator>GoodLuck</dc:creator>
      <dc:date>2022-11-01T17:11:33Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun error</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427569#M9994</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks for posting in the Intel forums.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;A new window object can be created by calling the MPI.Win.Create() method on a communicator and specifying a memory buffer.&lt;/P&gt;
&lt;P&gt;When a window instance is no longer needed, the MPI.Win.Free() method should be called.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So MPI.Win.Free() should be called on every window object that was created; otherwise the window's resources are still in use when MPI_Finalize runs at interpreter exit. Please refer to the code below:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;import numpy as np
from mpi4py import MPI

if __name__ == '__main__':
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    print("rank", rank)

    if rank == 0:
        # Rank 0 exposes a one-integer buffer through the window.
        mem = np.array([0], dtype='i')
        win = MPI.Win.Create(mem, comm=comm)
        # Free the window once it is no longer needed.
        MPI.Win.Free(win)
    else:
        # The other ranks join the collective call without exposing memory.
        win = MPI.Win.Create(None, comm=comm)
        MPI.Win.Free(win)
    print(rank, "end")
&lt;/LI-CODE&gt;
&lt;P&gt;&lt;STRONG&gt;Command to run the code:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;mpirun -n 2 python -u test.py&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Observed output:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SantoshY_Intel_0-1667534872452.png" style="width: 534px;"&gt;&lt;img src="https://community.intel.com/t5/image/serverpage/image-id/34833i0F59BCDDAAB7BE78/image-dimensions/534x87?v=v2&amp;amp;whitelist-exif-data=Orientation%2CResolution%2COriginalDefaultFinalSize%2CCopyright" width="534" height="87" role="button" title="SantoshY_Intel_0-1667534872452.png" alt="SantoshY_Intel_0-1667534872452.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks &amp;amp; Regards,&lt;/P&gt;
&lt;P&gt;Santosh&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Nov 2022 04:08:21 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427569#M9994</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2022-11-04T04:08:21Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun error</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427573#M9995</link>
      <description>&lt;P&gt;It works. Thank you very much.&lt;/P&gt;</description>
      <pubDate>Fri, 04 Nov 2022 04:14:41 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427573#M9995</guid>
      <dc:creator>GoodLuck</dc:creator>
      <dc:date>2022-11-04T04:14:41Z</dc:date>
    </item>
    <item>
      <title>Re: mpirun error</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427578#M9997</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Glad to know that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Since your issue is resolved, please accept my previous post as a solution; this will help others with similar issues. Thank you!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Best Regards,&lt;/P&gt;
&lt;P&gt;Santosh&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Nov 2022 04:32:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/mpirun-error/m-p/1427578#M9997</guid>
      <dc:creator>SantoshY_Intel</dc:creator>
      <dc:date>2022-11-04T04:32:10Z</dc:date>
    </item>
  </channel>
</rss>

