<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider in Intel® MPI Library</title>
    <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1549482#M11198</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have informed the concerned team about your issue, and we are working on it internally. We will get back to you soon.&lt;/P&gt;&lt;P&gt;Thanks and regards,&lt;/P&gt;&lt;P&gt;Aishwarya&lt;/P&gt;</description>
    <pubDate>Fri, 01 Dec 2023 06:49:12 GMT</pubDate>
    <dc:creator>AishwaryaCV_Intel</dc:creator>
    <dc:date>2023-12-01T06:49:12Z</dc:date>
    <item>
      <title>performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1544453#M11142</link>
      <description>&lt;P&gt;I discovered a performance issue with RMA, described as follows:&lt;/P&gt;&lt;P&gt;When my window size exceeds 2 GB, the performance of MPI_PUT and MPI_GET becomes very low when using provider=mlx (compared to verbs or psm3).&lt;BR /&gt;My test code is listed below:&lt;BR /&gt;module mpi_data&lt;BR /&gt;integer rank,np,ierr,winData&lt;BR /&gt;end module&lt;/P&gt;&lt;P&gt;program main&lt;BR /&gt;use mpi_data&lt;BR /&gt;implicit none&lt;BR /&gt;integer i&lt;BR /&gt;include 'mpif.h'&lt;BR /&gt;call init_mpi(ierr)&lt;BR /&gt;call mpi_comm_rank(mpi_comm_world,rank,ierr)&lt;BR /&gt;call mpi_comm_size(mpi_comm_world,np,ierr)&lt;BR /&gt;call mpi_main&lt;BR /&gt;call finish_mpi(ierr)&lt;BR /&gt;stop&lt;BR /&gt;end&lt;BR /&gt;subroutine mpi_main()&lt;BR /&gt;use mpi_data&lt;BR /&gt;implicit none&lt;BR /&gt;include 'mpif.h'&lt;BR /&gt;complex,allocatable ::cdata(:),data_tmp(:)&lt;BR /&gt;integer*8 n8,s8,d8&lt;BR /&gt;integer repeat,i&lt;BR /&gt;n8 = 1024*1024*1024*0.2&lt;BR /&gt;repeat = 100000&lt;BR /&gt;d8 = 0&lt;BR /&gt;s8 = 1000&lt;BR /&gt;if(rank==2)then&lt;BR /&gt;allocate(cdata(n8),data_tmp(s8),STAT=ierr)&lt;BR /&gt;cdata(1:n8)=0.0&lt;BR /&gt;call MPI_Win_create(cdata,int8(8*n8),8,MPI_INFO_NULL,mpi_comm_world,winData,ierr)&lt;BR /&gt;else&lt;BR /&gt;allocate(cdata(1),data_tmp(s8),STAT=ierr)&lt;BR /&gt;call MPI_Win_create(cdata,int8(8*1),8,MPI_INFO_NULL,mpi_comm_world,winData,ierr)&lt;BR /&gt;end if&lt;/P&gt;&lt;P&gt;call MPI_Win_fence( 0 , winData,ierr)&lt;BR /&gt;do i=1,repeat&lt;BR /&gt;write(*,*)i,repeat&lt;BR /&gt;if(mod(i,3)==0)call data_gpa(data_tmp,d8,s8,0)&lt;BR /&gt;if(mod(i,3)==1)call data_gpa(data_tmp,d8,s8,1)&lt;BR /&gt;if(mod(i,3)==2)call data_gpa(data_tmp,d8,s8,2)&lt;BR /&gt;end do&lt;BR /&gt;call MPI_Win_fence( 0 , winData,ierr)&lt;/P&gt;&lt;P&gt;deallocate(cdata,data_tmp)&lt;BR /&gt;call MPI_Win_free(winData,ierr)&lt;BR /&gt;end subroutine&lt;/P&gt;&lt;P&gt;subroutine 
data_gpa(data_tmp,d8,s8,type0)&lt;BR /&gt;use mpi_data&lt;BR /&gt;implicit none&lt;BR /&gt;include 'mpif.h'&lt;BR /&gt;integer level,type0&lt;BR /&gt;integer*8 d8,s8,i8&lt;BR /&gt;complex data_tmp(*)&lt;BR /&gt;if(rank.ne.2)then&lt;BR /&gt;if(type0==0)call MPI_Win_lock(MPI_LOCK_SHARED,2,0,winData,ierr)&lt;BR /&gt;if(type0==1)call MPI_Win_lock(MPI_LOCK_EXCLUSIVE,2,0,winData,ierr)&lt;BR /&gt;if(type0==2)call MPI_Win_lock(MPI_LOCK_EXCLUSIVE,2,0,winData,ierr)&lt;BR /&gt;if(type0==0)call mpi_get(data_tmp,s8,MPI_COMPLEX,2,d8,s8,MPI_COMPLEX,winData,ierr)&lt;BR /&gt;if(type0==1)call mpi_put(data_tmp,s8,MPI_COMPLEX,2,d8,s8,MPI_COMPLEX,winData,ierr)&lt;BR /&gt;if(type0==2)call mpi_accumulate(data_tmp,s8,MPI_COMPLEX,2,d8,s8,MPI_COMPLEX,mpi_sum,winData,ierr)&lt;BR /&gt;call MPI_Win_unlock(2,winData,ierr)&lt;BR /&gt;endif&lt;BR /&gt;end subroutine&lt;/P&gt;</description>
      <pubDate>Thu, 16 Nov 2023 04:35:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1544453#M11142</guid>
      <dc:creator>Csea1122</dc:creator>
      <dc:date>2023-11-16T04:35:20Z</dc:date>
    </item>
    <item>
      <title>Re: performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1545476#M11159</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thank you for posting in Intel Community.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could you please provide the following details, so that we can reproduce the issue at our end:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;OS and Hardware details.&lt;/LI&gt;
&lt;LI&gt;CPU details.&lt;/LI&gt;
&lt;LI&gt;Intel MPI version.&lt;/LI&gt;
&lt;LI&gt;Compiler used to run the test code.&lt;/LI&gt;
&lt;LI&gt;Steps followed to run and execute the test code.&lt;/LI&gt;
&lt;LI&gt;The method you used to compare the performance of mlx with the other providers.&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks and regards,&lt;/P&gt;
&lt;P&gt;Aishwarya&lt;/P&gt;</description>
      <pubDate>Mon, 20 Nov 2023 06:38:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1545476#M11159</guid>
      <dc:creator>AishwaryaCV_Intel</dc:creator>
      <dc:date>2023-11-20T06:38:05Z</dc:date>
    </item>
    <item>
      <title>Re: performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1545529#M11160</link>
      <description>&lt;P&gt;&lt;SPAN&gt;OS: CentOS 7.6&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;#run.sh&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;export UCX_NET_DEVICES=mlx5_0:1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;export I_MPI_FABRICS=shm:ofi&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;export FI_PROVIDER=verbs&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;mpirun -np 120 -machinefile ./host9_11 ./main&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;host9_11:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;comput9&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;comput10&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;comput11&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;compiler: intel-2021.3.0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;mpi: intelmpi-2021.10.0 or intelmpi-2021.3.0&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;compiler options:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;ifort_flags=-g -Wall -O3 -fp-model precise -qopenmp -c&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;lscpu:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Architecture: x86_64&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU op-mode(s): 32-bit, 64-bit&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Byte Order: Little Endian&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU(s): 64&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;On-line CPU(s) list: 0-63&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thread(s) per core: 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Core(s) per socket: 32&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Socket(s): 2&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node(s): 8&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Vendor ID: AuthenticAMD&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU family: 25&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Model: 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Model name: AMD EPYC 7543 32-Core Processor&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Stepping: 1&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU MHz: 2800.000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU max MHz: 2800.0000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;CPU min MHz: 1500.0000&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;BogoMIPS: 5600.05&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Virtualization: AMD-V&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;L1d cache: 32K&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;L1i cache: 32K&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;L2 cache: 512K&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;L3 cache: 32768K&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node0 CPU(s): 0-7&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node1 CPU(s): 8-15&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node2 CPU(s): 16-23&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node3 CPU(s): 24-31&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node4 CPU(s): 32-39&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node5 CPU(s): 40-47&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node6 CPU(s): 48-55&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;NUMA node7 CPU(s): 56-63&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 20 Nov 2023 10:39:34 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1545529#M11160</guid>
      <dc:creator>Csea1122</dc:creator>
      <dc:date>2023-11-20T10:39:34Z</dc:date>
    </item>
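The run.sh quoted above pins the UCX device and selects the libfabric provider through environment variables; per the thread, the attached run_with_mlx / run_with_verbs / run_with_psm3 scripts differ in the provider they select. As a minimal sketch (the helper name `select_provider` is hypothetical, not part of the thread), the provider-specific environment can be emitted like this, assuming the device name mlx5_0:1 and the shm:ofi fabric from the post:

```shell
# Hypothetical helper: print the environment used to test one libfabric
# provider (mlx, verbs, or psm3), mirroring the run.sh quoted in the post.
select_provider() {
    provider="${1:?usage: select_provider mlx|verbs|psm3}"
    case "$provider" in
        mlx|verbs|psm3) ;;
        *) echo "unknown provider: $provider" >&2; return 1 ;;
    esac
    # Device and fabric values below are taken from the post's run.sh.
    printf 'export UCX_NET_DEVICES=mlx5_0:1\n'
    printf 'export I_MPI_FABRICS=shm:ofi\n'
    printf 'export FI_PROVIDER=%s\n' "$provider"
}
```

Keeping UCX_NET_DEVICES and I_MPI_FABRICS fixed while switching only FI_PROVIDER between runs keeps the three measurements comparable.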
    <item>
      <title>Re: performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1546818#M11183</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have used the code you provided and added MPI_Wtime() to measure the timing for comparing performance between different providers. Please find the code in the attached zip file.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We have compiled the code with Intel MPI version 2021.10 as follows:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;mpiifort -g -Wall -O3 -fp-model precise -qopenmp mpi_putget.f90 -o putget_new.out
bash run1.sh&lt;/LI-CODE&gt;
&lt;P&gt;Could you please let us know if you also followed the same steps and method to compare performance between the providers? If not, please let us know the method you used for comparing the performance.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Thanks and regards,&lt;/P&gt;
&lt;P&gt;Aishwarya&lt;/P&gt;
</description>
      <pubDate>Thu, 23 Nov 2023 11:10:44 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1546818#M11183</guid>
      <dc:creator>AishwaryaCV_Intel</dc:creator>
      <dc:date>2023-11-23T11:10:44Z</dc:date>
    </item>
    <item>
      <title>Re: performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1547040#M11185</link>
      <description>&lt;P&gt;I used the three scripts (&lt;SPAN&gt;run_with_mlx.rar / run_with_verbs.rar / run_with_psm3.rar&lt;/SPAN&gt;) in the attachment for the performance comparison. With large process counts (&amp;gt;=100 or &amp;gt;=300) there is a huge performance difference between the three scripts.&lt;/P&gt;&lt;P&gt;The hosts are listed below.&lt;/P&gt;</description>
      <pubDate>Fri, 24 Nov 2023 05:36:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1547040#M11185</guid>
      <dc:creator>Csea1122</dc:creator>
      <dc:date>2023-11-24T05:36:51Z</dc:date>
    </item>
    <item>
      <title>Re:performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1549482#M11198</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have informed the concerned team about your issue, and we are working on it internally. We will get back to you soon.&lt;/P&gt;&lt;P&gt;Thanks and regards,&lt;/P&gt;&lt;P&gt;Aishwarya&lt;/P&gt;</description>
      <pubDate>Fri, 01 Dec 2023 06:49:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1549482#M11198</guid>
      <dc:creator>AishwaryaCV_Intel</dc:creator>
      <dc:date>2023-12-01T06:49:12Z</dc:date>
    </item>
    <item>
      <title>Re: Re:performance drop in RMA (MPI_PUT&amp;MPI_GET) with mlx provider</title>
      <link>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1602301#M11739</link>
      <description>&lt;P&gt;Has your team resolved the problem yet?&lt;/P&gt;</description>
      <pubDate>Thu, 30 May 2024 07:52:26 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-MPI-Library/performance-drop-in-RMA-MPI-PUT-amp-MPI-GET-with-mlx-provider/m-p/1602301#M11739</guid>
      <dc:creator>Csea1122</dc:creator>
      <dc:date>2024-05-30T07:52:26Z</dc:date>
    </item>
  </channel>
</rss>

