<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic in Intel® Fortran Compiler: GPU-aware MPI not working with IFX on Intel GPU</title>
    <link>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1733070#M177996</link>
    <description>GPU-aware MPI not working with IFX on Intel GPU</description>
    <pubDate>Fri, 09 Jan 2026 21:41:59 GMT</pubDate>
    <dc:creator>caplanr</dc:creator>
    <dc:date>2026-01-09T21:41:59Z</dc:date>
    <item>
      <title>GPU-aware MPI not working with IFX on Intel GPU</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1733070#M177996</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am testing the code POT3D (&lt;A href="http://github.com/predsci/pot3d" target="_blank" rel="noopener"&gt;github.com/predsci/pot3d&lt;/A&gt;) to see if it can run on an Intel B580 GPU.&amp;nbsp; POT3D is a Fortran code that uses "do concurrent" for offload, along with OpenMP Target directives for data movement.&amp;nbsp; I have previously been successful at running a similar code (&lt;A href="http://github.com/predsci/hipft" target="_blank" rel="noopener"&gt;HipFT&lt;/A&gt;) on a B580.&lt;/P&gt;&lt;P&gt;I am building using the &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;intel_gpu_psi.conf&lt;/FONT&gt; &lt;/STRONG&gt;configuration file that uses &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;mpiifx&lt;/FONT&gt;&lt;/STRONG&gt; with:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;-O3 -xHost -fp-model precise -heap-arrays -fopenmp-target-do-concurrent -fiopenmp&amp;nbsp;&lt;SPAN&gt;-fopenmp-targets=spir64 -fopenmp-do-concurrent-maptype-modifier=present&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I am using IFX version:&amp;nbsp; &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;2025.2.2 20251210&lt;/FONT&gt;&lt;/STRONG&gt; on &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;Ubuntu 24.04.3 LTS&lt;/FONT&gt; &lt;/STRONG&gt;with kernel &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;6.14.0-37-generic&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The code uses GPU-aware MPI calls and passes the device versions of the pointers.&amp;nbsp; An example of this is:&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="fortran"&gt;!$omp target data use_device_addr(a)
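! Inside this target data region, 'a' resolves to its device address, so the
! MPI calls below are handed GPU pointers for the periodic halo exchange;
! this relies on a GPU-aware MPI (e.g. Intel MPI with I_MPI_OFFLOAD=1).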
call MPI_Isend (a(:,:,np-1),lbuf,ntype_real,iproc_pp,tag, comm_all,reqs(1),ierr)
call MPI_Isend (a(:,:, 2),lbuf,ntype_real,iproc_pm,tag, comm_all,reqs(2),ierr)
call MPI_Irecv (a(:,:, 1),lbuf,ntype_real,iproc_pm,tag, comm_all,reqs(3),ierr)
call MPI_Irecv (a(:,:,np),lbuf,ntype_real,iproc_pp,tag, comm_all,reqs(4),ierr)
call MPI_Waitall (4,reqs,MPI_STATUSES_IGNORE,ierr)
!$omp end target data&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;The code compiles fine, but when I try to run it, I get:&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;LI-CODE lang="bash"&gt;===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= RANK 0 PID 366896 RUNNING AT 
= KILLED BY SIGNAL: 11 (Segmentation fault)
===================================================================================&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;If I try to activate GPU-aware MPI with&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;&lt;SPAN&gt;export I_MPI_OFFLOAD=1&lt;/SPAN&gt;&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;the code just hangs.&amp;nbsp; If I press Ctrl-C, I get:&lt;/P&gt;&lt;LI-CODE lang="none"&gt;forrtl: error (69): process interrupted (SIGINT)
Image PC Routine Line Source
libc.so.6 0000763A39845330 Unknown Unknown Unknown
libc.so.6 0000763A3990E80B __sched_yield Unknown Unknown
libze_intel_gpu.s 0000763A35386FB6 Unknown Unknown Unknown
libze_intel_gpu.s 0000763A34F9C927 Unknown Unknown Unknown
libomptarget.so 0000763A3C68B526 Unknown Unknown Unknown
libomptarget.so 0000763A3C6B60BC Unknown Unknown Unknown
libomptarget.so 0000763A3C512BB8 Unknown Unknown Unknown
libomptarget.so 0000763A3C51A465 Unknown Unknown Unknown
libomptarget.so 0000763A3C51E4AB Unknown Unknown Unknown
libomptarget.so 0000763A3C4D5FA7 Unknown Unknown Unknown
libomptarget.so 0000763A3C4EF0F1 Unknown Unknown Unknown
libomptarget.so 0000763A3C4DC9A1 __tgt_target_kern Unknown Unknown
pot3d 0000000000435551 Unknown Unknown Unknown
pot3d 0000000000434685 Unknown Unknown Unknown
pot3d 0000000000430497 Unknown Unknown Unknown
pot3d 00000000004155D6 Unknown Unknown Unknown
pot3d 000000000040D71D Unknown Unknown Unknown
libc.so.6 0000763A3982A1CA Unknown Unknown Unknown
libc.so.6 0000763A3982A28B __libc_start_main Unknown Unknown
pot3d 000000000040D635 Unknown Unknown Unknown&lt;/LI-CODE&gt;&lt;P&gt;&lt;SPAN&gt;One issue I can think of is that I use MPI calls with CPU arrays as well as with GPU arrays, with all the GPU MPI calls using &lt;SPAN&gt;&lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;use_device_addr&lt;/FONT&gt;&lt;/STRONG&gt;.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;SPAN&gt;&lt;SPAN&gt; Could it be that the &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;I_MPI_OFFLOAD&lt;/FONT&gt;&lt;/STRONG&gt; environment variable is an "all or nothing" setting, so that either my CPU or my GPU MPI calls will be wrong?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;Note also that if I swap in the source file from the subfolder &lt;STRONG&gt;&lt;FONT face="courier new,courier"&gt;src/no_gpu_mpi/&lt;/FONT&gt;&lt;/STRONG&gt;, which manually copies the GPU data back and forth around the MPI calls, then the code runs correctly (but is slower than it should be due to the manual transfers).&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;This means the issue is with the GPU arrays in the MPI calls.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;Also note that even though I am only running on 1 GPU, the MPI calls are still used, because the periodic domain seam is handled with MPI, as are some other operations.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;Thanks!&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;&amp;nbsp;- Ron Caplan&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 21:41:59 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1733070#M177996</guid>
      <dc:creator>caplanr</dc:creator>
      <dc:date>2026-01-09T21:41:59Z</dc:date>
    </item>
    <item>
      <title>Re: GPU-aware MPI not working with IFX on Intel GPU</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1738449#M178394</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It has been a while since this post.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The issue remains.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any idea on how to proceed?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;- Ron&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 24 Feb 2026 21:44:54 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1738449#M178394</guid>
      <dc:creator>caplanr</dc:creator>
      <dc:date>2026-02-24T21:44:54Z</dc:date>
    </item>
    <item>
      <title>Re: GPU-aware MPI not working with IFX on Intel GPU</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1739946#M178471</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is some more information on reproducing this problem:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The code can be obtained at:&lt;/P&gt;&lt;P&gt;github.com/predsci/pot3d&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Install and activate the Intel HPC SDK&amp;nbsp;2025.2.2 (2025.3 has a bug; that is a separate forum post).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You then need an HDF5 library compiled with the Intel compiler (a version before 2.0.0; version 1.14.3 is known to work).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;To build for the Intel GPU, modify the file "conf/&lt;SPAN&gt;intel_gpu_psi.conf" to point to your installation of HDF5.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Then run:&lt;/P&gt;&lt;P&gt;./build.sh conf/&lt;SPAN&gt;intel_gpu_psi.conf&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;You can then go to the "examples/potential_field_source_surface" folder and run the code with:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;mpiexec -np 1 ../../bin/pot3d&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;For me, the run begins and then segfaults with:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;### COMMENT from POT3D:&lt;BR /&gt;### Starting PCG solve.&lt;/P&gt;&lt;P&gt;===================================================================================&lt;BR /&gt;= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES&lt;BR /&gt;= RANK 0 PID 321062 RUNNING AT matana&lt;BR /&gt;= KILLED BY SIGNAL: 11 (Segmentation fault)&lt;BR /&gt;===================================================================================&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;- Ron&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 06 Mar 2026 21:00:42 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1739946#M178471</guid>
      <dc:creator>caplanr</dc:creator>
      <dc:date>2026-03-06T21:00:42Z</dc:date>
    </item>
    <item>
      <title>Re: GPU-aware MPI not working with IFX on Intel GPU</title>
      <link>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1745935#M178747</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Update:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With the following ENV variables, the code runs correctly:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="bash"&gt;export ZE_FLAT_DEVICE_HIERARCHY=COMPOSITE
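# Notes (as I understand these settings): COMPOSITE exposes each card as a
# single Level Zero root device with its tiles as subdevices; I_MPI_OFFLOAD=1
# enables Intel MPI's handling of device (GPU) buffers, and the remaining
# I_MPI_OFFLOAD_* variables tune its GPU topology detection and devices per rank.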
export I_MPI_OFFLOAD=1
export I_MPI_OFFLOAD_SYMMETRIC=1
export I_MPI_OFFLOAD_TOPOLIB=none
export I_MPI_OFFLOAD_DOMAIN_SIZE=1
#export LIBOMPTARGET_DEVICES=SUBDEVICE&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(The last, commented-out variable is needed for Max 1550 GPUs, but on my single B580 it keeps the code from working.)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This works with the 2025.2 compiler and the new 2026.0 compiler.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;- Ron&lt;/P&gt;</description>
      <pubDate>Mon, 27 Apr 2026 17:04:37 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Fortran-Compiler/GPU-aware-MPI-not-working-with-IFX-on-Intel-GPU/m-p/1745935#M178747</guid>
      <dc:creator>caplanr</dc:creator>
      <dc:date>2026-04-27T17:04:37Z</dc:date>
    </item>
  </channel>
</rss>

