<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Question About High Throughput Networking in Intel® Xeon® Processor and Server Products</title>
    <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1744587#M27295</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am comparing two host memory layouts for receiving packet data from an FPGA over PCIe, focusing on the CPU-side processing cost. Throughput is critical for my use case, and I want to minimize packet loss, ideally down to zero.&lt;/P&gt;&lt;P&gt;In the first approach (chunk-based / stream-like), data is written to host memory in fixed 64-byte blocks. Each block carries its own metadata, such as SOP (start of packet), EOP (end of packet), and possibly a sequence number, so a single packet is split across multiple chunks. On the CPU side, processing involves reading each chunk sequentially, parsing the metadata of every block, detecting packet boundaries from the SOP/EOP flags, and reassembling the full packet from its chunks before further processing. The chunks may also be copied to another thread that performs the reassembly.&lt;/P&gt;&lt;P&gt;In the second approach, host memory is divided into two sections. One holds the raw packets without any processing; the other is a ring buffer of descriptors holding the start address of each packet in the first section. This adds an extra memory (likely L3) lookup to fetch the addresses of every 8 packets (8 addresses fit in one 64-byte cache line). It also adds a PCIe memory write for every 8 packets to publish those addresses, plus another PCIe memory write to update the ring buffer's producer index.&lt;/P&gt;&lt;P&gt;I would like your feedback on these two approaches. I am having trouble comparing their overheads from the CPU's perspective while optimizing for throughput.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Alperen Burkay Sevim&lt;/P&gt;</description>
    <pubDate>Thu, 16 Apr 2026 11:08:12 GMT</pubDate>
    <dc:creator>Alperen_A</dc:creator>
    <dc:date>2026-04-16T11:08:12Z</dc:date>
    <item>
      <title>Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1744587#M27295</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am comparing two host memory layouts for receiving packet data from an FPGA over PCIe, focusing on the CPU-side processing cost. Throughput is critical for my use case, and I want to minimize packet loss, ideally down to zero.&lt;/P&gt;&lt;P&gt;In the first approach (chunk-based / stream-like), data is written to host memory in fixed 64-byte blocks. Each block carries its own metadata, such as SOP (start of packet), EOP (end of packet), and possibly a sequence number, so a single packet is split across multiple chunks. On the CPU side, processing involves reading each chunk sequentially, parsing the metadata of every block, detecting packet boundaries from the SOP/EOP flags, and reassembling the full packet from its chunks before further processing. The chunks may also be copied to another thread that performs the reassembly.&lt;/P&gt;&lt;P&gt;In the second approach, host memory is divided into two sections. One holds the raw packets without any processing; the other is a ring buffer of descriptors holding the start address of each packet in the first section. This adds an extra memory (likely L3) lookup to fetch the addresses of every 8 packets (8 addresses fit in one 64-byte cache line). It also adds a PCIe memory write for every 8 packets to publish those addresses, plus another PCIe memory write to update the ring buffer's producer index.&lt;/P&gt;&lt;P&gt;I would like your feedback on these two approaches. I am having trouble comparing their overheads from the CPU's perspective while optimizing for throughput.&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Alperen Burkay Sevim&lt;/P&gt;</description>
      <pubDate>Thu, 16 Apr 2026 11:08:12 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1744587#M27295</guid>
      <dc:creator>Alperen_A</dc:creator>
      <dc:date>2026-04-16T11:08:12Z</dc:date>
    </item>
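    <!--
    The per-chunk work in the first approach can be sketched as below. The exact 64-byte chunk format (a 4-byte flag word carrying SOP/EOP bits followed by 60 payload bytes) is an assumption for illustration, not the actual FPGA layout.

```c
/* Sketch of the chunk-based (first-approach) consumer. The chunk
 * layout is assumed: 4 bytes of flags, then 60 bytes of payload. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE    64
#define CHUNK_HDR     4
#define CHUNK_PAYLOAD (CHUNK_SIZE - CHUNK_HDR)
#define FLAG_SOP      0x1u               /* start of packet */
#define FLAG_EOP      0x2u               /* end of packet   */

struct chunk {
    uint32_t flags;                      /* bit 0 = SOP, bit 1 = EOP */
    uint8_t  payload[CHUNK_PAYLOAD];
};

/* Feed one chunk into the reassembly buffer pkt at running offset *off.
 * Returns the completed packet length on an EOP chunk, 0 otherwise. */
static size_t feed_chunk(const struct chunk *c, uint8_t *pkt, size_t *off)
{
    if (c->flags & FLAG_SOP)
        *off = 0;                        /* a new packet begins here */
    memcpy(pkt + *off, c->payload, CHUNK_PAYLOAD);
    *off += CHUNK_PAYLOAD;
    if (c->flags & FLAG_EOP)
        return *off;                     /* packet complete */
    return 0;
}
```

    Note the metadata parse, branch, and copy on every 64-byte line of the stream: that is the per-block CPU cost the question weighs against the second approach.
    -->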
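    <!--
    The descriptor-ring consumer of the second approach might look like this minimal sketch. The ring size, field names, and polling style are assumptions; the point it illustrates is the amortization: one read of the producer index plus one cache line of addresses covers a whole batch of packets.

```c
/* Sketch of the ring-buffer (second-approach) consumer: the FPGA
 * DMA-writes packet start addresses into the ring, 8 per cache line,
 * then advances the producer index with one more PCIe write. */
#include <assert.h>
#include <stdint.h>

#define RING_SLOTS 1024u                 /* power of two for cheap wrap */

struct pkt_ring {
    volatile uint64_t addr[RING_SLOTS];  /* packet start addresses (DMA-written) */
    volatile uint32_t producer;          /* free-running, advanced by the FPGA */
    uint32_t consumer;                   /* free-running, advanced by the CPU  */
};

/* Drain everything the producer has published; returns packets consumed. */
static uint32_t drain(struct pkt_ring *r, void (*handle)(uint64_t addr))
{
    uint32_t n = 0;
    uint32_t prod = r->producer;         /* one read covers the whole batch */
    while (r->consumer != prod) {
        handle(r->addr[r->consumer & (RING_SLOTS - 1u)]);
        r->consumer += 1;
        n += 1;
    }
    return n;
}
```

    A real implementation on x86 would add an acquire fence after reading the producer index so the address reads are not reordered ahead of it.
    -->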
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1744658#M27297</link>
      <description>&lt;P&gt;Hi &lt;A href="https://isvc.lightning.force.com/lightning/r/LiSFIntegration__Li_Community_User__c/a4lVz00000X58LKIAZ/view" rel="noopener noreferrer" target="_blank" style="font-size: 14px;"&gt;&lt;U&gt;Alperen_A&lt;/U&gt;&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Greetings of the day.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Hope you are doing well.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Could you please share more details about the product you are using, so that we can proceed accordingly?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Mohammed Ali CM&lt;/P&gt;&lt;P&gt;Intel Customer Support Technician&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 16 Apr 2026 20:59:47 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1744658#M27297</guid>
      <dc:creator>MACM</dc:creator>
      <dc:date>2026-04-16T20:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745118#M27305</link>
      <description>&lt;P&gt;Hi &lt;A href="https://isvc.lightning.force.com/lightning/r/LiSFIntegration__Li_Community_User__c/a4lVz00000X58LKIAZ/view" rel="noopener noreferrer" target="_blank" style="font-size: 14px;"&gt;&lt;U&gt;Alperen_A&lt;/U&gt;&lt;/A&gt;,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Greetings of the day.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Hope you are doing well.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;This is a follow-up: could you please share more details about the product you are using, so that we can proceed accordingly?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Mohammed Ali CM&lt;/P&gt;&lt;P&gt;Intel Customer Support Technician&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Apr 2026 23:43:05 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745118#M27305</guid>
      <dc:creator>MACM</dc:creator>
      <dc:date>2026-04-20T23:43:05Z</dc:date>
    </item>
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745205#M27307</link>
      <description>&lt;P&gt;Dear Mohammed Ali,&lt;/P&gt;&lt;P&gt;Thank you for your response.&lt;/P&gt;&lt;P&gt;We are planning to use an Intel Xeon processor as the host CPU in our system.&lt;/P&gt;&lt;P&gt;Best regards,&lt;BR /&gt;Alperen Burkay Sevim&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 13:32:51 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745205#M27307</guid>
      <dc:creator>Alperen_A</dc:creator>
      <dc:date>2026-04-21T13:32:51Z</dc:date>
    </item>
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745214#M27308</link>
      <description>&lt;P&gt;Hi Alperen_A,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for your response. Kindly confirm the complete model name of the Intel Xeon processor and the full system model so that we can check the details further.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Best Regards,&lt;/P&gt;&lt;P&gt;Sreelakshmi&lt;/P&gt;&lt;P&gt;Intel Customer Support Technician&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Apr 2026 15:02:20 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745214#M27308</guid>
      <dc:creator>Sreelakshmi1</dc:creator>
      <dc:date>2026-04-21T15:02:20Z</dc:date>
    </item>
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745319#M27314</link>
      <description>&lt;P&gt;Hi Sreelakshmi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We are currently using an Intel Xeon Silver 4510 as the host CPU in our system.&lt;/P&gt;&lt;P&gt;Packet data is written directly to host memory via DMA and stored in a PCAP-compatible format. In addition, each packet carries a high-precision timestamp generated by a White Rabbit (WR) synchronization system.&lt;/P&gt;&lt;P&gt;The CPU is responsible for handling continuous high-bandwidth DMA traffic, as well as parsing and processing the PCAP-formatted packet data together with the associated WR timestamps.&lt;/P&gt;&lt;P&gt;Best regards,&lt;BR /&gt;Alperen Burkay Sevim&lt;/P&gt;</description>
      <pubDate>Wed, 22 Apr 2026 07:27:04 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745319#M27314</guid>
      <dc:creator>Alperen_A</dc:creator>
      <dc:date>2026-04-22T07:27:04Z</dc:date>
    </item>
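    <!--
    Walking PCAP-formatted records in the DMA buffer can be sketched as below. The 16-byte per-record header is the standard pcap layout (seconds, fractional timestamp, captured length, original length); whether the buffer also carries the 24-byte pcap global header, and where the White Rabbit timestamp is placed, are assumptions about this particular design.

```c
/* Sketch: count complete pcap records in a DMA buffer of given length. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct pcap_rec_hdr {                    /* standard pcap record header, 16 bytes */
    uint32_t ts_sec;                     /* seconds since the epoch */
    uint32_t ts_frac;                    /* micro or nanoseconds, per file magic */
    uint32_t incl_len;                   /* bytes captured in this record */
    uint32_t orig_len;                   /* original packet length on the wire */
};

/* Walk records in [buf, buf + len); returns how many complete records fit. */
static size_t count_records(const uint8_t *buf, size_t len)
{
    size_t off = 0, n = 0;
    while (off + sizeof(struct pcap_rec_hdr) <= len) {
        struct pcap_rec_hdr h;
        memcpy(&h, buf + off, sizeof h); /* safe unaligned read */
        if (off + sizeof h + h.incl_len > len)
            break;                       /* truncated trailing record */
        off += sizeof h + h.incl_len;
        n += 1;
    }
    return n;
}
```

    Since White Rabbit provides sub-nanosecond synchronization, the nanosecond-resolution pcap variant (magic 0xa1b23c4d, nanoseconds in the second timestamp field) would preserve more of that precision than classic microsecond pcap.
    -->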
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745334#M27315</link>
      <description>&lt;P&gt;Hi Alperen_A,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Greetings of the day.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for providing the details. We are currently reviewing the case and will get back to you with an update as soon as possible.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Pujeeth_Intel&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 22 Apr 2026 09:39:52 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745334#M27315</guid>
      <dc:creator>pujeeth</dc:creator>
      <dc:date>2026-04-22T09:39:52Z</dc:date>
    </item>
    <item>
      <title>Re: Question About High Throughput Networking</title>
      <link>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745395#M27316</link>
      <description>&lt;P&gt;Hello Alperen_A,&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Greetings!&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Thank you for your patience. We have sent an email to you. Kindly review it.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;Regards,&amp;nbsp;&lt;/P&gt;&lt;P&gt;Dinesh&lt;/P&gt;&lt;P&gt;Intel Customer Support Technician&lt;/P&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 22 Apr 2026 15:50:10 GMT</pubDate>
      <guid>https://community.intel.com/t5/Intel-Xeon-Processor-and-Server/Question-About-High-Throughput-Networking/m-p/1745395#M27316</guid>
      <dc:creator>Dineshbabu</dc:creator>
      <dc:date>2026-04-22T15:50:10Z</dc:date>
    </item>
  </channel>
</rss>

