
CXL Adoption Ramps with New Product Announcements from Intel and Others

Thomas_Schulte
Employee

Enthusiasm over Artificial Intelligence (AI) related products, such as the generative AI software ChatGPT, is igniting another wave of demand¹ for high-performance computing and driving a return to typical growth patterns² in the data center infrastructure, cloud computing, and high-performance computing (HPC) segments.

One of the most talked about technologies associated with this trend is the rapid adoption of Compute Express Link (CXL) products and solutions. A sampling of 2023 CXL announcements includes:

  • CXL memory products from Samsung and Micron
  • CXL memory controller from Astera Labs
  • CXL switch from Xconn
  • CXL test validation solution from Teledyne LeCroy
  • CXL collaboration software for multi-server architectures from MemVerge
  • 4th and 5th Gen Intel® Xeon® Scalable processors with support for CXL 1.1
  • Release of CXL specification revision 3.1
  • Introduction of many other CXL 1.1-compliant products

Intel FPGAs also contributed to CXL momentum with the announcement of two industry firsts:

  • The first FPGA to achieve CXL 1.1 compliance for all device types: Type 1, Type 2, and Type 3.
  • The world’s first demonstration of FPGA acceleration of a real-life application using a CXL Type 2 interconnect to the recently launched 5th Gen Intel® Xeon® Scalable processor.

The new Intel CXL Type 2 demonstration was implemented by a team at the University of Illinois Urbana-Champaign. A Redis database application was accelerated by off-loading the Linux kernel same-page merging (KSM) function to an Intel Agilex® 7 FPGA connected over a CXL Type 2 interconnect to a 5th Gen Intel® Xeon® Scalable processor. Running the Yahoo! Cloud Serving Benchmark yielded impressive performance improvements:

  • Up to 76% fewer CPU cycles running the DB application and KSM function.
  • Up to 90% tail latency reduction for memory reads/writes.
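To see why KSM is a good off-load candidate, it helps to look at what the function does: it scans memory pages, finds pages with identical contents, and merges them so that duplicates share a single backing copy. The sketch below is a minimal, illustrative Python version of that idea only; the actual Linux KSM implementation (page scanning, copy-on-write remapping) and the FPGA off-load design in the demonstration are far more involved and are not shown here. The content-hash approach is an assumption made for brevity.

```python
import hashlib

PAGE_SIZE = 4096  # bytes per page, matching the common x86-64 page size

def merge_same_pages(pages):
    """Illustrative same-page merging: deduplicate identical pages.

    `pages` is a list of `bytes` objects, each representing one page of
    memory. Returns (unique_pages, mapping), where mapping[i] is the
    index into unique_pages that page i now shares. This is a toy model
    of what Linux KSM achieves, not the kernel's actual algorithm.
    """
    seen = {}           # content fingerprint -> index in unique_pages
    unique_pages = []
    mapping = []
    for page in pages:
        digest = hashlib.sha256(page).digest()  # fingerprint of page contents
        if digest not in seen:
            seen[digest] = len(unique_pages)
            unique_pages.append(page)
        mapping.append(seen[digest])
    return unique_pages, mapping

# Example: three pages, two of them identical (all zeros)
pages = [bytes(PAGE_SIZE), b"\x01" * PAGE_SIZE, bytes(PAGE_SIZE)]
unique, mapping = merge_same_pages(pages)
# Pages 0 and 2 now share one backing copy; only two unique pages remain
```

The scan-and-compare work dominates KSM's CPU cost, which is why moving it off the host CPU, with the FPGA still able to access host memory coherently via the CXL Type 2 interconnect, can free up the cycles reported in the benchmark above.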

These performance and power reduction results translate into lower TCO and improved sustainability for system providers, and faster job turnaround for end users. This matters because the surge in adoption of AI applications has required solution providers to add AI accelerators (GPU, CPU, ASIC/ASSP, FPGA) to their system infrastructure to keep up with demand. Off-loading compute-intensive workloads like AI to FPGA accelerators has consistently delivered TCO and power reduction benefits, and this benchmark shows that CXL extends those gains with even greater performance and throughput. Consequently, we expect to see many more CXL-based milestones and solutions announced in 2024 and 2025.

Intel’s CXL ingredients can be used to design your next-generation high-performance accelerators for a variety of end applications. For more information, please see these references:

Contact your local Intel salesperson (ask for the FPGA specialist) for a more detailed technical discussion, or to get in touch with the University of Illinois team responsible for the FPGA-based CXL Type 2 demonstration.

  1. 71% of enterprises are experimenting with real use cases for generative AI (A report by Forrester Research, Inc.).
  2. In 2023, global tech spending is expected to reach $4.4 trillion, growing by 4.7% compared to 2022 (a report by Forrester Research, Inc.).

Intel does not control or audit third-party data. You should consult other sources to determine accuracy. 

© Intel Corporation. Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries.