
Five Reasons for Choosing Intel® Xeon® 6 Processors to Drive AI Success

MilanMehta
Employee

The world of artificial intelligence is growing faster than ever. Both generative AI and non-generative AI workloads are becoming more complex by the day, and end users constantly demand greater performance alongside improved operational efficiency.

While GPUs and AI accelerator products will always play an important role in AI use cases, the host CPU remains critical in modern AI-accelerated systems. Intel® Xeon® processors are best positioned to support critical AI workloads as the host CPU.

How Do We Know?

NVIDIA chose Intel Xeon processors as the host CPU for its DGX H100 and B200 systems. Xeon processors were selected for their impressive single-threaded performance, among other important product features. The new Intel Xeon 6 platform positions Xeon processors even better, with new and improved features designed specifically to support AI workloads in AI-accelerated systems.

Why Do You Need an AI-Accelerated System?

As predictive AI, generative AI (GenAI), and high-performance computing (HPC) workloads grow in complexity, their performance requirements grow. To achieve the compute performance required for the widest set of AI workloads, the optimal solution is an AI-accelerated system with the high-powered muscle of AI accelerators paired with the best host CPU.

Intel Xeon 6 processors with Performance-cores (P-cores) are ideal as host CPUs. Serving as the brain of an AI-accelerated system, the host CPU handles a wide variety of management, optimization, and pre-processing tasks and offloads compute-intensive work to AI accelerators to maximize system performance and efficiency. GPUs and Intel® Gaudi® AI accelerators provide the system’s high-powered muscle. These discrete AI accelerators dedicate their parallel-processing capabilities to large language model (LLM) training for GenAI and to model training for predictive AI.

Why Choose Intel® Xeon® 6 Processors as Host CPU?

Intel Xeon processors are the host CPUs of choice for the world’s most powerful AI accelerator platforms and the most widely used host processors in these systems.(1)

Here are the top five reasons why Intel Xeon processors are the clear choice for the host CPU in AI-accelerated systems:

  1. Superior I/O Performance
    Intel Xeon 6 processors deliver higher input/output (I/O) bandwidth, with up to 20 percent more PCIe lanes than the prior generation, accelerating data offloads and elevating operational efficiency.
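As a rough illustration of what more PCIe lanes buy, the sketch below computes the theoretical one-direction payload bandwidth of a PCIe 5.0 x16 link. The signaling rate and encoding overhead are the standard PCIe 5.0 figures, not Intel-specific claims:

```python
# Back-of-the-envelope PCIe 5.0 bandwidth estimate.
# PCIe 5.0 signals at 32 GT/s per lane with 128b/130b encoding,
# so each lane carries 32e9 * (128/130) payload bits per second.

def pcie5_bandwidth_gbps(lanes: int) -> float:
    """Theoretical one-direction payload bandwidth in GB/s."""
    gt_per_s = 32e9              # 32 GT/s per lane (PCIe 5.0)
    encoding = 128 / 130         # 128b/130b line-code efficiency
    bits_per_s = gt_per_s * encoding * lanes
    return bits_per_s / 8 / 1e9  # bits -> bytes -> GB

print(f"x16 link: ~{pcie5_bandwidth_gbps(16):.0f} GB/s per direction")
```

Real throughput is lower once protocol overhead is accounted for, but the scaling is linear in lane count, which is why extra lanes translate directly into faster CPU-to-accelerator offloads.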

  2. Higher Core Counts and Single-Threaded Performance
    With up to 128 P-cores per CPU, Intel Xeon 6 delivers twice as many cores per socket as the previous generation. Higher core counts and stronger single-threaded performance translate into faster data feeds for GPUs and accelerators, which helps shorten models’ time-to-train. High max turbo frequencies further boost single-threaded CPU performance.
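The link between host core count and accelerator feed rate can be sketched as a parallel pre-processing pipeline. The transform below is a stand-in for real work such as tokenization or image decoding, and the batch size is an arbitrary example:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def preprocess(sample: int) -> int:
    # Stand-in for CPU-side work (decoding, tokenizing, augmenting)
    # that happens before a batch is handed to the GPU/accelerator.
    return sample * sample

def to_batches(items, batch_size):
    """Group a flat list of processed samples into fixed-size batches."""
    return [list(items[i:i + batch_size])
            for i in range(0, len(items), batch_size)]

if __name__ == "__main__":
    # Fan pre-processing out across the host's cores; more cores mean
    # more samples prepared per second for the accelerators to consume.
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        processed = list(pool.map(preprocess, range(8)))
    print(to_batches(processed, 4))  # [[0, 1, 4, 9], [16, 25, 36, 49]]
```

When the accelerators drain batches faster than the host can produce them, the pipeline stalls; adding host cores (and faster single-threaded decode) is what keeps the feed ahead of the consumers.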

  3. Higher Memory Bandwidth and Capacity
    Intel Xeon 6 is the first processor family to introduce Multiplexed Rank DIMMs (MRDIMMs). This innovative memory technology improves bandwidth and latency for memory-bound AI and HPC workloads, delivering up to 2.3x higher memory bandwidth than the previous generation.(2) Intel Xeon 6 also supports two DIMMs per memory channel, enabling the large memory capacities that AI systems need as model sizes and data sets keep growing.

    Intel Xeon 6 also provides up to 504 MB of L3 cache and support for Compute Express Link (CXL). CXL maintains memory coherency between the CPU memory space and memory on attached devices, enabling high-performance resource sharing, reduced software stack complexity, and lower overall system cost.
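To put the DIMM transfer rates into perspective, theoretical per-channel bandwidth is just the transfer rate times the standard 64-bit (8-byte) channel width. The two rates below are the ones quoted in the configuration footnote; this simple model ignores real-world efficiency factors:

```python
def channel_bandwidth_gbs(mt_per_s: float) -> float:
    """Theoretical peak bandwidth of one 64-bit DDR channel, in GB/s."""
    # Each transfer moves 8 bytes across a standard 64-bit channel.
    return mt_per_s * 1e6 * 8 / 1e9

# Transfer rates quoted in the configuration footnote:
# DDR5 at 5,600 MT/s (baseline) vs. MRDIMM (MCR) at 8,800 MT/s.
for name, rate in [("DDR5-5600", 5600), ("MRDIMM-8800", 8800)]:
    print(f"{name}: ~{channel_bandwidth_gbs(rate):.1f} GB/s per channel")
```

The 2.3x system-level gain in the footnote reflects more than the per-channel rate increase alone (channel count and configuration also changed between the baseline and new systems), but the per-channel arithmetic shows where most of the headroom comes from.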

  4. Dedicated RAS Support
    Intel’s industry-leading reliability, availability, and serviceability (RAS) support reduces costly downtime for large AI/HPC systems. Advanced management capabilities include telemetry, platform monitoring, control over shared resources, and real-time firmware updates. RAS benefits from the collective expertise of platform partners, ISVs, and solution integrators. Minimize business disruptions with Intel Xeon 6 processors, built to maximize uptime and operational efficiency.

  5. Flexibility for Mixed Workloads
    Intel Xeon 6 processors are designed to support a wide variety of workloads as host CPUs, delivering both performance and efficiency. In some cases, host CPUs in AI systems also need to run limited AI work during the data pre-processing phase. Intel® Advanced Matrix Extensions (Intel® AMX) now add support for FP16 precision arithmetic, helping with data pre-processing and other host CPU responsibilities in AI-accelerated systems.
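On Linux, AMX support surfaces as CPU flags in /proc/cpuinfo; the upstream kernel reports names such as amx_tile, amx_bf16, amx_int8, and amx_fp16. The parser below is a minimal sketch of checking for them, not an official detection method:

```python
def amx_features(cpuinfo_text: str) -> set:
    """Return AMX-related CPU flags found in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return {f for f in flags if f.startswith("amx")}
    return set()

# On a live Linux host (other platforms won't have /proc/cpuinfo):
try:
    with open("/proc/cpuinfo") as f:
        print(amx_features(f.read()) or "no AMX flags reported")
except OSError:
    print("/proc/cpuinfo not available on this platform")
```

Production code would typically rely on the CPUID instruction or a library rather than text parsing, but the flag names make it easy to confirm at a glance whether a host exposes AMX FP16.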

Learn more about the additional benefits that Intel Xeon 6 processors can deliver as the host CPU of choice for AI-accelerated systems.

(1) Based on MLPerf benchmark testing as of 2024. For details, visit https://mlcommons.org/.

(2) Based on Intel analysis as of May 2024. Baseline: 1-node, 2 x Intel Xeon Platinum 8592+ processors, 64 cores, Intel® Hyper-Threading Technology (Intel® HT Technology) on, Intel® Turbo Boost Technology on, NUMA configuration SNC2, 1,024 GB total memory (16 x 64 GB DDR5 5,600 megatransfers per second [MT/s]), BIOS version 3B07.TEL2P1, microcode 0x21000200, Ubuntu 24.04, Linux version 6.8.0-31-generic, tested by Intel as of May 2024. New: 1-node, pre-production platform, 2 x Intel Xeon 6 processors with P-cores, Intel HT Technology on, Intel Turbo Boost Technology on, NUMA configuration SNC3, 3,072 GB total memory (24 x 128 GB MCR 8,800 MT/s), BIOS version BHSDCRB1.IPC.0031.D97.2404192148, microcode 0x81000200, Ubuntu 23.10, kernel version 6.5.0-28-generic. Software: NEMO v4.2.2. ORCA025 dataset from CMCC. Intel® Fortran Compiler Classic and Intel® MPI from 2024.1; Intel® oneAPI HPC Toolkit. Compiler flags “-i4 -r8 -O3 -xCORE-AVX2 -fno-alias -fp-model fast=2 -align array64byte -fimf-use-svml=true.” 

 

Notices and Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.