
Intel® Xeon® 6 Processors: The Smart Total Cost of Ownership Choice


Author: Troy Wallin, Hardware Research Engineer, Intel

In today’s IT landscape, performance is only half the equation. Total cost of ownership (TCO) is a crucial metric for IT buyers, who must balance performance, efficiency, and budget. As power and cooling costs soar, rack and floor space shrink, server maintenance challenges persist, and hardware warranties expire, TCO becomes an important strategic lever for enterprises. The latest Intel® Xeon® 6 processors deliver performance advantages across key enterprise workloads, enabling companies to deploy fewer servers while delivering aggregate performance similar to AMD EPYC solutions. This Intel Xeon advantage can enable measurable TCO savings across infrastructure, energy, and operational costs for artificial intelligence (AI), high-performance computing (HPC), and web services.(2,3,4,5)

Advantages of the Intel Xeon 6 Processor Family

Intel Xeon 6900-series processors are delivered in a new class of Intel server platform design, offering customers the high performance, high memory bandwidth, and high throughput ideal for cloud, HPC, and AI environments. These processors feature higher core counts, more memory channels, and more input/output (I/O) lanes, with thermal design power (TDP) levels that exceed those of other Intel Xeon 6 processors.

Delivered on an updated server platform, Intel Xeon 6700-series and Intel Xeon 6500-series processors offer high performance in cost- and power-efficient solutions, ideal for a wide array of data center environments. As the foundational central processing unit (CPU) for AI systems, Intel Xeon 6 also pairs exceptionally well with a graphics processing unit (GPU) as a host node processor. These processors are available in one-socket to eight-socket options, featuring enhanced I/O and memory within established data center power and cooling footprints.

Intel Xeon 6 processor key architectural strengths include:

  • Intel® Advanced Matrix Extensions (Intel® AMX) deliver AI acceleration built into every core.(1)
  • Multiplexed Rank DIMMs (MRDIMMs), with expected data transfer rates of up to 8,800 megatransfers per second (MT/s), deliver improved memory bandwidth for demanding memory access patterns.(1)
  • Enhanced software optimizations from a vast open software ecosystem, including the Intel® oneAPI Deep Neural Network Library (oneDNN), the OpenVINO™ toolkit, and Intel® Extension for PyTorch (a minimal usage sketch follows this list).
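
As a rough illustration of how those software optimizations are consumed, the following is a minimal sketch, assuming torch and intel_extension_for_pytorch are installed; the model and input shapes are placeholders rather than anything from Intel's published test configurations.

```python
# Minimal sketch of enabling Intel CPU software optimizations for inference.
# Assumes torch and intel_extension_for_pytorch are installed; the model and
# input below are placeholders, not from Intel's published test configurations.
import torch
import intel_extension_for_pytorch as ipex

# Stand-in model; in practice this would be a real PyTorch model.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).eval()

# ipex.optimize applies oneDNN-backed operator optimizations; with
# dtype=torch.bfloat16, eligible matrix operations can dispatch to
# Intel AMX on CPUs that support it.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    output = model(torch.randn(32, 1024))
print(output.shape)
```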

Comparative Analysis: Save Power and Money When Deploying New Servers

Deep learning recommendation models (DLRMs) combine sparse categorical and dense numerical features to predict user-item relevance or click-through rates; performance is measured in samples per second. Our performance testing and cost analysis found that, while delivering a similar aggregate level of performance, choosing a 128-core Intel Xeon 6980P instead of a 128-core AMD EPYC 9755 for a DLRM workload can result in the following savings:(2)

  • 1.87x greater performance per server
  • 47% fewer servers
  • 40% energy and CO2 savings
  • 46% TCO savings


Figure 1: Intel Xeon 6980P versus AMD EPYC 9755 while running a Recommendation System DLRM workload(2)
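
To make the relationship between these numbers concrete, the sketch below shows, under assumed throughput figures, how a per-server performance advantage translates into a server-count reduction. It is illustrative arithmetic only, not the 4-year cost model behind endnote 2.

```python
# Illustrative arithmetic only; the published analysis in endnote 2 uses a
# detailed 4-year cost model. Throughput numbers below are hypothetical.
import math

target_throughput = 100_000   # required aggregate samples/s (assumed)
perf_per_baseline = 1_000     # samples/s per AMD EPYC 9755 server (assumed)
perf_ratio = 1.87             # Xeon 6980P per-server advantage from Figure 1

servers_baseline = math.ceil(target_throughput / perf_per_baseline)
servers_xeon6 = math.ceil(target_throughput / (perf_per_baseline * perf_ratio))

print(f"Baseline servers: {servers_baseline}")   # 100
print(f"Xeon 6 servers:   {servers_xeon6}")      # 54
print(f"Server reduction: {1 - servers_xeon6 / servers_baseline:.0%}")
# A 1.87x per-server advantage means roughly 1/1.87 of the servers are needed,
# about a 46-47% reduction, in line with the figure above; energy and TCO
# savings then follow from fewer racks, lower power draw, and lower
# per-server operating costs.
```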

OpenFOAM is open-source computational fluid dynamics software for HPC, widely used in academia and in the automotive and aerospace industries; performance is measured in time to completion (seconds). While delivering a similar level of performance, choosing a 128-core Intel Xeon 6980P instead of a 128-core AMD EPYC 9755 for an OpenFOAM workload can result in the following savings:(3)

  • 1.43x faster performance per server
  • 28% fewer servers
  • 24% energy and CO2 savings
  • 28% TCO savings


Figure 2: Intel Xeon 6980P versus AMD EPYC 9755 while running an OpenFOAM geomean workload.(3)

Enterprises often deploy lower-core-count CPUs in their data centers instead of the highest available core-count CPUs due to a combination of cost efficiency, workload optimization, and thermal/power considerations. In our performance testing and cost analysis, a 64-core Intel Xeon 6760P delivered cost and energy savings versus a 64-core AMD EPYC 9535 while providing a similar level of performance on the following NGINX and Vision Transformer workloads.

NGINX is a web server that processes authenticated connections; performance is measured in connections per second. Clients open connections without requesting content and initiate a Transport Layer Security (TLS) handshake using key exchange and certificate authentication. Choosing an Intel Xeon 6760P instead of an AMD EPYC 9535 for an NGINX TLS (1-socket) workload can result in the following savings:(4)

  • 1.55x greater performance per server
  • 37% fewer servers
  • 43% energy and CO2 savings
  • 41% TCO savings


Figure 3: Intel Xeon 6760P versus AMD EPYC 9535 while running an NGINX TLS (1-socket) workload.(4)
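
For context on what the connections-per-second metric captures, below is a minimal handshake-only client sketch; the host, port, and iteration count are placeholders, and the published result in endnote 4 comes from Intel's full NGINX TLS benchmark configuration, not this script.

```python
# Minimal sketch of timing TLS handshakes against a web server (connections/s).
# Host, port, and iteration count are placeholders; the result in endnote 4
# comes from Intel's full NGINX TLS benchmark setup, not this script.
import socket
import ssl
import time

HOST, PORT, N = "localhost", 443, 200   # hypothetical test endpoint

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE         # e.g., a test server with a self-signed cert

start = time.perf_counter()
for _ in range(N):
    with socket.create_connection((HOST, PORT)) as raw:
        # wrap_socket performs the TLS handshake (key exchange plus certificate
        # authentication); the connection is then closed without requesting content.
        with ctx.wrap_socket(raw, server_hostname=HOST):
            pass
elapsed = time.perf_counter() - start
print(f"{N / elapsed:.1f} TLS handshakes per second")
```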

Vision Transformer is a type of AI model that helps computers understand and interpret visual data such as images; performance is measured in samples per second. The model identifies and classifies objects within images, which is useful for applications such as self-driving cars, medical imaging, and facial recognition. Choosing an Intel Xeon 6760P instead of an AMD EPYC 9535 for a Vision Transformer workload can result in the following savings:(5)

  • 2.09x greater performance per server
  • 51% fewer servers
  • 41% energy and CO2 savings
  • 52% TCO savings


Figure 4: Intel Xeon 6760P versus AMD EPYC 9535 while running a Vision Transformer workload.(5)
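
As a rough illustration of the samples-per-second metric, this sketch times batched Vision Transformer inference on CPU; the torchvision model, batch size, and bfloat16 autocast are assumptions for illustration, not the configuration behind endnote 5.

```python
# Rough sketch of measuring Vision Transformer throughput in samples/second on CPU.
# The torchvision model, batch size, and bfloat16 autocast are illustrative
# assumptions, not the configuration behind endnote 5.
import time
import torch
from torchvision.models import vit_b_16

model = vit_b_16().eval()              # ViT-Base/16 with random weights
batch = torch.randn(32, 3, 224, 224)   # 32 RGB images at 224x224
iters = 10

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    model(batch)                       # warm-up pass
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"{(iters * batch.shape[0]) / elapsed:.1f} samples per second")
```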

Optimize Your TCO Today

Find out how your business can realize TCO savings with Intel Xeon 6. For more examples, sign up for the free Intel Xeon Processor Advisor Suite today and learn how Intel Xeon 6 processors can help meet your diverse power, performance, and efficiency requirements.

 

Endnotes:

1. Availability of accelerators and MRDIMM support varies depending on SKU. Visit the Intel® Product Specifications page for additional product details.
2. See [9T222] at intel.com/processorclaims: Intel® Xeon® 6. Estimated over 4 years. Test by Intel as of January 2025. Results may vary.
3. See [9T223] at intel.com/processorclaims: Intel® Xeon® 6. Estimated over 4 years. Test by Intel as of January 2025. Results may vary.
4. See [7T223] at intel.com/processorclaims: Intel® Xeon® 6. Estimated over 4 years. Test by Intel as of January 2025. Results may vary.
5. See [7T221] at intel.com/processorclaims: Intel® Xeon® 6. Estimated over 4 years. Test by Intel as of February 2025. Results may vary.

 

Notices and Disclaimers

Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.
Your costs and results may vary.
This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademark.
Intel technologies may require enabled hardware, software, or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.