
The IPU: A New, Strategic Resource for Cloud Service Providers

Patricia_Kummrow

Workloads in modern cloud data centers are increasingly structured as collections of microservices. While a microservice-oriented architecture has many benefits, it also creates substantial communication overhead due to its disaggregated nature. CPU cycles spent on this infrastructure overhead do not generate revenue for Cloud Service Providers (CSPs). A recent paper by Sriraman and Dhanotia found that microservice overhead at a hyperscaler can range from 31 to 83 percent¹, as illustrated in the figure below.


Bar graph showing that microservice overhead can range from 31 to 83 percent depending on web, ads, or cache tasks.

Recently, Intel unveiled the infrastructure processing unit (IPU). With an IPU-based architecture, CSPs can offload infrastructure tasks from server CPUs to IPUs, freeing those CPU cycles for revenue-generating tenant workloads and allowing the full capacity of every server CPU to be rented to customers.



Cloud Data Centers are Like Hotels, Not Houses


A simple metaphor based on houses and hotels helps to explain the division of workload ownership that has driven the development of IPUs. In my home, I want to move easily from the living room to the kitchen to the dinner table. We have an open kitchen, so everything is contained in one big room and we can move freely from one area to another.


Things are different in a hotel, where the guest rooms, dining hall, and kitchen are separate areas. Areas where hotel staff work are partitioned from the areas where the hotel guests eat, drink, sleep, and meet. Doors generally separate hotel areas that serve different functions and, for safety and security reasons, you may even need a badge to pass through a door between a guest area and a staff area.


A blueprint of a house and one of a hotel, showing how they are laid out differently.

This separation of guest and staff areas in a hotel is analogous to the disaggregation of tenant and CSP workloads in a data center architecture that includes IPUs. By introducing IPUs into the data center to implement infrastructure functions, the CSP moves its infrastructure workloads onto the IPU, which unburdens the server CPUs so that they can run more tenants’ applications.


The IPU-based data center architecture offers several major advantages:




  • The strong separation between infrastructure functions and tenant workloads provides much better isolation between these functions, which greatly enhances system security.

  • Tenants get full control of, and full performance from, the server CPU.

  • Spikes in infrastructure workloads do not create performance issues on server CPUs; such interference is a growing problem in the traditional data center architecture, as the statistics cited above illustrate.

  • CSPs can maximize data center revenue by offloading infrastructure tasks from CPUs to IPUs, which frees server CPU cycles for revenue-generating tasks.

  • Offloading infrastructure tasks to the IPU allows CSPs to rent out 100 percent of their server CPUs to customers.

  • Because IPUs target infrastructure tasks rather than general-purpose processing, they can apply hardware acceleration and more finely tuned compute to deliver significantly better performance and power efficiency.

  • IPUs enable a fully diskless server architecture in the cloud data center. In traditional enterprise data center architectures, each server has its own set of attached disk drives and SSDs for storage.


Because it is hard to predict storage usage on a tenant-by-tenant basis, the traditional data center architecture forces each server to be over-provisioned with storage resources to handle peak storage loads. With a diskless server architecture, a central service provides storage resources for all tenants. A possible diskless server architecture appears in the figure below.


A flowchart showing how an IPU can virtualize storage over the network.

It is much easier and far more efficient to manage one central storage service than it is to manage the storage resources of hundreds of thousands of servers in a data center.
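
To see the efficiency gain, consider a toy capacity model (a hypothetical sketch with made-up numbers, not CSP data): sizing every server for its own peak demand costs far more than sizing one pooled service for the peak of the aggregate, because individual peaks rarely coincide.

    import random

    random.seed(7)

    SERVERS = 200         # servers in the fleet (hypothetical)
    SAMPLES = 2000        # demand snapshots over time
    MEAN_TB, PEAK_TB = 2.0, 8.0   # per-server mean and worst-case demand, in TB

    # Simulate fluctuating per-server storage demand (illustrative distribution).
    demand = [[min(PEAK_TB, random.expovariate(1 / MEAN_TB)) for _ in range(SERVERS)]
              for _ in range(SAMPLES)]

    # Attached-disk model: every server is sized for its own observed peak.
    local_provisioned = sum(max(t[s] for t in demand) for s in range(SERVERS))

    # Diskless model: one central service sized for the peak of the aggregate.
    pooled_provisioned = max(sum(t) for t in demand)

    print(f"per-server provisioning: {local_provisioned:,.0f} TB")
    print(f"pooled provisioning:     {pooled_provisioned:,.0f} TB")
    print(f"pooling saves ~{1 - pooled_provisioned / local_provisioned:.0%}")

The gap widens with fleet size: aggregate demand concentrates near the fleet-wide mean (statistical multiplexing), while the sum of individual peaks keeps scaling with worst cases.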



Data Center Evolution


Intel has evolved its data center products in partnership with key CSPs – including Microsoft, Baidu, JD.com, and VMware – for several years. We are the volume leader in the IPU market with products based on Intel® Xeon® D CPUs, Intel® FPGAs, and Ethernet components. The first generation of Intel’s FPGA-based IPU platforms was designed in collaboration with our hyperscale partners and is already deployed at data centers owned and operated by multiple CSPs.


During the five years that we have been in the IPU business with our FPGA-based products, we have observed that hyperscale CSPs realize the value of IPUs in stages:




  • Stage 1: Accelerated Networking – to offload common networking tasks, such as virtual switch and firewall functions, from the server CPU to an IPU. Offloading user plane functions (UPFs) such as flow lookup and encapsulation/decapsulation from the CPU to the IPU frees CPU cycles; a toy sketch of these two steps appears after this list.

  • Stage 2: Accelerated Storage – to move the storage stack from the server CPU to the IPU, increasing storage throughput while reducing storage complexity, overhead, and management.

  • Stage 3: Accelerated Security – to offload encryption/decryption, compression, and other security functions that would otherwise consume server CPU cycles. (These security functions are often paired with the offloaded storage functions of Stage 2). In addition, an IPU can initiate the boot and configuration of the host system, which further hardens security by isolating secure functions and providing a root of trust that is separate from the CPU.

  • Stage 4: Infrastructure Processing – perhaps the most sophisticated usage, offloading hypervisor service management functions from the server CPU to the IPU.
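
As a rough illustration of the Stage 1 data path, here is a minimal Python sketch (the 5-tuple key is standard, but the table entries and outer-header format are invented for illustration; a real IPU implements this in hardware at line rate). The inner loop is a flow-table lookup keyed on the packet’s 5-tuple, followed by an action such as encapsulating the packet with an outer tunnel header:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FiveTuple:
        src_ip: str
        dst_ip: str
        proto: int
        src_port: int
        dst_port: int

    # Match-action flow table: 5-tuple -> (tunnel endpoint, virtual network ID).
    # Hypothetical entry; in practice the control plane populates this table.
    FLOW_TABLE = {
        FiveTuple("10.0.0.5", "10.0.1.9", 6, 40522, 443): ("192.168.7.3", 5001),
    }

    def encapsulate(payload: bytes, outer_dst: str, vni: int) -> bytes:
        # Toy outer header: tunnel destination plus virtual network ID.
        return f"OUTER dst={outer_dst} vni={vni}|".encode() + payload

    def process(key: FiveTuple, payload: bytes):
        action = FLOW_TABLE.get(key)      # flow lookup
        if action is None:
            return None                   # miss: punt to the control plane
        outer_dst, vni = action
        return encapsulate(payload, outer_dst, vni)

    print(process(FiveTuple("10.0.0.5", "10.0.1.9", 6, 40522, 443), b"GET /"))

Decapsulation is the mirror image on the receive side: strip the outer header, then look up the inner flow to deliver the packet to the right virtual machine.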


Current Intel FPGA-based IPUs combine an Intel® Stratix® 10 FPGA with an Intel Xeon D processor, pairing optimized accelerators – configurable, FPGA-based data paths – with software-programmable CPUs that securely accelerate and manage infrastructure functions in the data center. This hybrid architecture enables network management at hardware speeds while retaining the software flexibility needed to implement control-plane functions. What makes these IPUs powerful is that both layers are programmable: the hardware data path through the FPGA’s on-board resources, and the software control plane through an infrastructure OS stack running on the IPU’s on-board processor. IPUs differ from SmartNICs in that they serve as a secure, independent control point that is not directly accessible to tenant workloads.
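
The division of labor can be made concrete with a fast-path/slow-path sketch (hypothetical class and method names, not Intel’s software stack): the hardware data path only performs table lookups, while the software control plane handles misses, applies policy, and installs entries so that subsequent packets stay in hardware.

    class FastPath:
        """Stand-in for the FPGA data path: table lookup only."""
        def __init__(self):
            self.table = {}

        def forward(self, flow, packet, slow_path):
            port = self.table.get(flow)
            if port is None:
                # Miss: punt to the software control plane.
                port = slow_path.handle_miss(flow, self)
            return port, packet

    class SlowPath:
        """Stand-in for the control plane on the IPU's CPU cores."""
        def handle_miss(self, flow, fast_path):
            port = self.policy_decision(flow)   # routing, firewall, billing, ...
            fast_path.table[flow] = port        # install so later packets hit hardware
            return port

        def policy_decision(self, flow):
            return hash(flow) % 8               # toy policy: pick one of 8 ports

    fp, sp = FastPath(), SlowPath()
    print(fp.forward(("10.0.0.5", "10.0.1.9"), b"pkt1", sp))  # miss -> slow path
    print(fp.forward(("10.0.0.5", "10.0.1.9"), b"pkt2", sp))  # hit  -> fast path

Only the first packet of a flow pays the software cost; every packet after it is a hardware table hit, which is the property that lets the hybrid design combine line-rate forwarding with flexible control-plane policy.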



Future IPU and SmartNICs from Intel


Going forward, we are rolling out additional IPUs based on even more advanced processors, FPGAs, and integrated ASICs, while continuing to build on the existing IPU software foundation that enables cloud operators and ecosystem vendors to build ever more powerful cloud orchestration software. At Intel Architecture Day this week, we announced two new IPUs (Mount Evans and Oak Springs Canyon) and the Intel N6000 Acceleration Development Platform (formerly code-named Arrow Creek).


As a follow-on to the successful Big Spring Canyon platform, Oak Springs Canyon (OSC) is a platform based on an Intel® Agilex™ FPGA, which currently leads the FPGA industry in performance, power consumption, and workload efficiency.² Working in concert with servers based on Intel Xeon CPUs, OSC delivers the infrastructure acceleration needed to offload 2x100G workloads. OSC has a rich software ecosystem optimized for Intel® CPUs, including the Intel Open FPGA Stack – a scalable, source-accessible software and hardware infrastructure that enables our partners and customers to create customized solutions. OSC’s capabilities and features are aligned with the needs of next-wave CSP deployments that will employ 100G networks.


Product photo of Oak Springs Canyon

Another new Intel development, the Intel N6000 Acceleration Development Platform (formerly code-named Arrow Creek), is an FPGA-based 100GbE SmartNIC acceleration development platform (ADP). It builds upon the success of the Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC) N3000, which is currently deployed in data centers operated by some of the world’s top Communications Service Providers (CoSPs). The Intel N6000 ADP is based on the Intel Agilex FPGA and the Intel® Ethernet Controller E810. It is designed to be used with Intel-based servers, and it supports several types of infrastructure tasks to help telco providers accelerate workloads such as Juniper Contrail, OVS, and SRv6.


Product image of Arrow Creek

We also announced our first ASIC-based IPU, code-named Mount Evans, designed in collaboration with a large CSP. The Mount Evans IPU is built around a best-in-class packet-processing engine, instantiated in an ASIC. This ASIC supports many existing use cases – including vSwitch offload, firewalls, and virtual routing – while providing significant headroom for future use cases. The Mount Evans IPU emulates NVMe devices at very high IOPS rates by leveraging and extending the Intel® Optane™ NVMe controller. The same Intel infrastructure OS that runs on our FPGA-based IPUs will run on Mount Evans as well.
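
The emulation idea can be pictured with a deliberately simplified toy (invented interfaces, with no real NVMe command framing, queueing, or DMA): the host submits what it believes are local NVMe commands, and the device model translates each one into a request to the remote storage service.

    class RemoteBackend:
        """Stand-in for the central storage service reached over the network."""
        def __init__(self):
            self.blocks = {}

        def read(self, lba):
            return self.blocks.get(lba, b"\x00" * 512)

        def write(self, lba, data):
            self.blocks[lba] = data

    class EmulatedNvmeNamespace:
        """What the host sees as a local NVMe namespace; each command is
        translated into a network request to the remote backend."""
        def __init__(self, backend):
            self.backend = backend

        def submit(self, opcode, lba, data=None):
            if opcode == "WRITE":
                self.backend.write(lba, data)
                return b""
            if opcode == "READ":
                return self.backend.read(lba)
            raise ValueError(f"unsupported opcode: {opcode}")

    ns = EmulatedNvmeNamespace(RemoteBackend())
    ns.submit("WRITE", lba=42, data=b"hello".ljust(512, b"\x00"))
    print(ns.submit("READ", lba=42)[:5])   # b'hello'

A production IPU does this with hardware queues and DMA at millions of IOPS; the sketch captures only the translation step that makes remote storage look local to the host.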


Additional technology innovations in the Mount Evans IPU include a next-generation reliable transport protocol, co-developed with our CSP partner to solve the long-tail latency problem on lossy networks, and advanced crypto and compression accelerators.
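
The “long tail” is easy to reproduce in a toy simulation (hypothetical parameters, not a model of Intel’s protocol): a transfer completes only when all of its packets have arrived, so each lost packet adds one recovery delay, and the 99th-percentile completion time is governed almost entirely by how quickly losses are recovered.

    import random

    random.seed(1)

    LOSS, PKTS, TRIALS = 0.01, 100, 20_000
    RTT, TIMEOUT, FAST_RETX = 1.0, 10.0, 1.5   # arbitrary time units

    def transfer_time(recovery_delay):
        # A transfer finishes when its slowest packet arrives; each loss
        # costs one recovery delay (timeout vs. fast selective retransmit).
        worst = RTT
        for _ in range(PKTS):
            t = RTT
            while random.random() < LOSS:   # retransmit until delivered
                t += recovery_delay
            worst = max(worst, t)
        return worst

    def p99(xs):
        return sorted(xs)[int(0.99 * len(xs))]

    slow = [transfer_time(TIMEOUT) for _ in range(TRIALS)]
    fast = [transfer_time(FAST_RETX) for _ in range(TRIALS)]
    print(f"timeout-based recovery:  p99 = {p99(slow):.1f}")
    print(f"fast selective recovery: p99 = {p99(fast):.1f}")

The median transfer sees no loss at all in either case; the two recovery schemes differ only in the tail, which is exactly where disaggregated, fan-out-heavy microservice workloads feel the pain.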


Design diagram for Mount Evans

The IPU: A New, Strategic Resource for CSPs


The IPU is a strategic element in Intel’s cloud strategy. We believe our leading IPU portfolio provides the common infrastructure foundation that allows our cloud customers to fully leverage their general-purpose compute, XPU, and acceleration resources in the heterogeneous data center architectures of the near future. The blending of these capabilities matches the ongoing trends in microservices development and offers a unique opportunity to build optimized, function-based infrastructure that pairs high-speed hardware networking components with common software frameworks. The IPU gives CSPs an opportunity to rethink data center architecture, to accelerate the cloud, and to host more revenue-generating services – tenant apps running on virtual machines – on every server CPU in the data center.


With its ability to increase performance, reduce cost and deliver a better cloud data center architecture, we think the IPU will become a strategic component of future data center designs.








1 Akshitha Sriraman and Abhishek Dhanotia, Accelerometer: Understanding Acceleration Opportunities for Data Center Overheads at Hyperscale, ASPLOS XXV, Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems: https://dl.acm.org/action/showFmPdf?doi=10.1145%2F3373376
2 Steven Leibson, Breakthrough FPGA News from Intel: https://blogs.intel.com/psg/breakthrough-fpga-news-from-intel/


Notices & Disclaimers
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.
Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
Code names are used by Intel to identify products, technologies, or services that are in development and not publicly available. These are not "commercial" names and not intended to function as trademarks.
Statements that refer to future plans or expectations are forward-looking statements. These statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in such statements. For more information on the factors that could cause actual results to differ materially, see our most recent earnings release and SEC filings at www.intc.com.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

About the Author
Patty Kummrow is a VP in the Networking and Edge Group, and the GM of the Ethernet Products Group at Intel Corporation. She leads the strategy, architecture, development, manufacturing, and marketing of Intel® Ethernet Network Adapters, Controllers, and IPUs to enable next-generation solutions used to accelerate networking, storage, and network security in data centers. Kummrow has two decades of experience in CPU design and technical leadership. She has led multiple teams developing Intel processors for data center, networking, storage, and autonomous driving applications. She holds a B.S. in Electrical Engineering from the University of Texas and an M.S. in Technology Management from Walden University.