There’s a revolution coming. It’s a data center revolution and Infrastructure Processing Units (IPUs) are at the heart of it all. IPUs are a response to the server evolution that has occurred over the past several years. The advent of multi-core CPUs enabled the creation of servers that run multiple virtual machines (VMs), which must be managed. This need for VM management drove hypervisor development. As the complexity of managing server applications has risen, hypervisors have grown more complex and now absorb a significant number of CPU cycles. In addition, migration from monolithic applications to applications made of microservices further complicates the management task.
The growing complexity of managing applications, distributing microservices across VMs, and administering the data center's associated networking and storage infrastructure has reached a level of overhead where a new system architecture is required to meet data center needs. Intel firmly believes that an IPU-based architecture best meets those needs.
The data center of the future is designed for an increasingly disaggregated application environment based on microservices, and it harnesses the accelerated processing capabilities of heterogeneous computing. In this architecture, traditional server nodes are augmented with specialized compute nodes that accelerate applications and microservices as well as storage and network services, and the CPU, acceleration, and storage nodes are all interconnected by IPUs. For many Tier 1 hyperscalers, this architectural transformation is already underway.
There are four key technical developments driving this data center transformation:
- The advent and rapid growth of cloud-native applications drives the need for specialized infrastructure that maximizes the agility of cloud-based services while improving data center efficiency.
- The partitioning of applications into myriad microservices drives the development of a distributed, heterogeneous computing environment where each microservice runs on the compute or acceleration node best suited to that workload.
- The broad adoption of the microservices model has driven the creation of data center orchestration systems to automate and manage the distribution of microservices across heterogeneous compute servers within the data center.
- The growing use of microservices, VMs, containers, and orchestration for these containers drives the development of service meshes that simplify microservice-to-microservice communications and make them more efficient. Service meshes have become a standard part of the cloud-native stack.
Today, microservices are increasingly used for workloads at the application, storage, and networking layers in data centers, which in turn amplifies the four major developments listed above. IPUs make all these layers far more efficient and more cost-effective.
IPUs are an evolved form of SmartNICs. While retaining SmartNIC capabilities, IPUs deliver higher levels of security and control within the data center. Tier 1 providers are pioneering the use of IPUs and driving their broader adoption.
IPUs offer several major benefits to data center operators:
- They reduce hypervisor and infrastructure stack overhead in CPUs, making more CPU cycles available for application and tenant workloads.
- They offload the storage stack from the CPU, freeing even more CPU cycles for application and tenant workloads.
- They offload cycle-intensive infrastructure tasks such as cryptographic encryption and decryption and packet processing.
- In the extreme case, they can offload the entire hypervisor, freeing all cores in a CPU for applications and microservices, which is essential for bare-metal service offerings.
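The cumulative effect of the offloads above can be illustrated with simple arithmetic. The sketch below uses purely hypothetical overhead fractions (the core count and per-task percentages are assumptions for illustration, not measured Intel figures) to show how the savings compound into whole cores reclaimed for tenant workloads:

```python
# Hypothetical illustration of CPU cycles reclaimed by offloading
# infrastructure work to an IPU. All fractions below are assumed
# example values, not measured figures.
TOTAL_CORES = 32

# Assumed shares of total CPU capacity consumed by infrastructure tasks:
hypervisor_share = 0.15      # hypervisor and infrastructure stack
storage_share = 0.10         # storage stack processing
crypto_packet_share = 0.08   # encryption/decryption and packet processing

offloaded_fraction = hypervisor_share + storage_share + crypto_packet_share
freed_cores = TOTAL_CORES * offloaded_fraction

print(f"Cores reclaimed for tenant workloads: {freed_cores:.1f} of {TOTAL_CORES}")
```

Under these assumed numbers, roughly a third of the server's cores return to revenue-generating work; in the bare-metal case described above, where the entire hypervisor moves to the IPU, the reclaimed fraction grows further.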
Intel, along with Silicom and Napatech, has produced a comprehensive webinar on IPUs. This webinar takes an in-depth look at the developments that are driving the data center's transformation, as listed above. The webinar then discusses how IPUs make data centers far more efficient and more secure. It provides you with a solid overview of where IPU development is today, and where it's going in the future. The webinar also discusses how Silicom and Napatech can help you customize, optimize, and productize IPU-based solutions for networking and storage infrastructure acceleration.
The Intel/Silicom/Napatech IPU webinar is now available on demand. Watch now.