Intel® Xeon® D Processors: A Platform Designed for Edge Applications

Rajesh_Gadiyar

Transformative forces that include 5G, edge computing, and cloud-native infrastructure are dramatically accelerating the development of new use cases and services.

The era of 5G and edge computing, coupled with the cloud-native transformation, is bringing computing and communications together like never before and driving new use cases and services. Protocols, data rates, services, and performance now scale as gradients of the same underlying applications rather than as isolated point solutions. Many new services require ubiquitous connectivity with low latency and assured quality of service. This new generation of services requires high-performance computing and increasingly uses AI for automation and fast, autonomous decision making.

These scaled requirements, from the cloud out to edge access points, are driving the era of edge computing. The edge transition brings cloud-style compute closer to the application, enabling network operators to realize the flexibility and cost benefits of the cloud. Additionally, processing data closer to the application significantly improves the user experience. An ideal edge platform should therefore bring together the requirements of cloud computing and communications.

 

Intel® Xeon® D Processors Power Next-Generation Edge Computing

Intel Xeon D-2700 and D-1700 processors are ideal for edge applications such as 5G radio access networks (RANs), secure access service edge (SASE) deployments, industrial IoT, and content delivery networks (CDNs). Providing outstanding compute and AI performance married with integrated networking and security capabilities, Intel Xeon D-2700 and D-1700 processors are a transformational edge platform. The architecture optimizes bandwidth, latency, privacy, and energy efficiency, making it practical to process data close to the point where it is generated, driving the trend of deploying increasingly sophisticated compute resources at the network edge.

To meet advanced requirements at the network edge, Intel Xeon D-2700 and D-1700 processors support high-density compute at low thermal design power (TDP). Edge applications often have significant power and thermal constraints because of the environments they are deployed in. As a result, they must deliver optimal performance per watt. Delivered in a power-efficient system-on-chip (SoC) form factor, the platform supports easy design-in, meeting specialized requirements for edge deployments of indoor, outdoor, and ruggedized devices, including at extended temperatures. It is also fully software- and API-compatible with previous generations of Intel Xeon processors, and it draws on the whole spectrum of Intel ecosystem enablement, including partner programs and open-source leadership.

The architecture of the Intel Xeon D-2700 and D-1700 processors is designed to fulfill the distinct security requirements at the distributed edge and to optimize and accelerate AI/deep learning workloads. A few of the platform’s key features and capabilities in both these areas are described below.

 

Enabling Security for Distributed Workloads

Traditional approaches to network security have depended on a “walled city” model, where the objective was to keep attackers out by focusing on the network perimeter. While their access to data and resources might be controlled, users, applications, and services inside the wall were trusted.

When breaches inevitably occur, that approach leaves the door open to lateral movement of threats inside the trusted space. In addition, deploying compute resources to a distributed edge breaks down the notion of a network perimeter, demanding a modern alternative to walled cities.

Today’s cloud-native topologies pass software components freely across the containerized environment. Security services such as authentication, virtual firewalls, and virtual security gateways run in containers to support this highly distributed and dynamically defined environment. Intel Xeon D-2700 and D-1700 processors include hardware-based features and capabilities that enhance data protection for edge workloads, while that data is at rest, in transit, and in use.

 

Accelerated Encryption for Data at Rest and in Transit

Pervasive security capabilities, including symmetric and asymmetric encryption, can consume significant compute resources, adding overhead that interferes with application responsiveness or drives up total cost of ownership (TCO). Reducing that overhead is a critical enabler for pervasive encryption, which is a foundational requirement for data protection in distributed edge environments.

Intel® QuickAssist Technology (Intel® QAT) accelerates cryptographic ciphers, public key encryption, and compression/decompression. Offloading these functions to the Intel QAT accelerator not only provides higher application performance, but does so with a remarkable core efficiency, freeing up core resources for other work. Intel QAT is integrated in the Intel Xeon D-2700 and D-1700 processors, providing superior power efficiency compared to previous implementations based on PCI Express cards or other discrete accelerator options.
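To make the offload concrete, the sketch below shows a pure-software baseline of the compression work that Intel QAT takes off the general-purpose cores. This uses Python's zlib and does not touch QAT at all; with QAT present, the equivalent DEFLATE work is routed to the accelerator (for example, through Intel's QATzip library), with the API shown here being just an illustrative stand-in.

```python
import zlib

def software_compress(payload: bytes, level: int = 6) -> bytes:
    """DEFLATE compression executed on general-purpose CPU cores.
    This is the class of work Intel QAT can offload, freeing the
    cores for application logic."""
    return zlib.compress(payload, level)

def software_decompress(blob: bytes) -> bytes:
    """Inverse operation; also offloadable to the accelerator."""
    return zlib.decompress(blob)

# Repetitive telemetry-style payload compresses well.
payload = b"edge telemetry record " * 1024
blob = software_compress(payload)
assert software_decompress(blob) == payload
print(f"compressed to {len(blob) / len(payload):.1%} of original size")
```

The functional result is identical either way; what changes with QAT is where the cycles are spent, which is why the benefit shows up as core efficiency rather than as a different output.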

Network acceleration complex (NAC), available in select Intel Xeon D-2700 SKUs, enables very high-performance in-line cryptography, as well as high-performance Ethernet I/O and full Ethernet switching capabilities. The NAC packet pipeline uses in-line Intel QAT for encryption/decryption of IPsec data flows. Generation-to-generation performance gains using this inline capability can be greater than 50 percent. These in-line security capabilities are well suited to SASE (Secure Access Service Edge), SD-WANs, access gateways, and user-plane solutions.

For hardware-accelerator failover solutions, or for application compatibility independent of product SKU, Intel has further enhanced its built-in AES instructions and the optimized encryption libraries that use them. Intel® AES has received a performance upgrade with new vector extensions that can process up to four individual cipher blocks at a time, an efficiency gain that dramatically increases throughput.
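The reason four blocks can be processed at once is that in counter (CTR) mode, each keystream block depends only on the key and a counter value, so blocks are mutually independent and map naturally onto vector lanes. The sketch below illustrates that independence by computing keystream blocks in groups of four; note that HMAC-SHA256 stands in for the AES block cipher purely for illustration, so this is not real AES and not a production cipher.

```python
import hmac
import hashlib

KEY = b"\x01" * 32  # demo key, not a real deployment key

def keystream_block(key: bytes, counter: int) -> bytes:
    # Each block is a function of (key, counter) only, so all blocks
    # can be computed independently and in parallel. HMAC-SHA256 is a
    # stand-in PRF here; hardware would use the AES block cipher.
    msg = counter.to_bytes(16, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def ctr_xcrypt(key: bytes, data: bytes) -> bytes:
    """Encrypts or decrypts (CTR mode is its own inverse)."""
    nblocks = (len(data) + 15) // 16
    parts = []
    for base in range(0, nblocks, 4):
        # Four independent counter blocks per step, mirroring the
        # four cipher blocks the vectorized AES extensions handle
        # per instruction.
        parts.extend(keystream_block(key, c)
                     for c in range(base, min(base + 4, nblocks)))
    keystream = b"".join(parts)[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

msg = b"packet payload at the network edge"
ciphertext = ctr_xcrypt(KEY, msg)
assert ctr_xcrypt(KEY, ciphertext) == msg
```

In software the four blocks here are still computed serially; the hardware extensions turn that same independence into genuine data-level parallelism.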

 

Confidential Computing for Data in Use

Data typically needs to be decrypted for applications to make use of it, and it is vulnerable while in that unencrypted state. Applications traditionally cannot restrict data to keep it away from privileged software such as an OS or hypervisor, so any compromise of those potentially exposes the data.

The Intel Xeon D processor solves this dilemma with protected execution enclaves enabled by Intel® Software Guard Extensions (Intel® SGX). Application “secrets” such as encryption keys and passwords are held within these protected regions of memory. Developers use Intel SGX instructions to designate “trusted components” of code that execute on unencrypted secret data inside Intel SGX enclaves. Trusted code within the Intel SGX enclave treats everything outside the enclave as untrusted, based on a trust boundary created at the hardware level.
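The programming pattern this enables can be sketched in miniature: secrets stay behind a boundary, and untrusted code can only invoke declared entry points that use the secret without revealing it. The class below is a conceptual model of that trust boundary, not the real Intel SGX SDK API; in actual SGX, the secret lives in hardware-encrypted enclave memory that even the OS or hypervisor cannot read, and the entry points correspond to ECALLs.

```python
import hmac
import hashlib

class EnclaveModel:
    """Conceptual model of an SGX-style trust boundary (illustrative
    only -- NOT the SGX SDK API). The secret is held by trusted code;
    callers outside the boundary can only use declared entry points."""

    def __init__(self, secret_key: bytes):
        # In real SGX, this secret would reside in protected enclave
        # memory, inaccessible to privileged software.
        self._secret_key = secret_key

    def sign(self, message: bytes) -> bytes:
        # Trusted entry point: computes with the secret but never
        # returns or exposes the secret itself.
        return hmac.new(self._secret_key, message,
                        hashlib.sha256).digest()

enclave = EnclaveModel(b"sealed-demo-key")
tag = enclave.sign(b"edge workload request")
assert len(tag) == 32
```

The key point the model captures is directional trust: code inside the boundary treats everything outside as untrusted, while outside code gets results, never the secret.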

 

Hardware Acceleration Extends AI’s Potential

AI/deep learning is a crucial enabler for generating insights from data, and running these algorithms at the network edge enables usages that would otherwise be untenable. For example, video feeds can be analyzed in real time on edge systems, so that only telemetry metadata and summary reports need to be transmitted over the wire. Crunching massive volumes of sensor data from an industrial facility can help detect unsafe conditions and respond to them by alerting safety personnel or shutting down equipment with the millisecond latency needed to help prevent on-the-job death or injury.

As new capabilities for edge-based AI/deep learning have emerged and become more complex in recent years, the associated workloads have become more demanding. Today, high throughput for AI/deep learning algorithms is a key requirement for mainstream edge-computing implementations. Built-in hardware features accelerate these workloads, increasing the amount of throughput possible for sophisticated operations.

The Intel Xeon D-2700 and D-1700 processors help businesses meet their computing objectives using AI/deep learning by increasing inferencing performance for implementations such as networking, security, and image/video analytics. Advances in per-core computational throughput and acceleration technologies provide up to a 2.4x generation-to-generation improvement in AI inference performance on Intel Xeon D-2700 and D-1700 processor networking SKUs.1

The processors include Intel® Deep Learning Boost (Intel® DL Boost), a technology that reduces unneeded precision in the calculations made by AI/deep learning algorithms, so they can be completed with less work. Intel DL Boost delivers the performance and versatility to run AI and machine learning side by side with 5G network workloads. As a result, CoSPs can maximize their ROI on infrastructure while improving network performance.
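The "reduced precision" idea can be shown with a small worked example: weights and activations are quantized from FP32 to INT8, the dot product is accumulated in integer arithmetic, and the result is scaled back, closely approximating the full-precision answer with far cheaper operations. The sketch below is a minimal illustration of that scheme, not Intel DL Boost itself; the hardware collapses the integer multiply-accumulate pattern shown here into single vector instructions.

```python
def quantize(values, scale):
    # Map FP32 values to INT8 -- the reduced precision that
    # Intel DL Boost operates on -- clamping to [-128, 127].
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a, b):
    # Integer multiply-accumulate: the core inference operation
    # that the hardware fuses into a single vector instruction.
    return sum(x * y for x, y in zip(a, b))

# Toy layer: FP32 weights and activations (illustrative values).
weights = [0.12, -0.53, 0.91, 0.07]
activations = [0.40, 0.25, -0.10, 0.88]
w_scale, a_scale = 0.01, 0.01

qw = quantize(weights, w_scale)
qa = quantize(activations, a_scale)

# Accumulate in integers, then rescale back to FP32.
approx = int8_dot(qw, qa) * w_scale * a_scale
exact = sum(x * y for x, y in zip(weights, activations))
assert abs(approx - exact) < 0.01
```

Because inference results tolerate this small quantization error, the trade buys large gains in throughput per watt, which is exactly what edge deployments need.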

Intel Xeon D processors are a strong choice for AI workloads even beyond performance considerations. Integrating AI into the network function, rather than running workloads on separate hardware, can help reduce latency. Our cloud-native, API-driven architecture enables real-time automated service assurance, and our hardware offers capabilities to measure the resources that impact packet loss, throughput, and latency. Additionally, Intel facilitates the entire data pipeline that enables AI in the network, including data collection, processing, and transformation.

 

Into the Edge-Enabled Future of Enterprise

Intel Xeon D-2700 and D-1700 processors are purpose-built to deliver leadership performance, security, and AI/deep learning at the network edge, while meeting demanding power and thermal constraints. The SoC package combines breakthrough compute throughput and power efficiency with integrated Intel Ethernet and acceleration for AI, security, and compression functions. The platform’s broad-based innovation includes cutting-edge platform features such as Intel QAT, Intel DL Boost, and Intel SGX that enable new use cases and capabilities for key edge applications such as SASE, 5G VRAN, and private wireless, among many others.

 

Learn more in the infographic:
Intel® Xeon® D Processors: Built for the Network Edge

1   See [9] at https://edc.intel.com/content/www/us/en/products/performance/benchmarks/intel-xeon-d-processors/. Results may vary.

About the Author
Rajesh Gadiyar, Chief Technology Officer (CTO) for the Network & Custom Logic Group, leads the Architecture & Systems Engineering organization. He is focused on delivering a scalable and efficient architecture for next generation communications platforms. He leads the architecture efforts to accelerate Network Function Virtualization (NFV) including 5G infrastructure, Edge Cloud, and AI in Networking. Rajesh has a B.S. in Electronics and Telecommunications engineering from National Institute of Technology, Trichy, India and an MBA from UCLA Anderson School of Management.