Business Advantages of Intel Xeon Scalable Processors for AI
- Most business AI use cases do not require graphics processing units (GPUs), and porting data from a CPU to a GPU can introduce latency into your AI applications. Moreover, Intel has heavily optimized Intel Xeon Scalable processors to speed up the math-heavy operations involved with running AI models in production environments.
- Building AI capabilities on the same infrastructure that the rest of your business uses enables your AI developers and administrators to use the skills they already have and to tap the rich software and developer ecosystem that exists for Intel Xeon Scalable processors.
- Intel® Optane™ persistent memory (PMem) and Intel Optane solid state drives (SSDs) enable a new data layer between your data pipeline and storage resources, accelerating every step of your AI workflow.
See if this sounds familiar:
You’ve got a data center to keep running, …
… with mandates from senior management to support more artificial intelligence (AI) efforts, …
… and data scientists clamoring for more compute power.
You know that AI is maturing, but you also know that there are a thousand different ways to implement it. Which way you choose to move forward can have a big impact on your data center now and down the road.
Tech consultancy Omdia recently published a paper, “Implications for investing in a new microprocessor: essential checklist,” on the role that the choice of microprocessor—particularly the CPU—plays in the total cost of ownership (TCO) for AI workloads. One insight from Omdia’s paper is that powerful CPUs can help you get more done to meet your objectives for AI. With the vast majority of AI inference running on Intel® CPUs in the world’s data centers today, Intel Xeon Scalable processors are well suited to fill that role and help you make the most of your AI initiatives.
The Omdia paper lays out 11 factors that influence how the choice of microprocessor can impact your costs for AI implementations. Intel Xeon Scalable processors can help with all of these factors, but three in particular stand out as areas where these processors can benefit you the most:
- Software-stack maturity
- Memory efficiency
- Total cost of ownership
1. Software-Stack Maturity
“This is possibly the criterion most likely to [challenge] evaluators, as they assess a microprocessor and neglect to consider the implications for their developers.”
The performance (and cost) of a processor matters little if it cannot support the software you need it to run. This applies not only to the languages your developers and data scientists use to write applications (think C, C++, and Python), but also to your developers’ development and testing tools and the wider ecosystem of libraries and frameworks with which AI applications need to interface, such as TensorFlow, PyTorch, and Apache MXNet.
When you consider options for supporting AI applications in your data center, keep in mind the flexibility of Intel Xeon Scalable processors, coupled with the maturity of the Intel software stack. Intel Xeon Scalable processors support the wide range of languages and tools that your developers know and rely upon, backed by years of community support and one of the broadest developer ecosystems in the world. Intel has also heavily optimized these processors to speed up the math-heavy operations involved in running AI models in production environments (known as inferencing). And future 3rd Generation Intel Xeon Scalable processors will include built-in acceleration to provide up to a 60 percent increase in training performance over the previous generation.
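To make the inferencing optimizations concrete: Intel Deep Learning Boost (VNNI) accelerates the int8 multiply-accumulate operations at the heart of quantized inference. The sketch below illustrates the underlying arithmetic in plain Python—symmetric int8 quantization with int32 accumulation. The scale values and vectors are illustrative only, not part of any Intel API.

```python
# Illustration of the int8 quantization scheme that hardware features like
# Intel Deep Learning Boost (VNNI) accelerate. Pure Python, for clarity only.

def quantize(values, scale):
    """Map float values to int8 (symmetric quantization, zero-point 0)."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product computed on int8 inputs with int32 accumulation,
    then dequantized back to a float result."""
    qa, qb = quantize(a, scale_a), quantize(b, scale_b)
    acc = sum(x * y for x, y in zip(qa, qb))  # fits in an int32 accumulator
    return acc * scale_a * scale_b

# Example: small weight/activation vectors (arbitrary illustrative values)
a = [0.5, -1.2, 0.8]
b = [1.0, 0.4, -0.3]
approx = int8_dot(a, b, scale_a=0.01, scale_b=0.01)
exact = sum(x * y for x, y in zip(a, b))
```

The design point is that the inner loop runs entirely on 8-bit integers, which is exactly the operation VNNI fuses into a single instruction; the float scales are applied only once, at the end.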
For specifics on AI optimizations in 2nd Generation Intel Xeon Scalable processors, see these links:
- Using Intel Xeon Scalable processors for AI inferencing with Intel® Deep Learning Boost: intel.ai/vnni-enables-inference/
- TensorFlow on Intel Xeon Scalable processors best practices: intel.ai/best-practices-for-tensorflow-on-intel-xeon-processor-based-hpc-infrastructures/
- Intel Distribution of OpenVINO™ Toolkit: https://software.intel.com/en-us/openvino-toolkit
- Performance enhancement to AI training with 3rd Generation Intel Xeon Scalable processors: https://newsroom.intel.com/news-releases/intel-ces-2020/
2. Memory Efficiency
“New generation architectures aim to reduce the delay in getting data to the processing point. … The driver is always reducing latency, typically for processing streaming big data to analytic engines and AI workloads. Increasingly in-memory processing capability is an essential feature.”
Intel Xeon Scalable processors support larger memory capacity than many other AI accelerators, which can reduce training bottlenecks by enabling larger models to be stored in memory. In addition, Intel has released Intel Optane technology, the first all-new class of memory and storage in 25 years. Intel Optane persistent memory allows you to affordably expand a large pool of memory closer to the CPU, further boosting both training and inference performance.
Along with a new memory class and support for more memory on 2nd Generation Intel Xeon Scalable processors, Intel Optane SSDs enable a new layer between the data pipeline and storage resources that accelerates every step of the AI workflow, from ingestion to inference. This layer keeps compute fed throughout the pipeline and accommodates small block sizes to improve model accuracy.
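One common pattern that larger memory tiers enable is memory-mapping a dataset so the operating system pages data in on demand, rather than copying the whole file into process memory before training can start. The toy sketch below uses Python's standard-library `mmap` to illustrate the idea; the file contents and sizes are stand-ins, not a real data pipeline.

```python
import mmap
import os
import tempfile

# Create a small stand-in "dataset" file (1 MiB of repeating bytes).
path = os.path.join(tempfile.mkdtemp(), "dataset.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 4096)

# Memory-map the file: slices are served by the OS page cache (or a
# persistent-memory tier) on demand, so compute stays fed without a
# full up-front load of the file into DRAM.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        batch = mm[0:1024]     # read one "batch" as a plain slice
        record = mm[256:260]   # random access is equally cheap
```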
To learn more, explore Intel’s resources on Intel Optane persistent memory and Intel Optane SSDs.
3. Total Cost of Ownership
“The final part of the assessment is where all the previous strands are brought together and factored in as cost and added to the cost of the chip itself. The total cost of ownership, especially for internet-scale applications, will provide the real-world cost of using the chip.”
With the hardware updates and software optimizations in Intel Xeon Scalable processors, many businesses can run AI at the performance levels they need on the Intel architecture their IT administrators already know. Most businesses do not run the kinds of deep learning applications that require GPUs, and porting data from CPU to GPU and back introduces latency overhead that you can avoid by running AI workloads on CPUs. Getting more out of your existing infrastructure for AI, including Intel Xeon Scalable processors, lets you build on the hardware and software skills your IT organization already has. All of this can help reduce overall TCO.
Build for the Future
There are lots of ways to support AI in your data center, some of them more exotic than others. Discussions about data center architecture for AI often turn to accelerators, which certainly have their place, particularly for specialized AI applications. But most companies do not have the specialized AI needs that accelerators cater to. As you take stock of your company’s IT needs for AI in the data center, keep in mind you can get more done with Intel Xeon Scalable processors than you think. And by using industry-standard CPUs as the backbone for your AI efforts, you will also be laying the foundation for a data center that can pivot to meet changing business needs in the future.
 Intel. January 2020. “2020 CES: Intel Brings Innovation to Life with Intelligent Tech Spanning the Cloud, Network, Edge and PC.” https://newsroom.intel.com/news-releases/intel-ces-2020