
Simple Tips to Unlock Performance with Open-Source AI Software

Jack_Erickson

AI development and inference don’t always require the latest dedicated AI accelerators. CPUs are often more readily available because they serve a wide variety of workloads, and they offer advantages in data processing and memory bandwidth. Data centers and public clouds typically contain multiple generations of CPUs and GPUs, so the ability to utilize a wide range of devices unlocks more compute resources. Or you might be developing or running AI on a PC, where performance and energy efficiency are intertwined. In every case, demanding AI workloads mean your software needs to extract the full capabilities of whatever hardware you have.

Intel optimizes open-source AI development software for the variety of hardware devices it offers. These optimizations come at multiple levels of abstraction. oneAPI libraries deliver building blocks optimized for the underlying hardware, so framework developers don’t have to rewrite optimized kernels for every device. Intel also actively contributes higher-level optimizations directly to open-source AI tools and frameworks.

In addition to optimizing the open-source software you’re already using, Intel offers free extensions that deliver the newest optimizations, features, and hardware support before they get incorporated into the open-source offerings. So, if you want even more AI performance from your Intel hardware, or if you want to run on hardware not yet supported by your open-source framework, you can plug in an extension with a couple of lines of code. Below is a quick reference for getting the most out of your hardware when using popular open-source AI software.

PyTorch*

Intel is a premier member of the PyTorch Foundation and actively contributes optimizations upstream, so the easiest way to get them is to stay current with the latest PyTorch release. Intel® Extension for PyTorch* adds features such as LLM-specific optimizations, graph and operator optimizations, and support for Intel's latest GPUs. Just install this open-source extension and add a couple of lines of code, as shown in the video below.
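
For reference, here is a minimal sketch of what that couple of lines looks like for CPU inference. The toy model and input shapes are placeholders, and the bfloat16 setting assumes hardware with BF16 support:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # pip install intel-extension-for-pytorch

# Placeholder model; substitute your own trained model
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

# The added line: let the extension apply operator and graph optimizations
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under CPU autocast so bfloat16 kernels are used
with torch.no_grad(), torch.cpu.amp.autocast():
    output = model(torch.randn(1, 128))
```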

Learn More: PyTorch Optimizations from Intel

TensorFlow*

In open-source TensorFlow versions 2.5 through 2.8, enabling the Intel® oneAPI Deep Neural Network Library (oneDNN) optimizations required setting an environment variable, as shown in the video below. Starting with TensorFlow 2.9, the oneDNN optimizations are on by default. Intel® Extension for TensorFlow* adds Intel GPU support plus operator and graph optimizations with no code changes required; just install the open-source extension.
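
For those older versions, the environment variable can be set in the shell or in Python before TensorFlow is imported; a minimal sketch (on 2.9 and later this flag is unnecessary):

```python
import os

# TensorFlow 2.5-2.8 only: turn on the oneDNN optimizations.
# From TensorFlow 2.9 onward this is the default behavior.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf  # import *after* setting the variable
print(tf.__version__)
```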

Learn More: TensorFlow Optimizations from Intel

scikit-learn*

scikit-learn is a popular open-source machine learning and data analytics library. Intel® Extension for Scikit-learn* is a free plug-in library that accelerates compute-intensive scikit-learn algorithms by 10-100x on CPUs and GPUs. Because this software accelerator utilizes the underlying hardware more efficiently, it can also reduce energy usage. Just install the extension and add a couple of lines of code, as shown in the video below.
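
A minimal sketch of those two lines: patch before importing the estimators you use. The KMeans workload and random data below are purely illustrative:

```python
from sklearnex import patch_sklearn  # pip install scikit-learn-intelex
patch_sklearn()  # must run before the scikit-learn imports below

# Existing scikit-learn code stays unchanged; it now dispatches to the
# accelerated implementations where available.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100_000, 8)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(X)
```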

Learn More: Intel® Extension for Scikit-learn*

XGBoost*

Intel has contributed optimizations to XGBoost tree-based training methods since version 0.81 and began contributing inference optimizations with version 1.3.1. These optimizations, along with those from the Intel® oneAPI Data Analytics Library (oneDAL), are built into open-source XGBoost, so all you need to do is use the newest version of XGBoost to get the latest improvements. You can see the progress in the video below!
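
No code changes are needed to benefit; a quick sketch on synthetic data, assuming a recent XGBoost (the 'hist' tree method is a commonly optimized CPU training path):

```python
import numpy as np
import xgboost as xgb  # pip install --upgrade xgboost

# Synthetic binary-classification data, purely illustrative
X = np.random.rand(100_000, 20)
y = np.random.randint(2, size=100_000)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "tree_method": "hist"}
booster = xgb.train(params, dtrain, num_boost_round=50)
```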

Learn More: XGBoost Optimizations from Intel

Modin*

If you’re looking to speed up pandas* DataFrame processing by distributing your workload across all of your available processors, Modin is a drop-in replacement that is seeing rapid adoption. As shown in the video below, just install Modin and change one line of code, the import, and your existing pandas commands are accelerated. Alternatively, you can install Intel® Distribution of Modin*, which includes open-source Modin with added optimizations specific to Intel hardware.
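
That one line is the import swap; a minimal sketch, where the CSV path and column name are hypothetical stand-ins for your own data:

```python
import modin.pandas as pd  # was: import pandas as pd

# Everything downstream is unchanged; Modin distributes the work across
# available cores via its execution engine (e.g., Ray or Dask).
df = pd.read_csv("data.csv")            # hypothetical file
print(df.groupby("category").mean())    # hypothetical column name
```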

Learn More: Intel® Distribution of Modin*

Python*

While Python has become extremely popular, its performance for production deployment is limited because it is an interpreted language. Intel® Distribution for Python* is a high-performance binary distribution that includes data-parallel implementations of compute-intensive numerical packages such as NumPy, SciPy, and Numba*. The video below shows how to get started with NumPy in a couple of lines of code.
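
Once the distribution is installed, existing NumPy code runs unchanged on the optimized implementation; a minimal sketch (the conda command reflects one common install path and is an assumption; check the current documentation):

```python
# Install once, e.g.:  conda create -n idp intelpython3_full -c intel
import numpy as np

a = np.random.rand(2048, 2048)
b = np.random.rand(2048, 2048)
c = a @ b  # matrix multiply dispatches to the optimized math libraries
print(c.shape)
```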

Learn More: Intel® Distribution for Python*

Software at all levels of the stack can unlock the full capabilities of the underlying hardware. The frameworks shown here are just a few open-source AI projects to which Intel contributes optimizations and extensions. To get the most performance from whatever hardware you’re running your AI jobs on, check out the full end-to-end suite of AI development tools, libraries, and frameworks optimized by Intel.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
* Other names and brands may be claimed as the property of others.

About the Author
Technical marketing manager for Intel AI/ML products and solutions. Prior to joining Intel, I spent 7.5 years at MathWorks in technical marketing for the HDL product line, and 20 years at Cadence Design Systems in various technical and marketing roles for synthesis, simulation, and other verification technologies.