
Accelerating PyTorch on Intel with DirectML support

GurmanSingh

Authors: Szymon Marcinkowski and Hariharan Srinivasan

PyTorch is one of the most popular frameworks among AI developers. However, seamless interoperability across GPU hardware vendors has become paramount. Microsoft, together with Intel, has recognized this and continues to invest in the DirectML plugin, which enables PyTorch code to work at scale. Intel is proud to expand support for PyTorch with the DirectML backend and sees it as an emerging and promising solution: a unified platform for AI development on Windows that transcends hardware vendor boundaries. This support extends to all Intel® Arc™ Graphics and Intel® Iris® Xe Graphics GPUs.

Introduction to Intel’s PyTorch Support with DirectML

PyTorch, known for its dynamic computational graphs and ease of use, has attracted a large following among AI developers and researchers. Intel's collaboration with Microsoft expands DirectML plugin support for PyTorch, enabling even more inferencing scenarios, starting with Intel® Arc™ Graphics and Intel® Iris® Xe Graphics.

Integrating the DirectML backend into an existing PyTorch codebase is a straightforward process. Here is a brief overview of how easy it is to transition traditional PyTorch code to leverage DirectML.

    1. Installation and setup: Begin by ensuring you have the necessary prerequisites installed, including the latest version of the torch-directml plugin (https://pypi.org/project/torch-directml/).
    2. Backend configuration: After installing the DirectML backend, configuring PyTorch is as simple as creating a DML device:

import torch
import torch_directml
dml = torch_directml.device()

    3. Code adaptation: In most cases, existing PyTorch code will require minimal modifications to use the DirectML backend. Ensure that any custom layers or operations used in your models are compatible with DirectML; please refer to the Operator Roadmap for the list of all currently supported operators.
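The three steps above can be sketched in a few lines. Note that torch-directml is a Windows-only package; the CPU fallback below is our own assumption so the sketch also runs where the plugin is unavailable, and is not part of the plugin itself:

```python
import torch

# torch-directml is Windows-only; fall back to CPU elsewhere so this
# sketch still runs (the fallback is an assumption for illustration).
try:
    import torch_directml
    device = torch_directml.device()
except ImportError:
    device = torch.device("cpu")

# Place tensors (or a whole model, via model.to(device)) on the device.
a = torch.tensor([1.0, 2.0, 3.0], device=device)
b = torch.tensor([4.0, 5.0, 6.0], device=device)

# Compute on the selected device, then copy back to host memory.
result = (a + b).cpu()
print(result.tolist())  # [5.0, 7.0, 9.0]
```

The only change relative to ordinary CPU-side PyTorch code is where the tensors and model live; the operations themselves are unchanged.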

By following these steps, developers can transition existing PyTorch codebases to use the DirectML backend, unlocking the performance of Intel® Arc™ GPUs in PyTorch applications. Please refer to the examples repository for more details about integrating DirectML into your PyTorch code. Intel is proud to announce that our latest drivers support these models on integrated GPUs starting with 11th Gen Intel® Core™ processors, as well as on Intel® Arc™ Graphics discrete GPUs.

By tapping the computational power of DirectML, users can now harness the potential of PyTorch across a diverse range of existing GPU devices, from laptops to desktops, without the need for specialized hardware or costly upgrades.

PyTorch-DirectML Small and Large Language Model Support with the Intel® Arc™ A770

The integration of LLM support in PyTorch with DirectML opens exciting possibilities but also poses challenges. LLMs with up to 7 billion parameters, such as Llama2 and Mistral, as well as SLMs such as Phi-2 and Phi-3 mini, are supported.

These models require massive amounts of memory to store parameters, embeddings, and intermediate activations during inference. The Intel® Arc™ A770, equipped with 16GB of VRAM, is ideally suited to meet the memory demands of these LLMs, ensuring a smooth experience.
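A rough back-of-the-envelope calculation (our own arithmetic, not an official sizing guide) shows why 16GB is a comfortable fit: the weights alone of a 7-billion-parameter model at 16-bit precision occupy about 14GB, before counting activations or the KV cache.

```python
# Rough VRAM estimate for model weights alone.
# Activations and the KV cache add further overhead on top of this.
params = 7e9          # 7 billion parameters (e.g. a Llama2-7B-class model)
bytes_per_param = 2   # fp16 / bf16 precision stores 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB for weights")  # ~14 GB for weights
```

Quantizing to lower precision (e.g. 4-bit) shrinks this footprint substantially, which is how even larger models can be made to fit in the same VRAM budget.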

Intel® Graphics Driver Support

The Intel® Graphics Driver (version 31.0.101.5522 or later) supports PyTorch with DirectML on a broad spectrum of integrated GPUs starting with 11th Gen Intel® Core™ processors, as well as Intel® Arc™ Graphics discrete GPUs. This driver includes targeted optimizations for LLMs, enabling these models to run on Intel® Arc™ A770 16GB graphics cards.

What’s coming?

Intel, in partnership with Microsoft, is committed to extending PyTorch with DirectML support in both functionality and performance. Stay tuned for upcoming Intel graphics driver updates.