Did you miss the PyTorch* Conference held this September? Intel was a Diamond Sponsor of the conference and contributed multiple talks. In this blog, we summarize three talks given by Intel and congratulate the winner and nominee of a PyTorch Contributor Award, both from Intel.
Enabling AI Everywhere with PyTorch - Kismat Singh, VP of Engineering for AI Frameworks at Intel
In this keynote, Kismat discusses the progress Intel has made in enabling PyTorch and in building open solutions. He shares a significant announcement: Intel client GPUs will be supported in PyTorch 2.5, enabling developers to run PyTorch on PCs and laptops built with the latest generations of Intel processors. He also discusses the progress of PyTorch support on Intel® Gaudi® AI accelerators, with PyTorch core and PyTorch ecosystem libraries now available for Gaudi. Lastly, he reviews the Open Platform for Enterprise AI (OPEA), a PyTorch-based project contributed by Intel to Linux Foundation AI & Data, which aims to simplify enterprise generative AI adoption and reduce the time to production of hardened, trusted solutions.
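For developers who want to try the new client GPU support, a minimal smoke test might look like the following sketch. This is a hypothetical example, not from the keynote; it assumes PyTorch 2.5+ built with XPU support and current Intel GPU drivers installed:

```python
import torch

# Hypothetical smoke test for Intel client GPU ("xpu") support,
# assuming PyTorch 2.5+ with XPU enabled and Intel GPU drivers installed.
if torch.xpu.is_available():
    x = torch.randn(1024, 1024, device="xpu")
    y = torch.randn(1024, 1024, device="xpu")
    z = x @ y  # the matmul executes on the Intel GPU
    print(z.device)  # xpu:0
else:
    print("No Intel GPU (XPU) detected.")
```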
Watch the full recording.
TorchInductor CPU Backend Advancements: New Features and Performance Improvements - Jiong Gong, Leslie Fang, Intel
In this talk, Jiong starts by presenting community issue-resolution stats: 12 of 12 high-priority issues and 138 of 180 total issues resolved. He then reviews performance speedup trends for several data types in eager mode. He takes a technical deep dive into the new features in the TorchInductor CPU backend, including:
- Hardening C++ vectorized codegen (beta)
- Max-autotune with C++ GEMM template (prototype); see the usage sketch below
He also mentions that Windows support will be available as a prototype feature in PyTorch 2.5, handling Windows-specific details such as the long type, file path separators, and compiler standards.
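Opting into these Inductor CPU features from user code is a one-line change. The following minimal sketch (with a hypothetical function and shapes, not taken from the talk) assumes PyTorch 2.5+ and a C++ compiler available for Inductor's CPU codegen:

```python
import torch

# Hypothetical example: "max-autotune" lets Inductor benchmark candidate
# kernels, including the prototype C++ GEMM template on CPU, and keep
# the fastest one. Assumes PyTorch 2.5+ and a working C++ toolchain.
def linear_block(x, w, b):
    return torch.relu(x @ w + b)

compiled = torch.compile(linear_block, mode="max-autotune")

x = torch.randn(256, 512)
w = torch.randn(512, 128)
b = torch.randn(128)
out = compiled(x, w, b)  # first call triggers compilation and autotuning
```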
Watch the full recording and download the slides.
Intel GPU in Upstream PyTorch: Expanding GPU Choices and Enhancing Backend Flexibility - Eikan Wang, Min Jean Cho, Intel
In this talk, Eikan presents the Intel client GPU support that is in PyTorch 2.5. (Note: In PyTorch 2.4, Intel introduced initial support for the Intel Data Center GPU Max Series in stock PyTorch through source builds.) He discusses the four-pronged approach to implementing this support in 2.5:
- Runtime: Implement torch runtime APIs on top of the Intel GPU runtime API set
- Eager: Implement ATen operations in SYCL and with oneAPI libraries
- torch.compile: Implement an Inductor backend for Intel GPU on top of Triton
- Distributed: Implement torch distributed for Intel GPU on top of the Intel oneCCL library
Where applicable, he also shows code examples for enabling GPU support.
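In that spirit, here is a minimal, hypothetical sketch (not taken from the talk) of what the eager and torch.compile prongs look like from a user's perspective. It assumes PyTorch 2.5+ built with XPU support and Intel GPU drivers installed:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: requires PyTorch 2.5+ with XPU support and
# Intel GPU drivers. "xpu" is the device string for Intel GPUs.
model = nn.Sequential(nn.Linear(512, 512), nn.GELU()).to("xpu")

# torch.compile routes through the Inductor backend, which generates
# Triton kernels for the Intel GPU (the torch.compile prong above).
compiled_model = torch.compile(model)

inp = torch.randn(64, 512, device="xpu")  # eager tensor on the Intel GPU
with torch.no_grad():
    out = compiled_model(inp)
print(out.shape)  # torch.Size([64, 512])
```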
Watch the full recording and download the slides.
And now to announce the Intel winner and nominee of the PyTorch Contributor Awards. Drum roll, please!
Congratulations to Jiong Gong of Intel on receiving a PyTorch 2024 Contributor Award and being designated a PyTorchbearer! Jiong is a software architect at Intel who works on PyTorch framework optimizations. He is the PyTorch module maintainer for CPU and compiler, and the newly elected Vice Chair of the PyTorch Foundation's Technical Advisory Council (TAC).
Congratulations to Leslie Fang of Intel on being nominated for a PyTorch 2024 Contributor Award! Leslie is a software engineer at Intel who has worked on PyTorch performance optimization on x86 servers for the past four years. He currently focuses on quantization, autocast, and the Inductor CPP/OpenMP backend in stock PyTorch. The nomination is a great acknowledgement of his contributions to PyTorch.
Mark your calendars for the 2025 PyTorch Conference, October 22-23, 2025 in San Francisco.