Your compile_bundle.bat for Intel Extension for PyTorch
produces incompatible products that can't work together.
Your script builds a PyTorch
that can't be installed
with your Intel Extension for PyTorch.
I have fought since August 2024 to get here.
Why won't you just give us compatible binaries???
There are millions of us who bought your
12th-gen CPUs with integrated Xe GPUs
-- why won't you help us?
Why only the data center cards?
pip install C:\ipex\dist\torchvision-0.21.0+7af6987-cp311-cp311-win_amd64.whl C:\ipex\dist\intel_extension_for_pytorch-2.6.10+gitaa2f41c-cp311-cp311-win_amd64.whl C:\ipex\dist\torch-2.6.0a0+git1eba9b3-cp311-cp311-win_amd64.whl C:\ipex\dist\torchaudio-2.6.0a0+d883142-cp311-cp311-win_amd64.whl
Processing c:\ipex\dist\torchvision-0.21.0+7af6987-cp311-cp311-win_amd64.whl
Processing c:\ipex\dist\intel_extension_for_pytorch-2.6.10+gitaa2f41c-cp311-cp311-win_amd64.whl
Processing c:\ipex\dist\torch-2.6.0a0+git1eba9b3-cp311-cp311-win_amd64.whl
Processing c:\ipex\dist\torchaudio-2.6.0a0+d883142-cp311-cp311-win_amd64.whl
Requirement already satisfied: numpy in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torchvision==0.21.0+7af6987) (2.2.4)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torchvision==0.21.0+7af6987) (11.1.0)
Requirement already satisfied: psutil in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-extension-for-pytorch==2.6.10+gitaa2f41c) (7.0.0)
Requirement already satisfied: packaging in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-extension-for-pytorch==2.6.10+gitaa2f41c) (24.2)
Requirement already satisfied: pydantic in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-extension-for-pytorch==2.6.10+gitaa2f41c) (2.11.2)
Requirement already satisfied: ruamel.yaml in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-extension-for-pytorch==2.6.10+gitaa2f41c) (0.18.6)
Collecting dpcpp-cpp-rt==2025.0.4 (from intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached dpcpp_cpp_rt-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.3 kB)
Collecting mkl-dpcpp==2025.0.1 (from intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached mkl_dpcpp-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.8 kB)
Collecting intel-openmp==2025.0.4 (from dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_openmp-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.3 kB)
Collecting intel-opencl-rt==2025.0.4 (from dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_opencl_rt-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Collecting intel-sycl-rt==2025.0.4 (from dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_sycl_rt-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.6 kB)
Collecting mkl==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached mkl-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.4 kB)
Collecting onemkl-sycl-blas==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_blas-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-lapack==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_lapack-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-dft==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_dft-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-sparse==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_sparse-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-vm==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_vm-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-rng==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_rng-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-stats==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_stats-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting onemkl-sycl-datafitting==2025.0.1 (from mkl-dpcpp==2025.0.1->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached onemkl_sycl_datafitting-2025.0.1-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Requirement already satisfied: tbb==2022.* in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-opencl-rt==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c) (2022.1.0)
Collecting intel-cmplr-lic-rt==2025.0.4 (from intel-opencl-rt==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_cmplr_lic_rt-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Collecting intel-cmplr-lib-ur==2025.0.4 (from intel-openmp==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_cmplr_lib_ur-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.3 kB)
Collecting intel-cmplr-lib-rt==2025.0.4 (from intel-sycl-rt==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c)
Using cached intel_cmplr_lib_rt-2025.0.4-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Requirement already satisfied: umf==0.9.* in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from intel-cmplr-lib-ur==2025.0.4->intel-openmp==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c) (0.9.1)
Requirement already satisfied: tcmlib==1.* in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from tbb==2022.*->intel-opencl-rt==2025.0.4->dpcpp-cpp-rt==2025.0.4->intel-extension-for-pytorch==2.6.10+gitaa2f41c) (1.2.0)
Requirement already satisfied: filelock in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torch==2.6.0a0+git1eba9b3) (3.18.0)
Requirement already satisfied: typing-extensions>=4.10.0 in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torch==2.6.0a0+git1eba9b3) (4.13.1)
Requirement already satisfied: networkx in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torch==2.6.0a0+git1eba9b3) (3.4.2)
Requirement already satisfied: jinja2 in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torch==2.6.0a0+git1eba9b3) (3.1.6)
Requirement already satisfied: fsspec in c:\programdata\miniforge3\envs\intel_win\lib\site-packages (from torch==2.6.0a0+git1eba9b3) (2025.3.2)
INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install dpcpp-cpp-rt and torch==2.6.0a0+git1eba9b3 because these package versions have conflicting dependencies.
The conflict is caused by:
intel-sycl-rt 2025.0.4 depends on intel-cmplr-lib-rt==2025.0.4
torch 2.6.0a0+git1eba9b3 depends on intel-cmplr-lib-rt==2025.0.2
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
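The conflict in the log above is concrete: the torch wheel pins intel-cmplr-lib-rt==2025.0.2 while IPEX's dpcpp-cpp-rt chain pins ==2025.0.4. Since a wheel is just a zip archive, the conflicting pins can be spotted before pip's resolver fails, by reading each wheel's METADATA. A minimal sketch (the helper name is illustrative, not part of any official tooling):

```python
# Sketch: a wheel is a zip archive whose *.dist-info/METADATA file lists
# "Requires-Dist" lines. This helper reports the exact version each wheel
# pins for a given package (e.g. intel-cmplr-lib-rt), so conflicting pins
# across locally built wheels can be found up front.
import re
import zipfile

def pinned_versions(wheel_paths, package="intel-cmplr-lib-rt"):
    """Map wheel path -> exact version it pins for `package` (if any)."""
    pat = re.compile(rf"Requires-Dist:\s*{re.escape(package)}\s*==\s*([\w.]+)")
    pins = {}
    for whl in wheel_paths:
        with zipfile.ZipFile(whl) as z:
            # every wheel carries exactly one */METADATA entry
            meta = next(n for n in z.namelist() if n.endswith("METADATA"))
            m = pat.search(z.read(meta).decode("utf-8", "replace"))
            if m:
                pins[whl] = m.group(1)
    return pins

# Usage against the wheels from the log above, e.g.:
# pinned_versions([r"C:\ipex\dist\torch-2.6.0a0+git1eba9b3-cp311-cp311-win_amd64.whl"])
```

If two wheels report different versions for the same runtime package, no single `pip install` invocation can satisfy both, which is exactly the ResolutionImpossible shown above.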
Hi Evstratios,
Thank you for sharing your experience here; we have escalated the related issues to the developer team. We keep upstreaming Intel IPEX into PyTorch, and PyTorch 2.7 for XPU was recently released by the PyTorch project!
You are welcome to use it directly, following "Getting Started on Intel GPU" in the PyTorch main documentation:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
Please note that under "Supported by prebuilt binaries", some client CPUs with integrated XPUs are listed. For CPU-only use:
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
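After installing the prebuilt xpu wheels above, a quick sanity check is `torch.xpu.is_available()`, which recent PyTorch releases expose for Intel GPU support. A hedged sketch that degrades gracefully when torch is absent or built without XPU support:

```python
# Sketch: report whether an XPU-capable torch is importable and whether it
# detects a usable Intel GPU. Safe to run even without torch installed.
def xpu_status():
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if not hasattr(torch, "xpu"):
        return f"torch {torch.__version__} has no XPU support compiled in"
    return f"torch {torch.__version__}, xpu available: {torch.xpu.is_available()}"

print(xpu_status())
```

If this reports `xpu available: False` even after a clean install, the binaries do not support the installed GPU, which is the crux of the complaint in this thread.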
Your reply is the most insulting possible reply.
You didn't bother to address ANYTHING I said at all.
Oh, I should use those binaries you pointed out for me?
THOSE ARE OFFERED ON THE MAIN INSTALLATION PAGE
How thoughtful to point out the obvious
as if I missed it all along - I'm such a boob
No, you're the thoughtless boob - you know I can't use it
Your advice will work if I have a $1000+ data center card
which you know I don't have - should I jam it into my laptop?
I wouldn't be writing at all if I had that.
You didn't give what I said a second of thought
You DO support my integrated GPU --
not as ADL-P, as your web page on AOT tells us;
instead it took me SIX MONTHS to finally discover
that you support my GPU as XE-LPG.
Your code supports our GPUs individually -- but wait!
XETLA restricts the choices to three giant categories
so you're wasting our time with your AOT webpage
only to have XETLA change the rules at the end.
You support my GPU -- BUT YOU WON'T GIVE US THE BINARIES
We're left out in the cold to compile it from scratch ourselves!
Sorry we didn't buy $1000+ data center cards
because you provide binaries for those very few customers.
THE TENS OF MILLIONS OF US (MORE PROBABLY) GET NO BINARIES.
How many of us are there? is it a hundred million?
Yet you're supporting Data Center customers - are there 50 of them?
WE SHOULD GET THE BINARIES FIRST - WE'RE A GIANT FINANCIAL STAKE
or how about give out binaries to everybody!!?!!
there are only three categories - so compile it all for us!
you wrote this code, you understand it, we don't.
You know the exact compilation environment to generate IPEX binaries
but you won't tell us. You know all the libraries it needs to see to build,
you know exactly what versions too but we're left out in the cold
to guess and try it a zillion times fruitlessly.
I've been trying to get this to compile since August 2024.
The industry is moving at light-speed right on past me
BECAUSE I BOUGHT THE WRONG COMPUTER - I GOT THE LOUSY INTEL GPU.
NVIDIA can do it out of the box, AMD can do it easily too
BUT INTEL WANTS TO PUNISH AND TORTURE THEIR CUSTOMERS
well, except for those with $1000+ data center cards...
"WE'VE GOT A BED FOR YOU ALL READY. HERE'S WOOD, SPRINGS,
COTTON, SILK, THREAD, NAILS: BUILD IT YOURSELF!
But, right this way, data center customers,
you don't need to trifle like the peasants."
Hi,
Sorry for your bad experience!
oneAPI supports the ADL iGPU, so it appears in the AOT list.
Intel Extension for PyTorch (IPEX) supports client dGPUs and the GPUs built into Core CPUs (since the Core Ultra series, i.e. Meteor Lake).
ADL is not in the supported list;
that's why the binaries don't support it.
I think the installation issue is due to a mismatch between the oneAPI versions used by PyTorch and IPEX in this case.
Here is my proposal:
Option 1 (recommended):
Use two pip commands to install them one by one.
Change the installation order of PyTorch and IPEX: install IPEX first, or the reverse.
Option 2:
Build PyTorch and IPEX at the same time.
compile_bundle.bat supports building them together.
After the build finishes, PyTorch and IPEX are installed into the build environment automatically,
so you can run your test code in the build environment.
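Option 1 is terse about what "one by one" means. One plausible reading (an assumption, not Intel's documented procedure) is to install the locally built torch wheel first, then IPEX with dependency resolution disabled, so the conflicting intel-cmplr-lib-rt pins never meet in a single resolve step:

```shell
REM Hypothetical two-step install (wheel names taken from the log above).
REM Step 1: install torch, letting it pull its own oneAPI runtime pins.
pip install C:\ipex\dist\torch-2.6.0a0+git1eba9b3-cp311-cp311-win_amd64.whl

REM Step 2: install IPEX without re-resolving dependencies (--no-deps is a
REM standard pip flag); its oneAPI runtime requirements must then already
REM be present, or be installed manually afterwards.
pip install --no-deps C:\ipex\dist\intel_extension_for_pytorch-2.6.10+gitaa2f41c-cp311-cp311-win_amd64.whl
```

The trade-off of `--no-deps` is that pip no longer verifies IPEX's runtime pins at all, so a version skew can resurface later at import time instead of at install time.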
If neither works, could you tell us what the AI workload (CNN, RNN, or LLM) or AI model is in your case?
We could then provide a better solution for your case.
Thank you!
Hi,
Could you try the suggestions above and share your feedback, if possible?
Thank you!
