Hi, while working through the Optimization part of the MLOps Professional video training, I encountered the error below:
```
(pytorch) u2b13adf8e82d2463fab7667f346d3c8@idc-beta-batch-pvc-node-10:~$ python IntelPyTorch_Optimizations.py -dtype "fp32"
Traceback (most recent call last):
  File "/home/u2b13adf8e82d2463fab7667f346d3c8/intel-mlops-course/MLOps_Professional/lab5/sample/IntelPyTorch_Optimizations.py", line 15, in <module>
    import intel_extension_for_pytorch as ipex
  File "/opt/intel/oneapi/intelpython/latest/envs/pytorch-gpu/lib/python3.9/site-packages/intel_extension_for_pytorch/__init__.py", line 93, in <module>
    from .utils._proxy_module import *
  File "/opt/intel/oneapi/intelpython/latest/envs/pytorch-gpu/lib/python3.9/site-packages/intel_extension_for_pytorch/utils/_proxy_module.py", line 2, in <module>
    import intel_extension_for_pytorch._C
ImportError: /opt/intel/oneapi/intelpython/latest/envs/pytorch-gpu/lib/python3.9/site-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-gpu.so: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
```
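For context, an `undefined symbol` ImportError like this typically means the installed `intel_extension_for_pytorch` wheel was built against a different torch version than the one in the environment. A minimal sketch of the kind of check involved (the helper name and the major.minor matching rule are illustrative assumptions, not part of the IPEX API):

```python
# Sketch: IPEX wheels must match the installed torch's major.minor version;
# a mismatch (e.g. torch 1.13 with an IPEX build for 2.1) produces the
# undefined-symbol ImportError seen above.
def versions_compatible(torch_version: str, ipex_version: str) -> bool:
    """Return True if torch and IPEX share the same major.minor version."""
    torch_mm = torch_version.split("+")[0].split(".")[:2]
    ipex_mm = ipex_version.split("+")[0].split(".")[:2]
    return torch_mm == ipex_mm

print(versions_compatible("1.13.1+cpu", "2.1.0+xpu"))   # False: mismatch
print(versions_compatible("2.1.0+cpu", "2.1.100+cpu"))  # True: both 2.1
```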
Hi SantoshKV,
Thank you for reaching out to us.
We have informed the relevant team about this issue for further investigation and will update you as soon as possible.
Regards,
Athirah
Hi SantoshKV,
We just got an update from the relevant team regarding this issue.
Please share the specific part of the video where you're encountering the error so that we can investigate this issue further.
On another note, we found a similar issue on Github that might help you with the error:
https://github.com/intel/intel-extension-for-pytorch/issues/317
Regards,
Athirah
Hello Athirah,
Thank you for checking. I am attaching the screenshot files for better understanding.
I get this error in the environment itself, and also when running the JupyterLab environments launched from the "Training and Workshops" page.
Please suggest how to proceed.
Thank you
Hi SantoshKV,
Thank you for sharing the screenshots.
For clarification, does the error persist after reinstalling IPEX with the commands suggested in the Github thread shared previously?
```
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
python -m pip install --trusted-host pytorch-extension.intel.com intel_extension_for_pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
```
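After reinstalling, one small sketch for verifying what the environment actually contains (this only reports what is present and assumes nothing about which wheels were installed):

```python
import importlib.util

def installed_versions(packages=("torch", "intel_extension_for_pytorch")):
    """Report each package's version string, or None if it is not installed."""
    report = {}
    for pkg in packages:
        if importlib.util.find_spec(pkg) is None:
            report[pkg] = None
        else:
            report[pkg] = __import__(pkg).__version__
    return report

# Print what the current environment actually has installed.
for name, version in installed_versions().items():
    print(f"{name}: {version or 'not installed'}")
```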
Regards,
Athirah
Hey Athirah,
I'm not sure what is causing this, but I see the same problem everywhere; I hope you have seen the screenshots too. I get the same error with the pre-built notebooks and their preconfigured kernels as well.
Please help.
Thank you.
Hi SantoshKV,
We have informed the relevant team about this issue for further investigation and will update you as soon as possible.
Regards,
Athirah
Hello Athirah,
I think I have found a workaround. I rebuilt the libraries as described in the Github link, and also ran `pip install --pre --upgrade bigdl-llm[xpu_2.1] -f https://developer.intel.com/ipex-whl-stable-xpu` from the https://bigdl.readthedocs.io/ site. I had to do this for every Jupyter notebook in the MLOps_Professional labs: change the kernel to `base`, then follow the steps above.
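One quick way to confirm a notebook kernel really points at the environment you expect after switching to `base` (an illustrative snippet; any paths it prints will be specific to your machine):

```python
import sys

# The interpreter path reveals which conda environment the kernel uses.
print(sys.executable)
# The Python version should match the environment you selected.
print(".".join(map(str, sys.version_info[:3])))
```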
It is a lot of hassle, but at least I got it working. I hope the engineering team takes note of this.
Thank you.