Intel® Tiber Developer Cloud

IPEX Installation - ImportError: version `LIBUR_LOADER_0.10' not found

RajashekarK_Intel

Hi,

When I try to create a conda environment inside the Jupyter Notebook and install IPEX, I get the error below. It seems to be a breaking change introduced by the latest oneAPI update.

 

```bash
conda create -n <env_name> python=3.11 -y
conda activate <env_name>
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
python -m pip install intel-extension-for-pytorch==2.6.10+xpu oneccl_bind_pt==2.6.0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

PyTorch 2.6 Sanity Test

```
(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-06:~$ python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ue5c01becfd98f8b9e769a8179f45cdf/.conda/envs/xpu2.6/lib/python3.11/site-packages/torch/__init__.py", line 405, in <module>
    from torch._C import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: /mount/opt/intel/oneapi/compiler/2025.1/lib/libur_loader.so.0: version `LIBUR_LOADER_0.10' not found (required by /home/ue5c01becfd98f8b9e769a8179f45cdf/.conda/envs/xpu2.6/lib/python3.11/site-packages/torch/lib/../../../../libsycl.so.8)
```

 

I also verified with the PyTorch 2.5 XPU installation and observed the same error.

PyTorch 2.5 Sanity Test

 

```
(xpu2.5) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-06:~$ python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/ue5c01becfd98f8b9e769a8179f45cdf/.conda/envs/xpu2.5/lib/python3.11/site-packages/torch/__init__.py", line 367, in <module>
    from torch._C import *  # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: /mount/opt/intel/oneapi/compiler/2025.1/lib/libur_loader.so.0: version `LIBUR_LOADER_0.10' not found (required by /home/ue5c01becfd98f8b9e769a8179f45cdf/.conda/envs/xpu2.5/lib/python3.11/site-packages/torch/lib/../../../../libsycl.so.8)
```

 

 

Meanwhile, the default environments `pytorch-gpu` and `pytorch_2.6` are working as expected.

 

Regards,

Rajashekar

 

unrahul
Employee

Hi @RajashekarK_Intel , 

 

  • The problem:
    • The dynamic loader did find a libur_loader.so.0 file (the one from the system's oneAPI 2025.1 installation).
    • However, that file does not provide the symbol version (LIBUR_LOADER_0.10) that libsycl.so.8 needs; a quick way to confirm this is shown below.
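A quick check (using the paths from your traceback) is to compare the symbol versions the installed loader actually defines against the versions libsycl.so.8 requires:

```bash
# Symbol versions defined by the oneAPI 2025.1 loader (look for LIBUR_LOADER_0.10)
readelf -V /mount/opt/intel/oneapi/compiler/2025.1/lib/libur_loader.so.0 | grep LIBUR_LOADER

# Symbol versions that the SYCL runtime in the conda env requires
# (libsycl.so.8 resolves to $CONDA_PREFIX/lib per the traceback above)
readelf -V "$CONDA_PREFIX/lib/libsycl.so.8" | grep LIBUR_LOADER
```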

 

Looking at the release notes for the 2.6 XPU version of PyTorch, we need oneAPI 2025.0.1.

 

  • The specific error, version `LIBUR_LOADER_0.10' not found in the oneAPI 2025.1 library, suggests that oneAPI 2025.1 is likely incompatible with the IPEX 2.6.10+xpu build.

 

The release notes explicitly state that Intel® Extension for PyTorch v2.6.10+xpu is compatible with Intel® oneAPI Base Toolkit version 2025.0.1.
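To double-check which oneAPI release is currently active on the node, the DPC++ compiler version is a reasonable proxy; something like the following should work (the install roots are the ones seen in your logs):

```bash
# Which DPC++ compiler, and therefore which oneAPI release, is currently on PATH
icpx --version

# Which oneAPI releases are installed under the roots seen in this thread
ls /opt/intel/oneapi /mount/opt/intel/oneapi 2>/dev/null
```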

 

Possible Solution:

 

 1. See if the oneAPI 2025.0.1 toolkit is available:

 

```bash

find /opt/intel/oneapi /mount/opt/intel/oneapi -maxdepth 2 -type d -name '2025.0.1' 2>/dev/null

# if this doesn't find anything, let's look for setvars.sh instead
find /opt/intel/oneapi /mount/opt/intel/oneapi -name setvars.sh 2>/dev/null | grep '2025.0.1'

```

 

2. If found, source it and prepend its libraries to the library path; a quick verification check follows the block below:

```bash

source /path/to/oneapi/2025.0.1/setvars.sh

export LD_LIBRARY_PATH="/path/to/oneapi/2025.0.1/lib:${LD_LIBRARY_PATH}"

```
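To verify the override took effect, you can check which libur_loader.so.0 the SYCL runtime now resolves to (a quick check, using the library location implied by your traceback):

```bash
# After re-sourcing oneAPI 2025.0.1, libsycl.so.8 should pick up libur_loader.so.0
# from the 2025.0 tree instead of .../compiler/2025.1/lib
ldd "$CONDA_PREFIX/lib/libsycl.so.8" | grep -E 'libur_loader|not found'
```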

 

3. Then try:

 

```bash

python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(f'is xpu available: {torch.xpu.is_available()}'); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"

```

RajashekarK_Intel

Thanks for the workaround, @unrahul.

 

Here are the observations after rolling back to the previous version of oneAPI.

```
ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ conda activate xpu2.6

(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ icpx --version
Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mount/opt/intel/oneapi/compiler/2025.1/bin/compiler
Configuration file: /mount/opt/intel/oneapi/compiler/2025.1/bin/compiler/../icpx.cfg

(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ source /opt/intel/oneapi/2025.0/oneapi-vars.sh --force

:: initializing oneAPI environment ...
   bash: BASH_VERSION = 5.1.16(1)-release
   args: Using "$@" for oneapi-vars.sh arguments: --force
:: ccl -- processing etc/ccl/vars.sh
:: compiler -- processing etc/compiler/vars.sh
:: debugger -- processing etc/debugger/vars.sh
:: dnnl -- processing etc/dnnl/vars.sh
:: dpl -- processing etc/dpl/vars.sh
:: mkl -- processing etc/mkl/vars.sh
:: mpi -- processing etc/mpi/vars.sh
:: pti -- processing etc/pti/vars.sh
:: tbb -- processing etc/tbb/vars.sh
:: oneAPI environment initialized ::

(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ export LD_LIBRARY_PATH="/opt/intel/oneapi/2025.0/lib:${LD_LIBRARY_PATH}"

(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ icpx --version
Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.0.4.20241205)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /mount/opt/intel/oneapi/compiler/2025.0/bin/compiler
Configuration file: /mount/opt/intel/oneapi/compiler/2025.0/bin/compiler/../icpx.cfg

(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__); [print(f'[{i}]: {torch.xpu.get_device_properties(i)}') for i in range(torch.xpu.device_count())];"
[W409 09:12:10.544777130 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
  Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
    registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: XPU
  previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
       new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
2.6.0+xpu
2.6.10+xpu
[0]: _XpuDeviceProperties(name='Intel(R) Data Center GPU Max 1100', platform_name='Intel(R) oneAPI Unified Runtime over Level-Zero', type='gpu', driver_version='1.6.32224+14', total_memory=49152MB, max_compute_units=448, gpu_eu_count=448, gpu_subslice_count=56, max_work_group_size=1024, max_num_sub_groups=64, sub_group_sizes=[16 32], has_fp16=1, has_fp64=1, has_atomic64=1)
(xpu2.6) ue5c01becfd98f8b9e769a8179f45cdf@idc-training-gpu-compute-08:~/dev/learnings/liftoff-days$ [W409 09:12:13.145070911 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
  Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
    registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: XPU
  previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
       new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
```

 

Observation 1:

It did display the information as expected, but the terminal gets stuck at the end until I cancel it.

 

Observation 2:

After following the same steps in the Jupyter Notebook, it doesn't roll back and causes the same issue.

[Screenshot attachment: RajashekarK_Intel_0-1744191209983.png]

 

Looking forward to any suggestions.

 

Regards,

Rajashekar

 

Sarven_Intel
Moderator

Hi RajashekarK_Intel,


Please try again (run all the code cells) after restarting your Jupyter Notebook server and kernel.


Steps to restart Jupyter Notebook server:


1. Go to "File" at the top left.

2. Choose "Hub Control Panel".

3. Click "Stop My Server".

4. Click "Start My Server" again.


Additionally, you can also try restarting your kernel for good measure.
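If a restart alone doesn't help, one possible way to make the oneAPI 2025.0 rollback stick inside the notebook is to bake the environment into the kernel spec, so the kernel process starts with the right LD_LIBRARY_PATH. This is only a sketch; the kernel name and oneAPI path below are assumptions based on this thread:

```bash
# Locate the kernel spec directory for the conda-env kernel (e.g. "xpu2.6")
jupyter kernelspec list

# Then add an "env" block to that kernel's kernel.json, for example:
#   "env": {
#     "LD_LIBRARY_PATH": "/opt/intel/oneapi/2025.0/lib:${LD_LIBRARY_PATH}"
#   }
# (whether ${LD_LIBRARY_PATH} is expanded depends on the jupyter_client version;
# an absolute value is the safest choice)
# Finally, restart the kernel so the new environment is applied at launch.
```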


Regards,

Sarven


RajashekarK_Intel
Sarven_Intel
Moderator

Hi RajashekarK_Intel,


Thank you for the update. I will report this issue to the appropriate team for further investigation. Once I receive their feedback, I will provide you with an update.


Regards,

Sarven

