Intel® Optimized AI Frameworks
Receive community support for questions related to PyTorch* and TensorFlow* frameworks.

import torch fails but import tensorflow does not : IntelAIToolkit

kekuda1
Beginner

I am trying to leverage intel optimization for deep learning on the following machine:

Processor: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz

OS: Red Hat Enterprise Linux 7

GPUs: None

 

I installed the Intel AI Analytics Toolkit according to the instructions at https://www.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top/before-you-begin.html#before-you-begin. Everything installed smoothly without any issues.

After installation, I activated the conda pytorch environment. But when I run "import torch" there, I get the following error:

File "<string>", line 1, in <module>
File "/opt/intel/oneapi/intelpython/latest/envs/pytorch/lib/python3.7/site-packages/torch/__init__.py", line 196, in <module>
from torch._C import *
ImportError: /lib64/libm.so.6: version `GLIBC_2.23' not found (required by /opt/intel/oneapi/intelpython/latest/envs/pytorch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)

 

When I searched the web for this issue, it appears to be a glibc version problem: RHEL 7 ships glibc 2.17, while the prebuilt libtorch_cpu.so requires symbols from glibc 2.23, which only ships with newer distributions.
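As a quick sanity check (a minimal sketch, assuming a glibc-based Linux system and a standard CPython), you can print the glibc version the interpreter is running against; a stock RHEL 7 install typically reports 2.17, older than the GLIBC_2.23 symbols named in the error:

import platform

# On RHEL/CentOS 7 this usually prints ('glibc', '2.17'); the bundled
# libtorch_cpu.so wants symbols introduced in glibc 2.23.
print(platform.libc_ver())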

 

But interestingly, when I follow the same steps for TensorFlow, "import tensorflow" works without any error. Why is there this difference between PyTorch and TF under similar settings? How can I get the Intel extension running on Red Hat 7 without any issues?

7 Replies
JyothisV_Intel
Moderator

Hi,

 

Thanks for posting in Intel Communities.

 

Sorry for the delay.

 

We were able to replicate the issue on our end on RHEL/CentOS 7.x. Thanks for letting us know about this issue; we are working to resolve it internally.

 

>> Why is there this difference between PyTorch and TF under similar settings?

This issue is caused by the GNU C Library (glibc) shipped with RHEL/CentOS 7.x being older than the version Intel Extension for PyTorch (IPEX) was built against, making the two incompatible. The TensorFlow package does not depend on those newer glibc symbols, which is why it imports without error. A newer version of IPEX has resolved this issue.

 

>> How can I get the Intel extension running on Red Hat 7 without any issues?

For the time being, the workaround is to upgrade the Intel Extension for PyTorch (IPEX) manually to version 1.9.0 inside your PyTorch environment using the following command:

python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable
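After the reinstall, a quick check (just a sketch, assuming you run it inside the activated pytorch conda environment) is to confirm that torch now imports cleanly:

# Run inside the activated pytorch environment after the upgrade.
import torch

# If the glibc mismatch is gone, this prints the installed version
# instead of raising the ImportError from libtorch_cpu.so.
print(torch.__version__)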

 

If this resolves your issue, make sure to accept this as a solution. This would help others with a similar issue.

 

Thank You!

 

Regards,

Jyothis V James

 

JyothisV_Intel
Moderator

Hi,


Has the solution helped to resolve your issue? Could you please provide us with an update?


Thank You!


Regards,

Jyothis V James


kekuda1
Beginner

I did install torch_ipex from the Intel site, and "import torch" now works. But when I do "import torch_ipex" I get an illegal instruction error.

When I run python -c "import intel_extension_for_pytorch as ipex" I get a "no module named intel_extension_for_pytorch" error.
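For reference, here is a small sketch (the module names are taken from the commands above, not from any Intel documentation) to see which IPEX distribution, if any, is present in the active environment; importlib.util.find_spec only locates a top-level module and does not run its native code, so it will not trigger the illegal instruction crash:

import importlib.util

# The import name has differed between IPEX releases, so probe both candidates.
for name in ("torch_ipex", "intel_extension_for_pytorch"):
    spec = importlib.util.find_spec(name)
    print(name, "->", "installed" if spec else "not installed")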

 

JyothisV_Intel
Moderator

Hi,


Thanks for your response.


We were able to replicate your issue. We are investigating it at our end and will let you know once we find a solution.


Regards,

Jyothis V James


JyothisV_Intel
Moderator

Hi,


Thanks for your patience.


From your previous posts, we were able to find out that you are using an Intel® Xeon® E5-2690 v4 processor. Unfortunately, this is not an Intel® Xeon® Scalable processor and is not supported by the Intel® AI Analytics Toolkit.

You can read more about the system requirements for the Intel® AI Analytics Toolkit here: https://www.intel.com/content/www/us/en/developer/articles/system-requirements/intel-oneapi-ai-analytics-toolkit-system-requirements.html


Regarding the error: some of the optimized packages inside the Intel® AI Analytics Toolkit, such as Intel® Optimization for PyTorch and IPEX, require specific instruction sets that are only available on certain processors. Since those instruction sets are not present in your processor, the Python shell crashes with an illegal instruction error.
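To illustrate (a minimal sketch; the assumption that the missing instructions are AVX-512 ones is mine and is not stated above), you can inspect the CPU flags the Linux kernel reports before importing IPEX; a Broadwell-era Xeon E5-2690 v4 lists avx2 but not avx512f:

# Linux only: read the flag list reported in /proc/cpuinfo.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for isa in ("avx2", "avx512f"):
    print(isa, "supported" if isa in flags else "missing")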


The workaround is to use a system that meets the requirements of the Intel® AI Analytics Toolkit, or to switch to Intel® DevCloud for oneAPI. You can learn more about Intel® DevCloud for oneAPI and how to use it here: https://devcloud.intel.com/oneapi/get_started/


If this resolves your issue, make sure to accept this as a solution. This would help others with a similar issue.


Thank You!


Regards,

Jyothis V James


JyothisV_Intel
Moderator

Hi,


Has the workaround solution helped to resolve your issue? Could you kindly provide us with an update?


Regards,

Jyothis V James


JyothisV_Intel
Moderator

Hi,


I have not heard back from you. This thread will no longer be monitored by Intel. If you need further assistance, please post a new question.


Regards,

Jyothis V James

