Intel® Optimized AI Frameworks

PyTorch for Intel on Intel(R) UHD Graphics

jmoyniham
Novice

I am trying to run language model inference on a computer with an Intel processor. I installed PyTorch for Intel (along with the Intel driver and deep learning tools), following the PyTorch and Intel instructions:

Getting Started on Intel GPU — PyTorch 2.6 documentation

PyTorch Prerequisites for Intel® GPUs

This is the processor on my computer:

13th Gen Intel(R) Core(TM) i5-1335U, 1300 MHz, 10 Core(s), 12 Logical Processor(s)

 

I run this script:

import logging
import platform

import torch
import intel_extension_for_pytorch  # IPEX optimizations; the xpu backend itself is native in PyTorch >= 2.5


def load_device(model: torch.nn.Module):
    logging.info(f"CUDA available: {torch.cuda.is_available()}")

    if torch.cuda.is_available():
        logging.info(f"CUDA version: {torch.version.cuda}")
        device = torch.device("cuda")
    elif platform.system() == "Darwin" and torch.backends.mps.is_available():
        logging.info("MPS Metal available")
        device = torch.device("mps")
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        logging.info("Intel XPU available")
        device = torch.device("xpu")
    else:
        device = torch.device("cpu")

    logging.info(f"Using device {device}.")
    model.to(device)
    return device, model

 

My log then states "Using device xpu". This leads me to assume that the issue is not with PyTorch or my processor. Is this a correct assumption?
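As a sanity check that goes beyond is_available(), I believe a minimal smoke test like the one below would confirm whether the device survives an actual kernel launch (the torch.xpu calls assume PyTorch 2.6 with the xpu backend; the tensor size is arbitrary):

import torch

# is_available() only checks that the runtime enumerates a device; a real
# kernel launch is what typically triggers UR_RESULT_ERROR_DEVICE_LOST.
if torch.xpu.is_available():
    print(torch.xpu.get_device_name(0))        # reported device name
    x = torch.randn(1024, 1024, device="xpu")  # allocate on the GPU
    y = (x @ x).sum()                          # force a kernel launch
    torch.xpu.synchronize()                    # surface any async runtime error
    print(y.item())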

The error arises when I pass the model and device to my masked language modeling inference code. I routinely receive some form of runtime error, such as:

Traceback (most recent call last):
  File "C:\Users\jm9095\logion-app\src\backend\main.py", line 181, in prediction_endpoint
    results = predict.prediction_function(
        text,
        ...<5 lines>...
        num_predictions=5,
    )
  File "C:\Users\jm9095\logion-app\src\backend\prediction\predict.py", line 59, in prediction_function
    outputs = model(**chunk_inputs)
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\transformers\models\bert\modeling_bert.py", line 1461, in forward
    outputs = self.bert(
        input_ids,
        ...<9 lines>...
        return_dict=return_dict,
    )
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\transformers\models\bert\modeling_bert.py", line 1108, in forward
    extended_attention_mask = _prepare_4d_attention_mask_for_sdpa(
        attention_mask, embedding_output.dtype, tgt_len=seq_length
    )
  File "C:\Users\jm9095\AppData\Local\Programs\Python\Python313\Lib\site-packages\transformers\modeling_attn_mask_utils.py", line 448, in _prepare_4d_attention_mask_for_sdpa
    if not is_tracing and torch.all(mask == 1):
                          ~~~~~~~~~^^^^^^^^^^^
RuntimeError: Native API failed. Native API returns: 20 (UR_RESULT_ERROR_DEVICE_LOST)

 

Or, most recently, I encountered this error without making any changes to my code or system:

Traceback (most recent call last):
  File "C:\Users\jm9095\logion-app\src\backend\main.py", line 181, in prediction_endpoint
    results = predict.prediction_function(
        text,
        ...<5 lines>...
        num_predictions=5,
    )
  File "C:\Users\jm9095\logion-app\src\backend\prediction\predict.py", line 74, in prediction_function
    predicted_index = int(sorted_idx[k].item())
                          ~~~~~~~~~~~~~~~~~~^^
RuntimeError: Native API failed. Native API returns: 2147483646 (UR_RESULT_ERROR_UNKNOWN)

 

I wondered whether the issue is caused by the Intel driver, as I do not encounter this problem with other accelerators (e.g. NVIDIA GPUs, Apple M-series chips) or when running on my PC's CPU. That is, the issue only arises when using the xpu device. But given that PyTorch does load the xpu device successfully, I do not know whether this is right. Might anyone be able to help me troubleshoot the cause of the problem?
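One way I could isolate this is a CPU fallback test: run the exact same forward pass on "cpu" and on "xpu". This is a minimal sketch assuming a Hugging Face-style model and tokenized inputs; run_on is a hypothetical helper, not part of my actual code:

import torch

def run_on(device_str, model, inputs):
    # Run a single forward pass on the given device.
    device = torch.device(device_str)
    model = model.to(device)
    inputs = {k: v.to(device) for k, v in inputs.items()}
    with torch.no_grad():
        return model(**inputs)

# If this succeeds on "cpu" but crashes on "xpu" with UR_RESULT_ERROR_DEVICE_LOST,
# the model and inference code are fine and the failure points at the XPU
# runtime/driver rather than at PyTorch itself.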

 

DhannielM_Intel
Moderator

Hi jmoyniham,


Thank you for posting in the community. It appears you're experiencing issues with initializing the device at the start of your program. Could you please provide the exact model of your laptop and the amount of RAM it currently has? Additionally, are you running this program directly on your system, or are you using a third-party platform like Google Colab, Kaggle, etc.?


Best regards,


Dhanniel M.

Intel Customer Support Technician


jmoyniham
Novice

Thank you for your response @DhannielM_Intel.

 

I am running this directly on my system.

 

I am using a Dell Latitude 5540 (x64) with Windows 11 Enterprise and 16 GB of RAM. The processor is:

13th Gen Intel(R) Core(TM) i5-1335U, 1300 MHz, 10 Core(s), 12 Logical Processor(s).

RandyT_Intel
Moderator

Hello @jmoyniham ,

 

Have you tried contacting Dell regarding this concern? They usually have specific steps to address this issue using their own technologies. However, let me coordinate this internally to see if we can find something that might help address this error message. I'll post an update here or notify you directly once there are any developments. If I need further details, I'll reach out to you here. I appreciate your patience as I work on this matter.

 

Regards,

 

Randy T.

Intel Customer Support Technician

 

jmoyniham
Novice

@RandyT_Intel Thank you for looking into this.

 

Dell advised reaching out to Intel.

RandyT_Intel
Moderator

Hello @jmoyniham ,


I wanted to provide you with an update regarding your recent case with us.


After a thorough review by our specialist, it has been determined that your case would be best addressed by our Intel® Optimized AI Frameworks. To ensure you receive the most accurate and efficient support, we will be rerouting your case to the appropriate forum.


Please rest assured that this transition will be handled smoothly, and you will receive the necessary guidance and assistance from the experts in the Intel® Optimized AI Frameworks team.


Thank you for your understanding and cooperation.


Regards,

 

Randy T.

Intel Customer Support Technician


wangk2
Moderator

@jmoyniham 

 

Thanks for raising the ticket. Please refer to the hardware requirements for Intel GPU support in PyTorch: https://pytorch.org/docs/main/notes/get_start_xpu.html#hardware-prerequisite. You can also refer to https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu&os=windows&package=pip for the detailed SKUs that have been verified. The product series you are using is not on our support list. We would recommend upgrading to the hardware we've listed to leverage our PyTorch support and optimizations. Thanks.
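For anyone landing here later, one quick way to see exactly which device PyTorch has detected, so it can be compared against the supported-hardware list linked above, is to print the XPU device properties. A minimal sketch, assuming PyTorch 2.6 with the xpu backend:

import torch

# Print the detected XPU's properties; the name can be checked against the
# supported-SKU list in the documentation linked above.
if torch.xpu.is_available():
    props = torch.xpu.get_device_properties(0)
    print(props.name)          # e.g. an integrated "Intel(R) UHD Graphics" part
    print(props.total_memory)  # device memory in bytes
else:
    print("No XPU device detected")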

 

Best,

 

Kai

Intel AI Framework Technical Support Team

jmoyniham
Novice

@RandyT_Intel and @wangk2 Thank you very much for your help.
