Intel® Distribution for Python*
Engage in discussions with community peers related to Python* applications and core computational packages.

Pytorch Extension random error

bobbybrown
Beginner
2,137 Views

Hi all,

I'm not an expert on anything Intel or compiler related. I've hit this weird bug and don't know what to do. I'm running PyTorch on Ubuntu 22 with an Arc A770:

 

Traceback (most recent call last):
  File "/home/dev/projects/proj/sub/ndb.py", line 398, in <module>
    loss = criterion(output, target)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py", line 536, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py", line 3292, in mse_loss
    return torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: Native API failed. Native API returns: -50 (PI_ERROR_INVALID_ARG_VALUE) -50 (PI_ERROR_INVALID_ARG_VALUE)

 

Any ideas? The error happens when computing the loss. I tried reducing the model to the world's simplest linear layer and it still occurs. I also get these warnings when optimizing:

 

/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
warn(f"Failed to load image Python extension: {e}")
/usr/local/lib/python3.10/dist-packages/intel_extension_for_pytorch/frontend.py:447: UserWarning: For XPU device, the split master weight is unsupported for now, so temp to disable it
warnings.warn("For XPU device, the split master weight is unsupported for now, so temp to disable it")
/usr/local/lib/python3.10/dist-packages/intel_extension_for_pytorch/frontend.py:457: UserWarning: For XPU device to save valuable device memory, temp to do optimization on inplaced model, so make inplace to be true
warnings.warn(
/usr/local/lib/python3.10/dist-packages/intel_extension_for_pytorch/frontend.py:464: UserWarning: For XPU, the weight prepack and sample input are disabled. The onednn layout is automatically chosen to use
warnings.warn(
/usr/local/lib/python3.10/dist-packages/intel_extension_for_pytorch/optim/_optimizer_utils.py:250: UserWarning: Does not suport fused step for <class 'torch.optim.adam.Adam'>, will use non-fused step
warnings.warn("Does not suport fused step for " + str(type(optimizer)) + ", will use non-fused step")
file_1.size() torch.Size([1, 1787747])

 

I installed PyTorch with sudo, so I'm not sure whether that causes an issue. Any ideas what this error is about?
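For reference, here's roughly the kind of minimal repro I mean (a sketch, not my exact code — the explicit device/dtype asserts are just a guess at the usual causes of a -50 invalid-arg error, i.e. the loss target living on a different device or having a different dtype than the model output):

```python
import torch

# Fall back to CPU when the XPU backend (Intel Extension for PyTorch)
# is not installed or no Arc GPU is visible.
try:
    import intel_extension_for_pytorch as ipex  # noqa: F401
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
except ImportError:
    device = "cpu"

# World's simplest linear layer, with input and target created
# directly on the same device as the model.
model = torch.nn.Linear(8, 1).to(device)
x = torch.randn(4, 8, device=device)
target = torch.randn(4, 1, device=device)

output = model(x)

# These are the mismatches that (speculatively) could surface as
# PI_ERROR_INVALID_ARG_VALUE inside mse_loss on the XPU backend.
assert output.device == target.device, "output/target device mismatch"
assert output.dtype == target.dtype, "output/target dtype mismatch"

loss = torch.nn.functional.mse_loss(output, target)
loss.backward()
print(loss.item())
```

If this runs cleanly on "xpu" but the real script fails, the difference between the two is probably where the bug hides.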

4 Replies
bobbybrown
Beginner
ThasneemV_Intel
Moderator

Hi,

 

Thanks for posting in Intel Communities.

 

We hope that your issue is resolved. Thanks for sharing the details with the community. Could you please let us know if you have any further queries? If not, do let us know so that we can stop monitoring this thread.

 

Thanks,

Thasneem Vazim

 

ThasneemV_Intel
Moderator

Hi,


We have not heard back from you. Could you please give us an update?


Regards,

Thasneem Vazim


ThasneemV_Intel
Moderator

Hi,


We assume that your issue is resolved. If you need any additional information, please post a new question, as this thread will no longer be monitored by Intel.


Regards,

Thasneem Vazim

