I am trying to run my deep learning models on the Intel VPU using the OpenVINO toolkit. According to the Supported Networks list for VPU devices on this page, PyTorch is not listed as a supported framework for the VPU plugin. Does this mean that I cannot use PyTorch models with the VPU plugin?
Thank you for your help.
Hi Roxy1,
Thanks for reaching out to us.
As long as the layers in the PyTorch model are supported (listed in the Supported Layers documentation), the model can be used with the VPU plugin.
You have to convert the PyTorch model into ONNX format first. Next, you can either load the ONNX model directly into the VPU plugin or convert the ONNX model to Intermediate Representation (IR) and then load the IR into the VPU plugin.
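For reference, a minimal sketch of that workflow with the Inference Engine Python API could look like the snippet below. The model, input shape, and file names are placeholders for illustration, and the MYRIAD device name assumes an Intel Movidius VPU; adjust these for your own network and device.

```python
import torch
from openvino.inference_engine import IECore

# Step 1: export the trained PyTorch model to ONNX.
# "model" and the input shape (1, 3, 224, 224) are placeholders for your own network.
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "model.onnx")

# Step 2: load the ONNX model directly with the Inference Engine and target the VPU plugin.
# (Alternatively, convert the ONNX model to IR first with the Model Optimizer,
#  e.g. "python mo.py --input_model model.onnx", and read the resulting .xml/.bin instead.)
ie = IECore()
net = ie.read_network(model="model.onnx")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Step 3: run inference; the input blob name is taken from the loaded network.
input_blob = next(iter(net.input_info))
result = exec_net.infer({input_blob: dummy_input.numpy()})
```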
Regards,
Peh
Hi Roxy1,
This thread will no longer be monitored since we have provided an answer. If you need any additional information from Intel, please submit a new question.
Regards,
Peh
