PyTorch to OpenVINO Model Optimizer Intermediate Representation conversion.
I successfully converted the PyTorch model file (.pt) to an ONNX file (.onnx).
I am facing the following error while converting the ONNX file (.onnx) to the OpenVINO Model Optimizer Intermediate Representation (.xml, .bin).
Error:
[ ERROR ] Cannot infer shapes or values for node "Where_837".
[ ERROR ] There is no registered "infer" function for node "Where_837" with op = "Where". Please implement this function in the extensions.
Command:
python mo_onnx.py --input_model yolov5s.onnx --input_shape [1,3,480,640] --data_type FP32
Hi Vishnu_T,
The error you are seeing is due to the Where layer not being supported in OpenVINO. Please take a look at the ONNX supported layers list. Could you try replacing that layer with one that is on the supported list?
Regards,
Jesus
Thanks for your reply.
I couldn't find a supported layer similar to the Where layer in the ONNX framework.
In the MXNet framework, the Where layer is available, and its equivalent Intermediate Representation is the Select layer.
The link below lists the framework layers and their equivalent IR representations.
In my ONNX file, I replaced the Where layer with the Select layer, but I am getting a framework error after the modification.
Please suggest how to find a supported replacement for the Where layer.
Hi Vishnu_T,
The Model Optimizer is not able to read the model; the model is either corrupted or may not have been converted from PyTorch to ONNX successfully. Please check the model and try running the Model Optimizer with the --log_level=DEBUG flag and provide the output.
Regards,
Jesus
Hi Jesus,
Sorry for late reply.
Converting the model from PyTorch to ONNX was successful.
I enabled the DEBUG log level as per your suggestion.
I have attached the log file for your perusal.
Thanks
Vishnu T
Hi Vishnu,
The error in the log is related to the Where layer. Could you try upgrading to the latest release 2020.4? The Where layer is now listed as supported for MXNet, TensorFlow and ONNX frameworks. If you continue to see an error converting, please attach the log files and your model.
Regards,
Jesus
Hi Jesus,
Thanks for your continuous support.
I will check with the latest version and let you know the status.
Thanks
Vishnu T
The new version of OpenVINO (2020.4) solves the issue.
Thanks for the quick response and for the updated version that resolved this issue.
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.