PyTorch to OpenVINO Model Optimizer Intermediate Representation conversion.
I successfully converted a PyTorch model file (.pt) to an ONNX file (.onnx).
I am facing the following error while converting the ONNX file (.onnx) to the OpenVINO Model Optimizer Intermediate Representation (.xml, .bin).
Error:
[ ERROR ] Cannot infer shapes or values for node "Where_837".
[ ERROR ] There is no registered "infer" function for node "Where_837" with op = "Where". Please implement this function in the extensions.
Command:
python mo_onnx.py --input_model yolov5s.onnx --input_shape [1,3,480,640] --data_type FP32
Hi Vishnu_T,
The error you are seeing is due to the Where layer not being supported in OpenVINO. Please take a look at the ONNX supported layers list. Could you try replacing that layer with one that is on the supported list?
Regards,
Jesus
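For readers hitting the same error, it may help to know what the unsupported op actually computes. ONNX Where is an element-wise ternary select: at each position the output takes the value from X where the condition is true and from Y otherwise, with standard broadcasting. A minimal numpy sketch of these semantics (numpy's where behaves the same way; the values below are made up for illustration):

```python
import numpy as np

# ONNX Where(condition, X, Y): element-wise select with broadcasting.
condition = np.array([[True, False], [False, True]])
x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[9.0, 8.0], [7.0, 6.0]])

# numpy's where implements the same element-wise select:
# take from x where condition is True, from y elsewhere.
out = np.where(condition, x, y)
print(out)  # [[1. 8.] [7. 4.]]
```

Knowing the op is a plain element-wise select is what makes it plausible to swap in an equivalent layer, as discussed below.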
Thanks for your reply.
I couldn't find a supported layer similar to the Where layer for the ONNX framework.
In the MXNet framework the Where layer is available, and its equivalent Intermediate Representation is the Select layer.
The link below lists the framework layers and their equivalent IR representations.
In my ONNX file I replaced the Where layer with the Select layer, but I am getting a framework error after the modification.
Please suggest how to find a supported replacement for the Where layer.
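One common workaround when a select op is unsupported (a sketch of the general technique, not a verified fix for this particular model) is to decompose Where into element-wise primitives such as Cast, Mul, Sub, and Add: cast the boolean mask to the data type, then compute mask*X + (1-mask)*Y. The numpy sketch below checks that this decomposition matches where:

```python
import numpy as np

def where_decomposed(cond, x, y):
    """Where(cond, x, y) rewritten as Cast/Mul/Sub/Add primitives."""
    mask = cond.astype(x.dtype)          # Cast: bool -> float (1.0 / 0.0)
    return mask * x + (1.0 - mask) * y   # Mul, Sub, Add

cond = np.array([True, False, True])
x = np.array([1.0, 2.0, 3.0])
y = np.array([-1.0, -2.0, -3.0])

# The decomposition agrees with the direct element-wise select.
assert np.array_equal(where_decomposed(cond, x, y), np.where(cond, x, y))
```

Note this arithmetic rewrite assumes finite values in X and Y (a NaN or Inf on the not-selected branch would leak through the multiply), so it is a stopgap rather than a general replacement.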
Hi Vishnu_T,
The Model Optimizer is not able to read the model; the model is either corrupted or may not have been converted from PyTorch to ONNX successfully. Please check the model, try running the Model Optimizer with the --log_level=DEBUG flag, and provide the output.
Regards,
Jesus
Hi Jesus,
Sorry for the late reply.
The conversion from PyTorch to ONNX is successful.
I enabled the log_level in DEBUG mode as per your suggestion.
I have attached the log file for your perusal.
Thanks
Vishnu T
Hi Vishnu,
The error in the log is related to the Where layer. Could you try upgrading to the latest release 2020.4? The Where layer is now listed as supported for MXNet, TensorFlow and ONNX frameworks. If you continue to see an error converting, please attach the log files and your model.
Regards,
Jesus
Hi Jesus,
Thanks for your continuous support.
I will check with the latest version and let you know the status.
Thanks
Vishnu T
The new version of OpenVINO (2020.4) solves the issue.
Thanks for the quick response and for the updated version that solved this issue.
Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.
