We are facing issues while converting a Faster R-CNN ONNX model to IR. Please find below the command used and the errors faced.
python3 /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/mo.py \
    --input_model /home/pruthvin/SteelDefect_FRCNN_ModelFiles/SteelDefect_FRCNN_ONNXModel/SteelDefect_FRCNN_ONNXModel.onnx \
    --input_shape [1,1,1024,512] \
    --transformations_config /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/extensions/front/onnx/faster_rcnn.json \
    --output_dir /home/pruthvin/SteelDefect_FRCNN_ModelFiles/SteelDefect_FRCNN_PBFiles \
    --model_name tf_v3 \
    --reverse_input_channels
Ubuntu 18.04.4 LTS
Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
Thanks for reaching out. Which specific OpenVINO model are you using? Could you please share your model and any related files so that we can test them on our side?
We cannot share the model as it is customer-specific. Kindly provide links or solutions related to the posted error that would help us arrive at a solution.
Hi Pruthvin S,
Could you try the parameters from the Convert ONNX* Faster R-CNN Model to the Intermediate Representation documentation, as mentioned in Step 2? Please give it a try and get back to me with the result. Your error might be due to an unsupported node/layer or an incompatible input size, since you are using a custom model.
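For reference, the command in that guide takes roughly the following shape. The model name, input shape, and mean values below are illustrative only (they correspond to the ONNX model zoo Faster R-CNN that the guide targets, not your custom model); take the exact values from Step 2 of the documentation:

```shell
# Sketch of an mo.py invocation per the ONNX Faster R-CNN conversion guide.
# Paths, --input_shape, and --mean_values here are placeholders; use the
# exact values given in the documentation for the model being converted.
python3 /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/mo.py \
    --input_model FasterRCNN-10.onnx \
    --input_shape [1,3,800,800] \
    --mean_values [102.9801,115.9465,122.7717] \
    --transformations_config /opt/intel/openvino_2021.3.394/deployment_tools/model_optimizer/extensions/front/onnx/faster_rcnn.json
```

Note that flags such as `--reverse_input_channels` assume a 3-channel input, so the flag set must be consistent with the channel count in `--input_shape`.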
Hi Pruthvin S,
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.