Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Error in converting ONNX model to OpenVINO IR

mdatres
Beginner

Dear all, 

I have a problem converting a .onnx model with the OpenVINO toolkit. I trained the RetinaNet model from mmdetection (https://github.com/open-mmlab/mmdetection) on my custom dataset and exported it to .onnx using the mmdetection converter function, as explained at https://mmdetection.readthedocs.io/en/latest/tutorials/pytorch2onnx.html . After that, I tried to convert the model using the following command:

python3 mo_onnx.py --input_model ~/Documents/catchme_mmdet/ONNX_models/retinanet/retinanet1.onnx --input_shape '[1,3456,4608,3]' --output_dir ~/Documents/catchme_mmdet/OpenVino_models/retinanet  --data_type FP32 --reverse_input_channels --log_level=DEBUG

Unfortunately, it does not work and I can't understand why. The error is:

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Expand_764" node. 
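For reference, the failing node name can be pulled out of an MO error line like the one above with a quick stdlib sketch (the regex below is just illustrative, not anything from the toolkit):

```python
import re

def failing_node(log_text):
    """Extract the node name from an MO 'Stopped shape/value propagation' error."""
    m = re.search(r'Stopped shape/value propagation at "([^"]+)" node', log_text)
    return m.group(1) if m else None

error = ('mo.utils.error.Error: Exception occurred during running replacer '
         '"REPLACEMENT_ID" (<class \'extensions.middle.PartialInfer.PartialInfer\'>): '
         'Stopped shape/value propagation at "Expand_764" node.')
print(failing_node(error))  # Expand_764
```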

I have attached a .txt with the log. Here is the Dropbox folder with the .onnx file:

https://www.dropbox.com/sh/hdig8b9yzr7vlkq/AABRYicBwSGYiRgBTtR383TBa?dl=0

I can convert a frozen .pb file using mo_tf.py, so I think I have installed the OpenVINO toolkit correctly. I have also tried converting the .onnx file to a .pb file using https://github.com/onnx/onnx-tensorflow and then running mo_tf.py on the resulting .pb file, but that did not work either (the error message was that it is not able to load the model).

Can you help? Thanks in advance.

  

IntelSupport
Community Manager


Hi mdatres,

I have tested your model on our machine and we are currently investigating this. However, we are getting a different node name in the error than you do. Can you make sure the ONNX model in the Dropbox folder is the same one you tested on your side? Thanks.

 

Regards,

Aznie


mdatres
Beginner

Hi Aznie, 

thanks for the reply. I have just checked the model uploaded to Dropbox and confirm it is the same one I am trying to convert on my machine. It may be useful to know that my Model Optimizer version is:

Version of Model Optimizer is: 2020.1.0-61-gd349c3ba4a

Best regards, 

Max 

Hari_B_Intel
Moderator

Hi @mdatres 

Thanks for sharing your model with us for further investigation on your issue.

I tried to convert the ONNX model you shared with us to IR with mo_onnx.py on the latest OpenVINO toolkit (version 2021.2), and I got the following error:

[ ERROR ]  There is no registered "infer" function for node "If_1072" with op = "If". Please implement this function in the extensions.

 

Based on the Supported Layers document, the ONNX "If" operation is not yet supported by OpenVINO. However, it should be possible to create a custom layer; see the Custom Layers Guide.

 

The issue you are facing is the stopped shape propagation at the "Expand_764" node, and from your log.txt you are using an older version of the OpenVINO toolkit (version 2020.1). The good news is that the "Expand" operation is supported in the latest OpenVINO toolkit (version 2021.2), but the ONNX "If" operation within your model is still not supported.

So I would suggest downloading the latest OpenVINO toolkit so that the "Expand" operation works, and implementing "If" as a custom operation using the aforementioned instructions.
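To illustrate that reasoning as a pre-flight check, here is a minimal sketch. The support table is a tiny hand-picked assumption based only on the observations in this thread, not the official Supported Layers list:

```python
# Illustrative only: which OpenVINO release (assumed here) first handles each
# ONNX op in Model Optimizer. None means no MO support as of 2021.2.
SUPPORTED_SINCE = {
    "Conv": "2020.1",
    "Expand": "2021.2",   # per the observation above: fails on 2020.1
    "If": None,           # not supported; needs a custom layer
}

def unsupported_ops(model_ops, toolkit_version):
    """Return the ops in the model that the given toolkit version cannot convert.
    Lexicographic version comparison is good enough for these version strings."""
    bad = []
    for op in model_ops:
        since = SUPPORTED_SINCE.get(op)
        if since is None or since > toolkit_version:
            bad.append(op)
    return bad

# On 2020.1 both Expand and If block conversion; on 2021.2 only If remains.
print(unsupported_ops(["Conv", "Expand", "If"], "2020.1"))  # ['Expand', 'If']
print(unsupported_ops(["Conv", "Expand", "If"], "2021.2"))  # ['If']
```

This matches what you are seeing: upgrading clears the "Expand_764" failure, after which conversion stops at the "If" node instead.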

 

Hope this answers your question.

 

Thank you

Hari_B_Intel
Moderator

Hi mdatres,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Thank you

