beltramo__enrico
Beginner

Error loading IR model

I successfully converted a PyTorch model to OpenVINO (PyTorch --> ONNX --> IR):

python mo.py --input_model '/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx' --output_dir /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/ --data_type FP16 --input_shape [1,3,256,7,7],[1,3,224,224] --input z,x

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/ulix/Progetti/pysot/siamrpnmobilenet.onnx
- Path for generated IR: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/
- IR output name: siamrpnmobilenet
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: z,x
- Output layers: Not specified, inherited from the model
- Input shapes: [1,3,256,7,7],[1,3,224,224]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2021.1.0-1237-bece22ac675-releases/2021/1

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.xml
[ SUCCESS ] BIN file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.bin
[ SUCCESS ] Total execution time: 28.44 seconds.
[ SUCCESS ] Memory consumed: 368 MB.


But when I load the model for inference, I get the following error:

Traceback (most recent call last):
  File "/home/ulix/Progetti/pysot/tools/OVDemo.py", line 608, in <module>
    sys.exit(main() or 0)
  File "/home/ulix/Progetti/pysot/tools/OVDemo.py", line 466, in main
    caught_exceptions=exceptions),
  File "/home/ulix/Progetti/pysot/tools/OVDemo.py", line 195, in __init__
    super().__init__(*args, **kwargs)
  File "/home/ulix/Progetti/pysot/tools/OVDemo.py", line 131, in __init__
    self.exec_net = ie.load_network(network=self.net, device_name=device, config=plugin_config, num_requests=max_num_requests)
  File "ie_api.pyx", line 311, in openvino.inference_engine.ie_api.IECore.load_network
  File "ie_api.pyx", line 320, in openvino.inference_engine.ie_api.IECore.load_network
RuntimeError: Check 'Dimension::merge(merged_channel_count, data_channel_count, filter_input_channel_count)' failed at ngraph/core/src/validation_util.cpp:341:
While validating node 'v1::ConvolutionIE ConvolutionIE_16773 (1125[0]:f32{1,256,25,25}, 1122[0]:f32{1,256,5,5}) -> (dynamic?)' with friendly_name 'ConvolutionIE_16773':
Data batch channel count (1) does not match filter input channel count (256).
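
For reference, the failing call can be reproduced in isolation; a minimal sketch using the 2021.1 Inference Engine Python API shown in the traceback (paths as above; device_name="CPU" is an assumption, since the demo takes the device as a parameter):

from openvino.inference_engine import IECore

ie = IECore()
# Read the generated IR (XML topology + BIN weights)
net = ie.read_network(
    model="/home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.xml",
    weights="/home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.bin",
)
# load_network is the call that raises the RuntimeError above
exec_net = ie.load_network(network=net, device_name="CPU")  # assumed device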

I attach the original model, the converted ONNX model, and the IR model:

https://drive.google.com/file/d/1HlRnqdXd9Ziq5y8bTLTHsyziQQbCczRe/view?usp=sharing

Munesh_Intel
Moderator

Hi Enrico,

Thanks for reaching out to us.

SiamRPNMobileNet from PyTorch is not officially supported or validated by OpenVINO. Details about supported PyTorch models are available here:

https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...


Having said that, you can always validate models from your end, and share the outcome with us. 


We suspect that your issue is due to the model conversion. Thus, we recommend redoing the conversion with the correct input shapes.


More details are available at the following links:

https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_convert_model_Converting_M...


https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_convert_model_Converting_M...


Additionally, if possible, please share more information about your model: if it is a custom model, the types of layers it uses; the topology (repository name if possible); the command given to Model Optimizer to convert your trained model to Intermediate Representation (IR); and environment details (versions of ONNX, Python, CMake, etc.).


Regards,

Munesh



beltramo__enrico
Beginner

Thank you for the response. What looks strange to me is that if I run this model using ONNX Runtime WITH the OpenVINO execution provider, it runs perfectly. So I assume all layers and operations are compliant with OpenVINO.
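
As a rough sketch of that check (it assumes an onnxruntime build that includes the OpenVINO execution provider; the input names z and x and their shapes are taken from the mo.py command below):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx",
    providers=["OpenVINOExecutionProvider"],
)
# Dummy inputs shaped like the template (z) and search (x) branches
z = np.random.rand(1, 3, 256, 7, 7).astype(np.float32)
x = np.random.rand(1, 3, 255, 255).astype(np.float32)
outputs = sess.run(None, {"z": z, "x": x})
print([o.shape for o in outputs])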

I attach here the link to the latest ONNX model and its IR conversion:

https://drive.google.com/file/d/1Bt3k5qmGkLyvEL9-FdICPvdYVxhxSV-e/view?usp=sharing

The original model comes from https://github.com/STVIR/pysot (SiamRPN with the MobileNet backbone).

In the command that I use, I explicitly specify the input dimensions:

python mo.py --input_model '/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx' --output_dir '/home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/' --input_shape [1,3,256,7,7],[1,3,255,255] --input z,x

The problem appears when I add the second input. When I convert only the backbone or other layers with a single input, it works fine.
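
A quick way to double-check the input names and shapes that the ONNX graph itself declares, before passing --input/--input_shape to mo.py (a sketch using the onnx Python package):

import onnx

model = onnx.load("/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx")
for inp in model.graph.input:
    # Dynamic dimensions show up as dim_param instead of a fixed dim_value
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # expected: z and x with the shapes given to --input_shape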

beltramo__enrico
Beginner

In an issue on ONNX Runtime, someone explained to me why the model works in ONNX Runtime using OpenVINO but not in OpenVINO directly (ONNX Runtime moves the layers that are not compatible with OpenVINO to CPU inference):

https://github.com/microsoft/onnxruntime/issues/5528
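
With verbose logging enabled, ONNX Runtime prints its graph-partitioning decisions, so the nodes that fall back to the CPU provider can be seen in the log (a sketch; it again assumes an onnxruntime build that includes the OpenVINO execution provider):

import onnxruntime as ort

so = ort.SessionOptions()
so.log_severity_level = 0  # 0 = VERBOSE; partitioning decisions appear in the log

sess = ort.InferenceSession(
    "/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx",
    sess_options=so,
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # providers actually registered for this session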

Anyway, is it possible to understand which layers are compatible with OpenVINO and which are not? The conversion doesn't report any error, and that is a bit confusing.

Munesh_Intel
Moderator

Hi Enrico,

The ONNX operators supported by Model Optimizer are listed at the following link:

https://docs.openvinotoolkit.org/2021.1/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layer...


Meanwhile, the layers supported by the Inference Engine device plugins are listed in the link below:

https://docs.openvinotoolkit.org/2021.1/openvino_docs_IE_DG_supported_plugins_Supported_Devices.html...
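
Beyond those lists, support can also be checked programmatically: IECore.query_network reports which device handles each supported layer, and layers missing from the result are unsupported on that device. A minimal sketch with the 2021.1 Python API (paths taken from the earlier posts):

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(
    model="/home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.xml",
    weights="/home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.bin",
)
supported = ie.query_network(network=net, device_name="CPU")
for layer_name, device in supported.items():
    print(layer_name, "->", device)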


Regards,

Munesh


Munesh_Intel
Moderator

Hi Enrico,


This thread will no longer be monitored since we have provided references. If you need any additional information from Intel, please submit a new question.


Regards,

Munesh

