Failed to convert CNN-LSTM ONNX model to IR format

Hi, I'm trying to convert an ONNX model to IR using this command:

python3 mo_onnx.py --input_model /workdir/tf-deepsoli/soli_model.onnx --input_shape [40,32,32,1]

The conversion failed with output as below:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /workdir/tf-deepsoli/soli_model.onnx
- Path for generated IR: /opt/intel/openvino_2021.1.110/deployment_tools/model_optimizer/.
- IR output name: soli_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [40,32,32,1]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2021.1.0-1237-bece22ac675-releases/2021/1
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.RNNSequenceNormalizeToIE.RNNSequenceNormalize'>): Something bad has happened with graph! Data node "lstm" has 0 producers

I've checked against https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layer... that all layers/ops in my model should be supported by OpenVINO. The model is attached.

2 Replies

Moderator

Hi Mojangle,

 

Yes, it seems that all the layers should be supported.

Would you mind sharing if this is from an established model or a totally custom model?

 

Furthermore, when checked with Netron, your LSTM node does not have a variable/node output name for some of its inputs.

This could be the cause of the error.

Regards,

Rizal

 

Moderator

Hi Mojangle,


Thank you for your question. If you need any additional information from Intel, please submit a new question as Intel is no longer monitoring this thread.


Regards,

Rizal

