Hi, I'm trying to convert an ONNX model to IR using this command:
python3 mo_onnx.py --input_model /workdir/tf-deepsoli/soli_
The conversion failed with the output below:
Model Optimizer arguments:
- Path to the Input Model: /workdir/tf-deepsoli/soli_
- Path for generated IR: /opt/intel/openvino_2021.1.
- IR output name: soli_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [40,32,32,1]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2021.1.0-1237-bece22ac675-
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.
I've checked against https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#tensorflow_supported_operations and all of the layers/ops in my model should be supported by OpenVINO. The model is attached.
Yes, it seems that all the layers should be supported.
Would you mind sharing whether this is an established model or a fully custom one?
Furthermore, when the model is inspected in Netron, some inputs of your LSTM node have no variable/node name attached to them.
This is likely the cause of the error.
Thank you for your question. If you need any additional information from Intel, please submit a new question as Intel is no longer monitoring this thread.