Andres_M_Intel2
Employee

Model Optimizer - converting ONNX models

I am trying to convert ONNX models using Model Optimizer.

I have only succeeded in converting 3 of the 8 models (bvlc_googlenet, inception_v1, squeezenet) that should be covered (OpenVINO 2018 R2).

1) I wonder if you have a BKM similar to this one? https://software.intel.com/en-us/articles/OpenVINO-Using-MXNet
2) Could you describe the 8 supported topologies?

Please find the output from the basic command I am using:

>> python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py --input_model model.onnx --output_dir ./ --framework onnx
Model Optimizer arguments:
Common parameters:
 - Path to the Input Model:  onnx/tiny_yolov2/model.onnx
 - Path for generated IR:  onnx/tiny_yolov2/./
 - IR output name:  model
 - Log level:  ERROR
 - Batch:  Not specified, inherited from the model
 - Input layers:  Not specified, inherited from the model
 - Output layers:  Not specified, inherited from the model
 - Input shapes:  Not specified, inherited from the model
 - Mean values:  Not specified
 - Scale values:  Not specified
 - Scale factor:  Not specified
 - Precision of IR:  FP32
 - Enable fusing:  True
 - Enable grouped convolutions fusing:  True
 - Move mean values to preprocess section:  False
 - Reverse input channels:  False
ONNX specific parameters:
Model Optimizer version:  1.2.110.59f62983
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  FusedBatchNorm doesn't support is_test=False
[ ERROR ]  Cannot infer shapes or values for node "scalerPreprocessor".
[ ERROR ]  There is no registered "infer" function for node "scalerPreprocessor" with op = "ImageScaler". Please implement this function in the extensions.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #37.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "scalerPreprocessor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Thanks in advance,
Andres

Severine_H_Intel
Employee

Dear Andres, 

ONNX support is still in feature preview in R2, which explains the limited documentation about it.

Have you downloaded the models from our documentation? computer_vision_sdk_2018.2.299/deployment_tools/documentation/Intro.html

If not, and you have downloaded the models from the ONNX GitHub instead, make sure to use models with an opset below 7. Taking ResNet50 as an example, you will see a list like the one below, and you should pick one of the first two models. We do not yet support opset 7 and above. This is true for any ONNX model on the GitHub page.

Best, 

Severine
