
Facing issue while converting TINYYOLO V2 ONNX model to IR

Hi, 

I am trying to convert the Tiny YOLO v2 ONNX model to Intel IR using the Model Optimizer, but I am seeing a few errors.

 

Command used:

python mo.py --input_model c:\OpenVino\OpenVino_Dependencies\ONNX\TinyYoLo\Tiny_YOLO_V2_model_fp16.onnx


Output:
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      c:\OpenVino\OpenVino_Dependencies\ONNX\TinyYoLo\Tiny_YOLO_V2_model_fp16.onnx
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
        - IR output name:       Tiny_YOLO_V2_model_fp16
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
ONNX specific parameters:
Model Optimizer version:        2019.3.0-408-gac8584cb7
[ ERROR ]  Shape [ -1   3 416 416] is not fully defined for output 0 of "scalerPreprocessor/mul_". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "scalerPreprocessor/mul_".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "scalerPreprocessor/mul_".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Elementwise.__init__.<locals>.<lambda> at 0x000001619DAF1288>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Not all output shapes were inferred or fully defined for node "scalerPreprocessor/mul_".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
Stopped shape/value propagation at "scalerPreprocessor/mul_" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Not all output shapes were inferred or fully defined for node "scalerPreprocessor/mul_".
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
Stopped shape/value propagation at "scalerPreprocessor/mul_" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Accepted Solutions

Hi Ajay,

Try using the --input_shape parameter with positive integers.

Refer to the Model Optimizer FAQ (question 40).

Best Regards,

Surya
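
The error log above reports the input shape as [ -1 3 416 416 ], i.e. a dynamic batch dimension that the Model Optimizer cannot resolve. A minimal sketch of the corrected command, assuming the 1x3x416x416 input reported in the log (the batch of 1 is an assumption; the model path is taken from the original post):

```shell
# Override the dynamic batch dimension (-1) with a fixed batch of 1,
# keeping the channel and spatial sizes reported in the error log.
python mo.py ^
    --input_model c:\OpenVino\OpenVino_Dependencies\ONNX\TinyYoLo\Tiny_YOLO_V2_model_fp16.onnx ^
    --input_shape [1,3,416,416]
```

If your model was exported with a different resolution, use that instead; the shape must match what the network actually expects.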


Thanks Surya. The .xml and .bin files were generated after adding --input_shape (1,3,227,227).
