Manisha_B_
Innovator
247 Views

Getting OpenVINO Model Optimizer error when converting a model trained from ssd_mobilenet_v2_coco_2018_03_29

Hi, 

I have a pretrained model developed using the TensorFlow framework. When I try to use the Model Optimizer, I get the error below. Please review the error message and let me know how to resolve this issue.

C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer>python mo_tf.py --input_model "C:\tensorflow2\models\research\object_detection\inference_graph\frozen_inference_graph.pb" --tensorflow_use_custom_operations_config "C:\Users\manisha\Desktop\ssd\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config "C:\tensorflow2\models\research\object_detection\inference_graph\pipeline.config" 
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\tensorflow2\models\research\object_detection\inference_graph\frozen_inference_graph.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        1
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\tensorflow2\models\research\object_detection\inference_graph\pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Users\manisha\Desktop\ssd\ssd_v2_support.json
Model Optimizer version:        2019.1.1-83-g28dfbfd
[ WARNING ]
Detected not satisfied dependencies:
        tensorflow: installed: 1.12.0, required: 1.12
        test-generator: installed: 0.1.2, required: 0.1.1

Please install required versions of components or use install_prerequisites script
C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\install_prerequisites\install_prerequisites_tf.bat
Note that install_prerequisites scripts may install additional components.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast".
[ ERROR ]  0
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x00000169975F5950>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/Cast" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Can anybody please help?

 


Hi Manisha,

 

Please try running the install_dependencies script to install all OpenVINO dependencies.

 

To convert this model, try passing the --input_shape [n,h,w,c] argument to your current command line.
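For example, here is a sketch of assembling that command line. It assumes the model's input files sit in the current directory and that the shape is the standard 300x300x3 input of ssd_mobilenet_v2_coco_2018_03_29; adjust both to your setup:

```python
# Sketch only: assemble the mo_tf.py command line with an explicit NHWC
# input shape. ssd_mobilenet_v2_coco_2018_03_29 is trained at 300x300x3,
# so [n, h, w, c] = [1, 300, 300, 3]. File names are assumptions; adjust
# them to where your frozen graph, JSON config, and pipeline.config live.
input_shape = [1, 300, 300, 3]  # [batch, height, width, channels]
cmd = [
    "python", "mo_tf.py",
    "--input_model", "frozen_inference_graph.pb",
    "--tensorflow_use_custom_operations_config", "ssd_v2_support.json",
    "--tensorflow_object_detection_api_pipeline_config", "pipeline.config",
    "--input_shape", "[" + ",".join(str(d) for d in input_shape) + "]",
]
print(" ".join(cmd))
```

Run the printed command from the deployment_tools\model_optimizer directory (or give full paths to mo_tf.py and the input files, as in your original command).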

-Iqbal

Shubha_R_Intel
Employee

Dear Manisha B,

It looks like you're using the latest and greatest OpenVINO 2019 R1.1 release, so that is good. But I just tried the conversion and it worked fine for me (see the command below):

C:\Users\sdramani\Downloads\ssd_mobilenet_v2_coco_2018_03_29.tar\ssd_mobilenet_v2_coco_2018_03_29\ssd_mobilenet_v2_coco_2018_03_29>python "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config pipeline.config

Did you run all the prerequisite scripts under deployment_tools\model_optimizer\install_prerequisites? I ask because I see this message

Please install required versions of components or use install_prerequisites script

in your output, and this is not normal (I don't see that message).

Thanks,

Shubha

 

WDomo
Beginner

Hi Shubha,

 

I am trying to install OpenVINO R1.144 under Ubuntu 18.04.

When I try to run the SqueezeNet demo I get:

Model Optimizer version:     2019.1.1-83-g28dfbfd
[ ERROR ]  
Detected not satisfied dependencies:
    test-generator: not installed, required: 0.1.1

Please install required versions of components or use install_prerequisites script
I have tried running the install_prerequisites script three times, but it did not help.
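In case it helps debugging: judging by the log output, the dependency check appears to be a plain version-string comparison against pinned versions. A rough sketch of reproducing that check yourself (my own approximation using importlib.metadata, not the actual Model Optimizer source):

```python
# Rough approximation of the Model Optimizer dependency check (assumed
# logic, not the real MO code): compare installed package versions
# against the pinned required versions and report any mismatch.
from importlib import metadata

required = {"tensorflow": "1.12", "test-generator": "0.1.1"}
report = []
for name, want in required.items():
    try:
        have = metadata.version(name)
    except metadata.PackageNotFoundError:
        have = None
    if have != want:  # any mismatch is flagged, even 1.12.0 vs 1.12
        report.append(
            f"{name}: installed: {have or 'not installed'}, required: {want}"
        )

for line in report:
    print(line)
```

If this reports test-generator as not installed, installing the exact pin (pip install test-generator==0.1.1) should clear it; note that if pip installs into a different Python environment than the one running the demo script, the check will still fail.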

 

Any ideas?

Kind regards

Frank

 
