Not able to Optimize re-trained SSD-MobileNet-v2

I have re-trained the SSD-MobileNet-v2 model on my custom dataset with tensorflow-gpu 1.11.* and Object Detection API v1.10.*
I'm unable to convert the obtained frozen graph to Intermediate Representation (.xml, .bin). The following is the error trace when I run:

python3 mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /frozen_inference_graph.pb
    - Path for generated IR:     /opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/.
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json
Model Optimizer version:     2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x7f7b87bc0048>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (, question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (, question #38.

i) I have appended "Postprocessor/Cast_1" to the start_points in "ssd_support_api_v1.14.json", yet I still receive this error.
ii) I was able to successfully convert the "frozen_graph.pb" file from the pre-trained SSD-MobileNet-v2 saved model available from the official TensorFlow repo.
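For context, the sub-graph replacement entry I edited is the one with id "ObjectDetectionAPISSDPostprocessorReplacement". Its "instances" section now looks roughly like this (node names other than the appended "Postprocessor/Cast_1" are from the stock file and may differ between versions, so treat this as illustrative):

```json
{
    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "include_inputs_to_sub_graph": true,
    "include_outputs_to_sub_graph": true,
    "instances": {
        "end_points": [
            "detection_boxes",
            "detection_scores",
            "num_detections"
        ],
        "start_points": [
            "Postprocessor/Shape",
            "Postprocessor/scale_logits",
            "Postprocessor/Tile",
            "Postprocessor/Reshape_1",
            "Postprocessor/Cast_1"
        ]
    },
    "match_kind": "points"
}
```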

Is there a particular commit of the object_detection repository on which we should train? I have successfully trained on the latest Object Detection API (2019) with tensorflow 1.15 and 1.11.
Both of the resulting frozen graphs produced the above error message.

Please suggest a workaround for this error, or point me to the particular version/commit of the object_detection API with which a re-trained SSD-MobileNet-v2 model can be successfully optimized.


Hi Siddharth,

Thanks for reaching out. I see you are trying to convert an SSD-MobileNet-V2 model to IR. Could you try using ssd_v2_support.json instead of ssd_support_api_v1.14.json and see if that works?
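For reference, the invocation would look something like this (a sketch only; the paths assume the OpenVINO 2019.3 layout shown in your log, so adjust them to your installation):

```shell
# Sketch -- MO_DIR assumes the install path from your error trace.
MO_DIR=/opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer

# Same arguments as before; only the custom operations config file changes.
python3 "$MO_DIR/mo_tf.py" \
    --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --tensorflow_use_custom_operations_config "$MO_DIR/extensions/front/tf/ssd_v2_support.json"
```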


I can also attempt to convert your re-trained model and see if I can find a solution. If possible, share your model files so we can try; you can share them privately in case you don't want to post them publicly.