Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot optimize retrained SSD-MobileNet-v2

Xu__Chenjie
Beginner
402 Views

I am optimizing a retrained model, ssd_mobilenet_v2_coco, from the TensorFlow Object Detection API, but I ran into the following problem.

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /tmp/fruits/ssd_mobilenet_v2_64_86242/model/frozen_inference_graph.pb
        - Path for generated IR:        /opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       True
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  /tmp/fruits/ssd_mobilenet_v2_64_86242/pipeline.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  /opt/intel/openvino_2019.3.376/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x7fbf8914b950>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.

I am using OpenVINO 2019 R3.1 and TensorFlow 1.14, with all dependencies installed. I can convert the original (non-retrained) model successfully.

I used this command:

./mo_tf.py --input_model /tmp/fruits/ssd_mobilenet_v2_64_86242/model/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /tmp/fruits/ssd_mobilenet_v2_64_86242/pipeline.config --reverse_input_channels

I also tried replacing "Postprocessor/ToFloat" with "Postprocessor/Cast" in ssd_v2_support.json.
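For reference, that edit can also be scripted. The sketch below patches the start points of the SSD postprocessor replacement; the JSON structure shown is a simplified stand-in for the real ssd_v2_support.json (which has more entries and fields), and the node names are the ones from the error above:

```python
import json

# Simplified stand-in for the custom replacement description in
# ssd_v2_support.json (the real file contains more entries and fields).
config = [
    {
        "id": "ObjectDetectionAPISSDPostprocessorReplacement",
        "instances": {
            "start_points": ["Postprocessor/Shape", "Postprocessor/ToFloat"],
            "end_points": ["detection_boxes", "detection_scores"],
        },
    }
]

# Graphs frozen with newer TensorFlow versions contain Cast nodes where
# older exports had ToFloat, so the stock start point no longer matches.
for entry in config:
    points = entry["instances"]["start_points"]
    entry["instances"]["start_points"] = [
        p.replace("Postprocessor/ToFloat", "Postprocessor/Cast") for p in points
    ]

print(json.dumps(config, indent=4))
```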

Could it be that a Postprocessor/Cast_1 node was added to the model and is not supported by the current version of OpenVINO?
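That reading is consistent with how the points-based matcher behaves: it looks up the configured start/end node names in the frozen graph, so a renamed node breaks the match. A minimal sketch of the mismatch (the graph node names here are assumptions based on the error above, not read from the actual .pb):

```python
# Node names assumed for illustration: a graph frozen with TensorFlow 1.14
# tends to contain Cast nodes where older exports had ToFloat.
graph_nodes = {
    "Postprocessor/Shape",
    "Postprocessor/Cast",
    "Postprocessor/Cast_1",
    "detection_boxes",
}

# Start points as listed in the stock ssd_v2_support.json replacement.
start_points = ["Postprocessor/Shape", "Postprocessor/ToFloat"]

# Any configured start point absent from the graph prevents the
# ObjectDetectionAPISSDPostprocessorReplacement sub-graph from matching.
missing = [p for p in start_points if p not in graph_nodes]
print("unmatched start points:", missing)
```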
