Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Unable to convert retrained mobilenet_ssd model to IR

Xia__Linmei
Beginner
I have trained the SSD-MobileNet-v1 model on my own dataset (TensorFlow GPU 1.13.1) and am trying to convert the frozen graph into Intermediate Representation (.xml/.bin). I run mo_tf.py with the following command:

***********
python mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ssd_v2_support.json --output="detection_boxes,detection_scores,num_detections" --tensorflow_object_detection_api_pipeline_config pipeline.config --data_type FP16
***********

The error trace is below:

***********
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\frozen_inference_graph.pb
	- Path for generated IR: 	C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	detection_boxes,detection_scores,num_detections
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\ssd_v2_support.json
Model Optimizer version: 	2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function .
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
***********

So how can I obtain the IR model from the retrained mobilenet_v1_ssd? I trained this model with TensorFlow GPU 1.13.1.
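The replacer error above usually means the node names listed in ssd_v2_support.json do not match the names in the retrained graph. As a quick sanity check, a rough sketch like the following can show whether the graph actually contains "Postprocessor/Cast" or "Postprocessor/Cast_1". Note the hedges: this does not parse the GraphDef (so no TensorFlow install is needed); it just scans the raw bytes of the .pb file for name-like strings, and `find_names` is a hypothetical helper written for this post, not part of any Intel tooling:

```python
# Heuristic scan of a frozen TensorFlow graph (.pb) for node-name strings.
# It does NOT decode the protobuf; it searches the raw bytes for the given
# needle and extends each hit over printable node-name characters, which is
# enough to see whether "Postprocessor/Cast" vs "Postprocessor/Cast_1" exists.
from pathlib import Path

# Characters that may legally appear in a TF node name.
NAME_CHARS = set("abcdefghijklmnopqrstuvwxyz"
                 "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                 "0123456789_/.-")

def find_names(pb_path, needle=b"Postprocessor/Cast"):
    data = Path(pb_path).read_bytes()
    hits, i = set(), data.find(needle)
    while i != -1:
        # Extend past the needle to the end of the node name.
        j = i + len(needle)
        while j < len(data) and chr(data[j]) in NAME_CHARS:
            j += 1
        hits.add(data[i:j].decode("ascii", errors="replace"))
        i = data.find(needle, i + 1)
    return sorted(hits)

# Example: print(find_names("frozen_inference_graph.pb"))
```

If the output lists "Postprocessor/Cast_1" (as the error message suggests), the .json replacement description must be edited to use that name.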
1 Solution
Luis_at_Intel
Moderator

Hi Xia, Linmei,

Looks like there is an issue with the --tensorflow_use_custom_operations_config parameter. Based on this thread, another user had a similar issue converting their custom-trained model. What they did was change line #57 of the ssd_support_api_v1.14.json file, found under the /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ directory, from "Postprocessor/Cast" to "Postprocessor/Cast_1".
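For context, the line being edited lives in the SSD postprocessor subgraph-replacement description inside that .json file. A rough sketch of its shape is below; the surrounding entries are illustrative (exact contents and line numbers vary between OpenVINO releases), and the relevant change is the last start point, switched here to "Postprocessor/Cast_1":

```json
{
    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "match_kind": "points",
    "instances": {
        "end_points": [
            "detection_boxes",
            "detection_scores",
            "num_detections"
        ],
        "start_points": [
            "Postprocessor/Shape",
            "Postprocessor/Cast_1"
        ]
    }
}
```

The start points name the nodes where Model Optimizer begins cutting the postprocessor subgraph out of the frozen graph, which is why they must match the graph's actual node names exactly.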

I was able to convert your model in R3.1 using this approach. Please give it a try and let me know if you have any issues.

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model files/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --output="detection_boxes,detection_scores,num_detections" --tensorflow_object_detection_api_pipeline_config files/pipeline.config --data_type FP16

 

Regards,

Luis

2 Replies
Xia__Linmei
Beginner

The appendix is my frozen graph file
