Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Converting TensorFlow SSD MobileNet V1 FPN COCO into OpenVINO IR format.

Shelke__Sagar
Beginner
1,055 Views

Hi,

I have been trying to convert the SSD MobileNet V1 FPN COCO model into OpenVINO IR format, but I am facing an error. I think it is because I am not using the correct sub-graph replacement file, which is necessary for SSD models.

I am running the following command; the custom operations file I am using is ssd_support.json:

python3 ./mo_tf.py \
    --input_model ~/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb \
    --data_type FP32 \
    --output_dir ~/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino \
    --tensorflow_object_detection_api_pipeline_config ~/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config \
    --input image_tensor \
    --input_shape [1,640,640,3] \
    --tensorflow_use_custom_operations_config ./extensions/front/tf/ssd_support.json

And the error I am getting is as follows:

[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/ToFloat".
[ ERROR ]  Node 'Postprocessor/ToFloat': Unknown input node 'Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3'
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7fd9dbc11b70>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "Postprocessor/ToFloat" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Do I need to use any other custom config file? I downloaded the model from the TensorFlow model zoo and haven't made any changes to it.

4 Replies
Shubha_R_Intel
Employee

Dear Sagar, here is a list of supported SSD models; do a search on "SSD". Did you download your model from one of these sources? (It looks like you did.) Can you kindly re-run your command with --log_level=DEBUG?

https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow
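A sketch of the re-run, reusing the paths from your original command (run it from the Model Optimizer directory as before). Note that ssd_v2_support.json here is an assumption on my part: models exported with newer releases of the TF Object Detection API, such as this 2018_07_03 checkpoint, typically need the v2 replacement config rather than ssd_support.json, and the "Unknown input node 'Preprocessor/...'" error is a common symptom of using the wrong one.

```shell
# Sketch only: the same conversion re-run with --log_level=DEBUG.
# ssd_v2_support.json is an assumption (see above); everything else is
# taken verbatim from the original command.
MODEL_DIR=~/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03
MO_CMD="python3 ./mo_tf.py \
--input_model $MODEL_DIR/frozen_inference_graph.pb \
--data_type FP32 \
--output_dir $MODEL_DIR/openvino \
--tensorflow_object_detection_api_pipeline_config $MODEL_DIR/pipeline.config \
--input image_tensor \
--input_shape [1,640,640,3] \
--tensorflow_use_custom_operations_config ./extensions/front/tf/ssd_v2_support.json \
--log_level=DEBUG"
echo "$MO_CMD"   # inspect the command first, then run it with: eval "$MO_CMD"
```

The DEBUG log is long; search it for the last node printed before the failure to see which layer's inferred shape does not match.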

From reading the OpenVINO documentation:

The model is not reshape-able, meaning that it's not possible to change the size of the model input image. For example, SSD FPN models have Reshape operations with hard-coded output shapes, but the input size to these Reshape instances depends on the input image size. In this case, the Model Optimizer shows an error during the shape inference phase. Run the Model Optimizer with --log_level DEBUG to see the inferred layers' output shapes and spot the mismatch.

This is what you may be experiencing. There is no way to tell until you re-run with --log_level DEBUG.
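The hard-coded Reshape issue described above can be sketched with plain shell arithmetic. This is a toy stand-in, not the actual graph operation, and the grid sizes are illustrative assumptions: the frozen graph bakes in the flattened box count for the training input size, so any other input size fails shape inference.

```shell
# Toy stand-in for an SSD FPN Reshape with a hard-coded output shape.
# Suppose a 640x640 input gives an 80x80 grid at one FPN level, so the
# frozen graph bakes in 80*80 = 6400 flattened boxes (illustrative numbers).
hard_coded=$((80 * 80))

check_reshape() {  # args: grid height, grid width
  if [ $(( $1 * $2 )) -eq "$hard_coded" ]; then
    echo "ok: ${1}x${2} grid matches $hard_coded boxes"
  else
    echo "mismatch: cannot reshape ${1}x${2} grid into $hard_coded boxes"
  fi
}

check_reshape 80 80     # the trained 640x640 input size: shapes agree
check_reshape 100 100   # a different input size: shape inference would fail here
```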

Thanks,

Shubha

Shubha_R_Intel
Employee

Sagar, please peruse this forum thread, where I showed how to successfully create IR from an SSD MobileNet model:

https://software.intel.com/en-us/forums/computer-vision/topic/805387

Shelke__Sagar
Beginner

Thanks, Shubha. I was able to solve the problem.

Shubha_R_Intel
Employee

Dear Sagar,

Glad to hear that the problem was solved.

Shubha
