Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model conversion error for MobileNet SSD v1 (COCO), MobileNet SSD v2 (COCO), MobileNet SSD v2 (Faces)

Bench__Andriy
Beginner
963 Views

I tried to convert the MobileNet SSD v1 (COCO) network (http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18.tar.gz)
from TF to Inference Engine format. The following commands were used (Win10, OpenVINO 2019 R1):

call c:\IntelSWTools\openvino\bin\setupvars.bat
python c:\IntelSWTools\openvino\deployment_tools\model_optimizer\mo.py --input_model tflite_graph.pb  --tensorflow_use_custom_operations_config c:\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\ssd_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config --input_shape [1,300,300,3] --data_type FP16

During conversion, the following error message was shown:

Model Optimizer version: 2019.1.0-341-gc9b66a2
[ ERROR ] Cannot infer shapes or values for node "TFLite_Detection_PostProcess".
[ ERROR ] Op type not registered 'TFLite_Detection_PostProcess' in binary running on DESKTOP-816AP20. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x000002D007A9B598>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "TFLite_Detection_PostProcess" node.
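(As a side note, the op that Model Optimizer stops on can be confirmed by listing the node op types inside the frozen graph. The sketch below is a minimal, TensorFlow-free way to do that: it hand-parses the GraphDef protobuf wire format, where field 1 of GraphDef is a repeated NodeDef and field 2 of NodeDef is the op name. This is an illustration only; with TensorFlow installed, iterating `graph_def.node` is the usual route.)

```python
def read_varint(buf, i):
    """Decode a protobuf varint starting at index i; return (value, next_index)."""
    result, shift = 0, 0
    while True:
        b = buf[i]
        result |= (b & 0x7F) << shift
        i += 1
        if not (b & 0x80):
            return result, i
        shift += 7

def iter_fields(buf):
    """Yield (field_number, wire_type, payload) for one protobuf message."""
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire = key >> 3, key & 7
        if wire == 0:                      # varint
            val, i = read_varint(buf, i)
            yield field, wire, val
        elif wire == 2:                    # length-delimited (strings, sub-messages)
            length, i = read_varint(buf, i)
            yield field, wire, buf[i:i + length]
            i += length
        elif wire == 5:                    # 32-bit scalar
            yield field, wire, buf[i:i + 4]
            i += 4
        elif wire == 1:                    # 64-bit scalar
            yield field, wire, buf[i:i + 8]
            i += 8
        else:                              # unsupported wire type: stop parsing
            break

def graph_ops(graph_def_bytes):
    """Collect the set of op type names from a serialized TensorFlow GraphDef.

    GraphDef field 1 = repeated NodeDef; NodeDef field 2 = op name.
    """
    ops = set()
    for field, wire, payload in iter_fields(graph_def_bytes):
        if field == 1 and wire == 2:       # one NodeDef
            for f2, w2, p2 in iter_fields(payload):
                if f2 == 2 and w2 == 2:    # the op name string
                    ops.add(p2.decode("utf-8"))
    return ops
```

Running `graph_ops(open("tflite_graph.pb", "rb").read())` and checking for `"TFLite_Detection_PostProcess"` in the result would show whether the frozen graph contains the op that Model Optimizer cannot infer.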

The same errors occurred during conversion of the other networks:
MobileNet SSD v2 (COCO) - http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz
MobileNet SSD v2 (Faces) - http://download.tensorflow.org/models/object_detection/facessd_mobilenet_v2_quantized_320x320_open_image_v4.tar.gz

How can the networks mentioned above be converted for use on Movidius?

3 Replies
Shubha_R_Intel
Employee

Dear Bench, Andriy,

Your title says ssd_v2 coco but your example is ssd_v1. In any case, I had no problem with ssd_mobilenet_v2_coco. Please see the command below (I downloaded the model using model_downloader.py):

c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer>python mo_tf.py  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" --tensorflow_use_custom_operations_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" --data_type FP16 --log_level DEBUG

Kindly study this document:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

Thanks,

Shubha

 

Jiang__Hao
Beginner

Shubha R. (Intel) wrote:

Dear Bench, Andriy,

Your title says ssd_v2 coco but your example is ssd_v1. In any case, I had no problem with ssd_mobilenet_v2_coco. Please see the command below (I downloaded the model using model_downloader.py):

c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer>python mo_tf.py  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb" --tensorflow_use_custom_operations_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config  "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config" --data_type FP16 --log_level DEBUG

Kindly study this document:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

Thanks,

Shubha

 

But how do I convert SSD_MobileNet_V1_COCO?

Shubha_R_Intel
Employee

Dear Jiang, Hao,

According to this list we definitely support SSD_MobileNet_V1_COCO. The command should be very similar to the one above, except that you may need to use a different *.json and a different *.config file.
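For illustration only, a v1 invocation might look like the sketch below. The paths mirror the ssd_mobilenet_v2_coco command earlier in the thread but are assumptions, not verified; the right *.json (ssd_support.json vs. ssd_v2_support.json) depends on which TensorFlow version froze the graph. Note that the working command points at a frozen inference graph (*.frozen.pb), not at a quantized tflite_graph.pb export.

```shell
:: Sketch only: assumed model_downloader paths, mirroring the v2 command above.
cd "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer"
python mo_tf.py ^
  --input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v1_coco\tf\ssd_mobilenet_v1_coco.frozen.pb" ^
  --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" ^
  --tensorflow_object_detection_api_pipeline_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v1_coco\tf\ssd_mobilenet_v1_coco.config" ^
  --data_type FP16
```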

Shubha
