Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Model Optimizer: TensorFlow SSD_V2 conversion issue

PLICHET__Bruno
Beginner

Hello

I'm on Windows 10 with Python v3.6, TensorFlow v1.14, openvino_2019.3.379.

I'm using the GitHub repository "Tony607/object_detection_demo" with Colab to learn how to convert a TensorFlow graph with OpenVINO.

I'm trying to convert a frozen_inference_graph.pb with this command:

python mo_tf.py --input_model=frozen_inference_graph.pb --tensorflow_use_custom_operations_config ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco.config --input_shape [1,300,300,3] --data_type FP32

It works fine with a copy of the Tony607 model saved on GitHub.

But when I train my custom model, I get this issue:

C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer>python mo_tf.py --input_model=frozen_inference_graph.pb --tensorflow_use_custom_operations_config ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco.config --input_shape [1,300,300,3] --data_type FP32
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\frozen_inference_graph.pb
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [1,300,300,3]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Path to model dump for TensorBoard:   None
        - List of shared libraries with TensorFlow custom layers implementation:        None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\ssd_mobilenet_v2_coco.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\ssd_v2_support.json
Model Optimizer version:        2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ]  0
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Cast.infer at 0x0000011987E98840>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

I tried using "ssd_support_api_v1.14.json" with and without the "Postprocessor/Cast_1" modification -> same issue.

I think the model is different, but why?

Can you help me, please?

7 Replies
Cary_P_Intel1
Employee

Hi, Bruno,

Have you changed any layers inside the SSD model? Your error message says "Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.", which means that the Model Optimizer is not able to find the matching pattern described in "ssd_v2_support.json". Since you said the pre-trained model works well with the Model Optimizer conversion, I guess you must have changed something.

If so, please refer to the online document https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html, section "SSD/Postprocessor Block", to understand the meaning of the settings inside the JSON file. You can also inspect your model with TensorBoard to adjust the configuration inside the JSON file properly.
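To make the TensorBoard step concrete, here is a minimal sketch of my own (assuming TensorFlow 1.x, e.g. v1.14, and that the frozen model file is named frozen_inference_graph.pb; both are assumptions) that dumps the frozen graph so the Postprocessor node names can be browsed in TensorBoard:

# Minimal sketch, assuming TensorFlow 1.x and a frozen graph named
# frozen_inference_graph.pb: dump the graph so TensorBoard can display the
# Postprocessor node names referenced by ssd_v2_support.json.
import tensorflow as tf

GRAPH_PB = "frozen_inference_graph.pb"  # path to the frozen model (assumption)
LOG_DIR = "tb_logdir"                   # any output directory for TensorBoard

# Load the serialized GraphDef from the .pb file.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and write the graph definition for TensorBoard.
with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
    writer = tf.compat.v1.summary.FileWriter(LOG_DIR, graph)
    writer.close()

After that, running "tensorboard --logdir tb_logdir" shows the graph, and the node names around the Postprocessor block can be compared with the start/end points listed in ssd_v2_support.json.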

PLICHET__Bruno
Beginner

Hi,

I haven't changed the model. I tried with the original notebook from the GitHub "Tony607/object_detection_demo" with the same issue. Strange...

I verified the model was not changed; it is http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

I tried to understand this issue with the --log_level DEBUG parameter, but it's very complex for me; I don't use TensorBoard very well.

Copy of the error in DEBUG:
"ObjectDetectionAPISSDPostprocessorReplacement".
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names.

Do you have a suggestion?

Cary_P_Intel1
Employee

Hi, Bruno,

As I said, the error message you provided tells you that the Python script tries to find a specific pattern that the SSD model should have inside the graph, but it couldn't find it, so it throws the error. Have you checked whether the model works well with TensorFlow itself? And if possible, please share the model you trained; otherwise it's hard to verify where the problem is.
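For the "works with TensorFlow itself" check, a quick sanity test could look like this minimal sketch (my own example, assuming TensorFlow 1.x and the standard Object Detection API tensor names image_tensor, detection_boxes and detection_scores; those names are assumptions about the exported graph):

# Minimal sketch, assuming TensorFlow 1.x and the usual Object Detection API
# tensor names: load the frozen graph and run it once on a dummy 300x300 image.
import numpy as np
import tensorflow as tf

GRAPH_PB = "frozen_inference_graph.pb"  # path to the frozen model (assumption)

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
    with tf.compat.v1.Session(graph=graph) as sess:
        dummy = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # blank test image
        boxes, scores = sess.run(
            ["detection_boxes:0", "detection_scores:0"],
            feed_dict={"image_tensor:0": dummy},
        )
        print(boxes.shape, scores.shape)

If this runs without errors, the graph itself is loadable and executable with TensorFlow, and the problem is more likely in the Model Optimizer replacement configuration.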

PLICHET__Bruno
Beginner

Hi,

The graph runs fine with TensorFlow.

You can find the graph in the attached file.

Thanks for your time.

Cary_P_Intel1
Employee

Hi, Bruno,

I tried to convert your model with the Model Optimizer and to export the TensorBoard file, and I found out that your model was trained with TensorFlow 2.0, which is not yet supported by OpenVINO and thus causes the error.

Here is the error message I encountered while trying to export the TensorBoard file:

[ ERROR ]  Op NonMaxSuppressionV5 is used by the graph, but is not registered
Cannot write an event file for the tensorboard

NonMaxSuppressionV5 is available in TensorFlow 2.0.
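One way to spot such ops before running the Model Optimizer is to list the op types used in the frozen graph, for example with a small sketch like this (my own example, assuming TensorFlow 1.x and the file name frozen_inference_graph.pb):

# Minimal sketch, assuming TensorFlow 1.x and a frozen graph named
# frozen_inference_graph.pb: print the op types used in the graph so newer ops
# such as NonMaxSuppressionV5 can be spotted before conversion.
import tensorflow as tf

GRAPH_PB = "frozen_inference_graph.pb"  # path to the frozen model (assumption)

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PB, "rb") as f:
    graph_def.ParseFromString(f.read())

for op in sorted({node.op for node in graph_def.node}):
    print(op)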

Xu__Chenjie
Beginner

I also met the same problem, and I posted my issue here.

I trained the TensorFlow Object Detection API model with TF 1 because the API does not support TF 2.

PLICHET__Bruno
Beginner

Hi,

Today I verified the TensorFlow version in Google Colab = v1.15.

I changed the version to v1.14 and retrained the model, but I have the same issue :(

It's possible we make a mistake when exporting the trained inference graph in Colab.

Have a good day.
