Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

SSD model conversion problem

goncharova__mary
Beginner

Hi there, I ran into a common problem with converting an SSD TensorFlow Object Detection API model to OpenVINO. I was following the official instructions, but they did not help. I have tried every workaround described on the forum for this issue, but the error persists. The full output is below.

$ /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/mo_tf.py --input_model=detection_model.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config detection_config.config --input_shape=[1,300,300,3]
[ WARNING ]  Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /media/mgoncharova/Elements1/Mary/detection_model.pb
    - Path for generated IR:     /media/mgoncharova/Elements1/Mary/.
    - IR output name:     detection_model
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,300,300,3]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /media/mgoncharova/Elements1/Mary/detection_config.config
    - Use the config file:     /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json
Model Optimizer version:     2020.2.0-60-g0bc66e26ff
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/mgoncharova/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Shape is not defined for output 0 of "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice". 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Slice.infer at 0x7fecb1b1d268>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ]  Your model looks like TensorFlow Object Detection API Model.
Check if all parameters are specified:
    --tensorflow_use_custom_operations_config
    --tensorflow_object_detection_api_pipeline_config
    --input_shape (optional)
    --reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice" node. 
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38. 
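(Side note: judging by the deprecation warning at the top, --tensorflow_use_custom_operations_config has simply been renamed, so the same run can presumably also be written with --transformations_config, e.g.:

$ /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/mo_tf.py --input_model=detection_model.pb --transformations_config /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config detection_config.config --input_shape=[1,300,300,3] )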
 

I have tried changing the --tensorflow_use_custom_operations_config file to ssd_v2_support.json, but the error is still there:

[ ERROR ]  Shape is not defined for output 0 of "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice".
 

I have also seen the suggested fix of replacing "Postprocessor/Cast" with "Postprocessor/Cast_1" in the config, and I tried that as well. It still doesn't help.
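(For clarity, that edit goes in the "instances" section of the replacement .json. The exact contents vary between OpenVINO releases, so this is only roughly what the relevant part looks like from memory, with "Postprocessor/Cast" swapped for "Postprocessor/Cast_1" in start_points:

    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "match_kind": "points",
    "instances": {
        "start_points": [
            "Postprocessor/Cast_1",
            ...
        ],
        "end_points": [
            "detection_boxes",
            "detection_scores",
            "num_detections"
        ]
    }
)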

Using -b 1 instead of --input_shape doesn't work for me either, and neither does --reverse_input_channels.

The only things that let me convert the model are using ssd_support.json, or adding "Postprocessor/Slice" to the config, but then at the model-loading stage I get the following error:

"RuntimeError: Check '(data_shape.at(axis) == 1)' failed at /teamcity/work/scoring_engine_build/releases_2020_2/ngraph/src/ngraph/op/fused/squeeze.cpp:79: While validating node 'v0::Squeeze Squeeze_424(Reshape_422[0]:f32{1,1083,4}, Constant_423[0]:i64{1}) -> (dynamic?)': provided axis value is invalid. Only axes of size 1 may be removed."

It looks like much the same problem with a dynamic shape.

Please give me some advice...

Max_L_Intel
Moderator

Hi Mary.

I was able to convert your model using the ssd_support.json file instead of ssd_support_api_v1.14.json, so please try that on your end.
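For reference, the command is essentially your command with the config file substituted, i.e. something like:

$ /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/mo_tf.py --input_model=detection_model.pb --transformations_config /opt/intel/openvino_2020.2.130/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config detection_config.config --input_shape=[1,300,300,3]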

Hope this helps.
Best regards, Max.

goncharova__mary
Beginner

Hi @Max_L_Intel, it's TensorFlow version 1.3 and the model is SSD MobileNet V1. Thanks for replying.

Max_L_Intel
Moderator

Hi @goncharova__mary 

SSD MobileNet V1 should be supported; however, I think TensorFlow version 1.3 might be too old for Model Optimizer conversion.
We recommend using TensorFlow 1.14 or 1.15 for manually training a custom SSD topology and then using it with the OpenVINO toolkit.
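For reference, with TensorFlow 1.14/1.15 the frozen graph that Model Optimizer consumes is typically produced with the Object Detection API export script, roughly like this (checkpoint and output paths below are placeholders, not taken from your setup):

$ python object_detection/export_inference_graph.py \
      --input_type image_tensor \
      --pipeline_config_path detection_config.config \
      --trained_checkpoint_prefix model.ckpt-XXXX \
      --output_directory exported_model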
We are sorry for the inconvenience. 

Best regards, Max.

goncharova__mary
Beginner

Hi @Max_L_Intel, thanks for your fast reply. As I wrote before, I am able to convert my model with ssd_support.json too, but if you try to run it you will get an error at this line: net = ie.read_network(model=model_xml, weights=model_bin)

"RuntimeError: Check '(data_shape.at(axis) == 1)' failed at /teamcity/work/scoring_engine_build/releases_2020_2/ngraph/src/ngraph/op/fused/squeeze.cpp:79: While validating node 'v0::Squeeze Squeeze_424(Reshape_422[0]:f32{1,1083,4}, Constant_423[0]:i64{1}) -> (dynamic?)': provided axis value is invalid. Only axes of size 1 may be removed."

Max_L_Intel
Moderator

Hi @goncharova__mary 

Sorry, I missed that line. Indeed, I see the same error when trying to run your AI model.

It seems like your topology might not be officially supported by the OpenVINO toolkit. Which TensorFlow version did you use to train it? And what is the base SSD topology/model (MobileNet, ResNet, Inception, etc.)?

longhao1995
Beginner

TensorFlow version is 1.14, the network structure is MobileNetV2-SSD, and the OpenVINO version is 2019.3. The problem:
Model Optimizer version: 2019.3.0-408-gac8584cb7
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Cannot infer shapes or values for node "Postprocessor/Cast_1".
[ ERROR ] 0
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function Cast.infer at 0x000002275EBBFF28>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "Postprocessor/Cast_1" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

 

Max_L_Intel
Moderator

@longhao1995 

Please open a new thread in this community section.
Thanks.
