Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Provided OpenVINO frozen TensorFlow Object Detection SSD FPN model doesn't convert properly

Jung__Kenneth
Beginner

System Specifications:

OS: Ubuntu 16.04

CPU: Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz

OpenVINO Version: l_openvino_toolkit_p_2018.5.445

 

Issue Description:

I downloaded the "ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz" file from the link below on your website and attempted to run the Model Optimizer on its frozen graph .pb file.

https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow

I extracted the tar file and ran the following command from the "/opt/intel/computer_vision_sdk/deployment_tools" directory to convert it:

sudo python3 model_optimizer/mo_tf.py --input_shape [1,640,640,3] --input_model /path/to/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --log_level DEBUG --input image_tensor --tensorflow_object_detection_api_pipeline_config /path/to/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --tensorflow_use_custom_operations_config model_optimizer/extensions/front/tf/retinanet.json

 

I get the following error, which indicates that some dimensions of the model could not be inferred. Could you please try to reproduce this error and suggest how to fix it? Part of the error log is given below:

.
.
.

[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:40 ]  input[0]: shape = [], value = Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_1/Output_0/Data_
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:165 ]  Outputs:
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:40 ]  output[0]: shape = [], value = Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_1/Output_0/Data_
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:150 ]  --------------------
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:151 ]  Partial infer for Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayUnstack_1/TensorArrayScatter/TensorArrayScatterV3
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:152 ]  Op: TensorArrayScatterV3
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:163 ]  Inputs:
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:40 ]  input[0]: shape = [], value = Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_1/Output_0/Data_
[ 2019-01-26 01:06:08,681 ] [ DEBUG ] [ infer:40 ]  input[1]: shape = [1], value = [0]
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:40 ]  input[2]: shape = [    1 51150    90], value = <UNKNOWN>
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:40 ]  input[3]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:165 ]  Outputs:
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:40 ]  output[0]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:150 ]  --------------------
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:151 ]  Partial infer for Postprocessor/BatchMultiClassNonMaxSuppression/map/while/TensorArrayReadV3_1/Enter_1
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:152 ]  Op: Enter
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:163 ]  Inputs:
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:40 ]  input[0]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:165 ]  Outputs:
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:40 ]  output[0]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:150 ]  --------------------
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:151 ]  Partial infer for Postprocessor/BatchMultiClassNonMaxSuppression/map/while/TensorArrayReadV3_1
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:152 ]  Op: TensorArrayReadV3
[ 2019-01-26 01:06:08,682 ] [ DEBUG ] [ infer:163 ]  Inputs:
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[0]: shape = [], value = Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArray_1/Output_0/Data_
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[1]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[2]: shape = [], value = <UNKNOWN>
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:165 ]  Outputs:
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  output[0]: shape = [51150    90], value = <UNKNOWN>
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:150 ]  --------------------
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:151 ]  Partial infer for Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:152 ]  Op: Slice
[ WARNING ]  Incorrect slice operation: no starts or end attr
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:163 ]  Inputs:
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[0]: shape = [51150    90], value = <UNKNOWN>
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[1]: shape = [2], value = [0 0]
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:40 ]  input[2]: shape = [2], value = <UNKNOWN>
[ 2019-01-26 01:06:08,683 ] [ DEBUG ] [ infer:165 ]  Outputs:
[ 2019-01-26 01:06:08,684 ] [ DEBUG ] [ infer:40 ]  output[0]: shape = <UNKNOWN>, value = <UNKNOWN>
[ ERROR ]  Shape is not defined for output 0 of "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Slice.infer at 0x7f0591fb8378>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-01-26 01:06:08,687 ] [ DEBUG ] [ infer:215 ]  Node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" attributes: {'kind': 'op', 'type': 'Slice', 'pb': name: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1"
op: "Slice"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/TensorArrayReadV3_1"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1/begin"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/stack_1"
attr {
  key: "Index"
  value {
    type: DT_INT32
  }
}
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, 'shape_attrs': ['window', 'pad', 'stride', 'output_shape', 'shape'], 'is_partial_inferred': False, 'is_output_reachable': True, 'is_undead': False, 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7f0572ad7d90>), 'name', 'precision', 'type'], [('data', [], []), '@ports', '@consts'])], 'axis': None, 'end': None, 'infer': <function Slice.infer at 0x7f0591fb8378>, 'start': None, 'name': 'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1', 'precision': 'FP32', 'op': 'Slice', 'is_const_producer': False, 'dim_attrs': ['spatial_dims', 'axis', 'batch_dims', 'channel_dims']}
[ ERROR ]  Stopped shape/value propagation at "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 
[ 2019-01-26 01:06:08,687 ] [ DEBUG ] [ main:331 ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 187, in partial_infer
    node_name)
mo.utils.error.Error: Not all output shapes were inferred or fully defined for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 217, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

 

Mulcahy__Eoghan
Beginner

I have the exact same problem for Mask R-CNN ResNet 101 COCO.

Command Run

python /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo_tf.py --mean_values [103.94,116.78,123.68] --scale 1 --input_model frozen_inference_graph.pb -b 1 --tensorflow_object_detection_api_pipeline_config pipeline.config --offload_unsupported_operations_to_tf --log_level=DEBUG

Error Log

[ ERROR ]  Shape [ 1 -1 -1  3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "image_tensor".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "image_tensor".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x7f844a1f1510>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-02-24 19:20:09,492 ] [ DEBUG ] [ infer:215 ]  Node "image_tensor" attributes: {'pb': name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
, 'kind': 'op', 'name': 'image_tensor', 'op': 'Placeholder', 'precision': 'FP32', 'IE': [('layer', [('id', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1e18>), 'name', 'precision', 'type'], [('data', ['auto_pad', 'epsilon', 'min', 'max', ('axis', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1d90>), 'tiles', ('dim', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1730>), 'num_axes', ('pool-method', 'pool_method'), 'group', ('rounding-type', 'rounding_type'), ('exclude-pad', 'exclude_pad'), 'operation', 'out-size', 'power', 'shift', 'alpha', 'beta', 'coords', 'classes', 'num', ('local-size', 'local_size'), 'region', 'knorm', 'num_classes', 'keep_top_k', 'variance_encoded_in_target', 'code_type', 'share_location', 'nms_threshold', 'confidence_threshold', 'background_label_id', 'top_k', 'eta', 'visualize', 'visualize_threshold', 'save_file', 'output_directory', 'output_name_prefix', 'output_format', 'label_map_file', 'name_size_file', 'num_test_image', 'prob', 'resize_mode', 'height', 'width', 'height_scale', 'width_scale', 'pad_mode', 'pad_value', 'interp_mode', 'img_size', 'img_h', 'img_w', 'step', 'step_h', 'step_w', ('offset', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1c80>), 'variance', 'flip', 'clip', ('min_size', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f19d8>), ('max_size', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1950>), ('aspect_ratio', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1400>), 'decrease_label_id', 'normalized', 'scale_all_sizes', ('type', 'norm_type'), 'eps', 'across_spatial', 'channel_shared', 'negative_slope', 'engine', 'num_filter', ('type', 'sample_type'), ('order', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f1378>), 'pooled_h', 'pooled_w', 'spatial_scale', 'cls_threshold', 'max_num_proposals', 'iou_threshold', 'min_bbox_size', 'feat_stride', 'pre_nms_topn', 'post_nms_topn', ('type', <function update_ie_fields.<locals>.<lambda> at 0x7f844a1f12f0>), ('value', <function update_ie_fields.<locals>.<lambda> at 0x7f8450213158>), ('output', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156048>), ('input_nodes_names', <function update_ie_fields.<locals>.<lambda> at 0x7f84471560d0>), ('output_tensors_names', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156158>), ('real_input_dims', <function update_ie_fields.<locals>.<lambda> at 0x7f84471561e0>), ('protobuf', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156268>), {'custom_attributes': None}, ('strides', <function update_ie_fields.<locals>.<lambda> at 0x7f84471562f0>), ('kernel', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156378>), ('dilations', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156400>), ('pads_begin', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156488>), ('pads_end', <functionupdate_ie_fields.<locals>.<lambda> at 0x7f8447156510>), ('scale', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156598>), 'crop_width', 'crop_height', 'write_augmented','max_multiplier', 'augment_during_test', 'recompute_mean', 'write_mean', 'mean_per_pixel', 'mode', 'bottomwidth', 'bottomheight', 'chromatic_eigvec', 'kernel_size', 'max_displacement', 'stride_1', 'stride_2', 'single_direction', 'do_abs', 'correlation_type', 'antialias', 'resample_type', 'factor', 'coeff', ('ratio', <function update_ie_fields.<locals>.<lambda> at 0x7f8447156620>)], []), '@ports', '@consts'])], 'dim_attrs': ['spatial_dims', 'axis', 'channel_dims', 'batch_dims'], 
'shape_attrs': ['pad', 'output_shape', 'shape', 'window', 'stride'], 'is_input': True, 'is_output_reachable': True, 'is_undead': True, 'is_const_producer': False, 'infer': <function tf_placeholder_ext.<locals>.<lambda> at 0x7f844a1f1510>, 'data_type': <class 'numpy.uint8'>, 'shape': array([ 1, -1, -1,  3]), 'type': 'Input', 'permute_attrs': <mo.ops.op.PermuteAttrs object at 0x7f84471d1400>, 'is_partial_inferred':False}
[ ERROR ]  Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2019-02-24 19:20:09,492 ] [ DEBUG ] [ main:331 ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 187, in partial_infer
    node_name)
mo.utils.error.Error: Not all output shapes were inferred or fully defined for node "image_tensor".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 217, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow

http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet101_atrous_coco_2018_01_28.tar.gz

Shubha_R_Intel
Employee

Hi Eoghan. The -1 values in your input dimensions will cause the Model Optimizer to fail. Kenneth and Eoghan, please refer to the following documentation:

http://docs.openvinotoolkit.org/R5/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

I assume you downloaded the model from here?

https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md

But in the OpenVINO documentation I see this note:

NOTE: If you convert a TensorFlow* Object Detection API model to use with the Inference Engine sample applications, you must specify the --reverse_input_channels parameter also. 

Please try adding --reverse_input_channels to your command.
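
Kenneth, as a rough sketch only (I have not re-run this myself; the paths below are your own placeholders), the command with the flag added would look like:

sudo python3 model_optimizer/mo_tf.py --input_shape [1,640,640,3] --input image_tensor --input_model /path/to/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config /path/to/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --tensorflow_use_custom_operations_config model_optimizer/extensions/front/tf/retinanet.json --reverse_input_channels

Eoghan, because your error is about the undefined [-1, -1] spatial dimensions of "image_tensor", you would also need a fully defined --input_shape (for example --input_shape [1,800,800,3]; the exact height and width should match the image resizer in your pipeline.config) instead of only -b 1.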

 

 

Seib__Viktor
Beginner

Dear all,

I have the same problem as Kenneth and Eoghan, although with a slightly different network.

OS: Ubuntu 16.04
OpenVINO Version: l_openvino_toolkit_p_2018.5.445

Network from the model zoo: ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03

Command: python mo_tf.py --input_model <path1>/frozen_inference_graph.pb --input_shape [1,640,640,3] --tensorflow_use_custom_operations_config <path2>/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config <path1>/pipeline.config

The problem seems to affect all networks from the model zoo that use the FPN shared box predictor.

@Shubha R.: adding --reverse_input_channels does not help here. This flag only flips the input channels from BGR to RGB; it changes the order of the channels but does not affect the shape of the tensor.
The actual problem is that the Model Optimizer cannot infer the shape of some layers, even though the input shape is given as an argument (--input_shape [1,640,640,3]) and additionally inside the pipeline.config (--tensorflow_object_detection_api_pipeline_config pipeline.config).

Any advice / bugfix is highly appreciated.

Thank you very much in advance

Viktor

 

 

Seib__Viktor
Beginner

Hello again,

I managed to convert the network. The problem was that I was using the wrong "custom operations config". Since the network has "v1" in its name, I used "ssd_support.json". However, it works fine with "ssd_v2_support.json".
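
For reference, the command that worked is essentially the one from my previous post with only the custom operations config swapped (paths are placeholders):

python mo_tf.py --input_model <path1>/frozen_inference_graph.pb --input_shape [1,640,640,3] --tensorflow_use_custom_operations_config <path2>/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config <path1>/pipeline.config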

This solved my issue. @Kenneth and Eoghan: Maybe something similar will work for you as well? Try using another custom operations config.

Best

Viktor

Shubha_R_Intel
Employee

@Viktor, thanks for getting to the root of the problem.

Shubha

Ileni__Tudor
Beginner

Hello all,

I was running:

mo_tf.py --input_shape [1,224,224,3] --input_model deploy/frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support_api_v1.11.json --tensorflow_object_detection_api_pipeline_config deploy/pipeline.config --reverse_input_channels

and encountered the same problem:
.."Exception occurred during running replacer "ObjectDetectionAPIMaskRCNNROIPoolingSecondReplacement"...

The model config is similar to "object_detection/samples/configs/mask_rcnn_resnet101_atrous_coco.config" (I only changed the number of classes and the paths).

In my case, the problem was that I had an old version of the "models" repository, which was compatible with neither mask_rcnn_support_api_v1.11.json nor mask_rcnn_support_api_v1.7.json.

Solution:
-> clone the "models" repository
-> check out branch r1.13.0 (newer branches will probably work as well)
-> retrain the model using the "model_main.py" script with TensorFlow 1.12 (newer versions will probably work as well)
-> export the retrained model using "export_inference_graph.py"
-> convert it with mo_tf.py using the above command (a rough sketch of the export step is below)
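
A rough sketch of the export step (the checkpoint prefix and directories are placeholders from my setup, adjust them to yours):

python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path deploy/pipeline.config --trained_checkpoint_prefix <path_to_training_dir>/model.ckpt-<step> --output_directory deploy

After that, deploy/frozen_inference_graph.pb and deploy/pipeline.config can be passed to the mo_tf.py command from above.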

Shubha_R_Intel
Employee

Dear Tudor,

We thank you for sharing your findings with the OpenVINO community. Rock on with OpenVINO! Community sharing is super important.

Shubha
