Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error converting Custom Model to IR

Singh__Aditya
Beginner

Hi,

I am trying to convert a modified SSD_v2 model to IR. I have added a few branches to the original SSD_v2 code, and I have modified /path/to/tf/ssd_v2_support.json with respect to the start_points and end_points for ObjectDetectionAPISSDPostprocessorReplacement. However, it seems I am missing a start_point in the custom_operations_config file.
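
For reference, the relevant entry in the stock ssd_v2_support.json (2019 R1) has roughly the shape below. This is a sketch rather than my exact file: the start_points node names vary with the TensorFlow version that exported the model, and my modified copy changes these lists for the added branches.

{
    "custom_attributes": {
        "code_type": "caffe.PriorBoxParameter.CENTER_SIZE",
        "pad_mode": "caffe.ResizeParameter.CONSTANT",
        "resize_mode": "caffe.ResizeParameter.WARP"
    },
    "id": "ObjectDetectionAPISSDPostprocessorReplacement",
    "include_inputs_to_sub_graph": true,
    "include_outputs_to_sub_graph": true,
    "instances": {
        "end_points": [
            "detection_boxes",
            "detection_scores",
            "num_detections"
        ],
        "start_points": [
            "Postprocessor/Shape",
            "Postprocessor/scale_logits",
            "Postprocessor/Tile",
            "Postprocessor/Reshape_1",
            "Postprocessor/ToFloat"
        ]
    },
    "match_kind": "points"
}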

I use the following command:

python3 ./mo_tf.py --input_model /some_path/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config /some_path/pipeline.config --tensorflow_use_custom_operations_config /somepath/extensions/front/tf/ssdtr_v2_support.json --reverse_input_channels --output_dir output_dir --log_level DEBUG
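
In case it is useful, here is a minimal sketch of how I am listing candidate node names from the frozen graph while hunting for the missing start point (TF 1.x GraphDef API; the name filters simply match the post-processor scope and my added TrackidPredictor branch):

import tensorflow as tf

# Load the frozen graph definition (TF 1.x; tf.compat.v1.GraphDef on TF 2.x).
graph_def = tf.GraphDef()
with open('/some_path/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print the nodes around the SSD post-processor and the custom branch so the
# correct start_points/end_points can be picked for the replacement config.
for node in graph_def.node:
    if node.name.startswith('Postprocessor/') or 'TrackidPredictor' in node.name:
        print(node.op.ljust(24), node.name)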

Apologies for the terse description; do let me know if any further information is required and I'll be happy to furnish it.

I'd really appreciate some help in sorting out the issue. Thanks. 

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /path/to/frozen_inference_graph.pb
    - Path for generated IR:     /path/to/output/
    - IR output name:     frozen_inference_graph
    - Log level:     DEBUG
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     True
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /path/to/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     ../extensions/front/tf/ssdtr_v2_support.json
Model Optimizer version:     2019.1.1-83-g28dfbfd
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ WARNING ]  Locations and prior boxes shapes mismatch: "[      1 1638912]" vs "[    1     2 51216]"
[ 2019-07-02 15:53:05,835 ] [ DEBUG ] [ weights:39 ]  Swapping weights for node "WeightSharedConvolutionalBoxPredictor/TrackidPredictor/Conv2D"
[ 2019-07-02 15:53:05,835 ] [ DEBUG ] [ weights:36 ]  Increasing value for attribute "swap_xy_count" to 2 for node WeightSharedConvolutionalBoxPredictor/TrackidPredictor/weights/read/Output_0/Data_
[ 2019-07-02 15:53:05,835 ] [ DEBUG ] [ weights:36 ]  Increasing value for attribute "swap_xy_count" to 3 for node WeightSharedConvolutionalBoxPredictor/TrackidPredictor/weights/read/Output_0/Data_
[ 2019-07-02 15:53:05,835 ] [ DEBUG ] [ weights:36 ]  Increasing value for attribute "swap_xy_count" to 4 for node WeightSharedConvolutionalBoxPredictor/TrackidPredictor/weights/read/Output_0/Data_
[ 2019-07-02 15:53:05,835 ] [ DEBUG ] [ weights:36 ]  Increasing value for attribute "swap_xy_count" to 5 for node WeightSharedConvolutionalBoxPredictor/TrackidPredictor/weights/read/Output_0/Data_
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:140 ]  Inputs:
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:34 ]  input[0]: shape = [      1 1638912], value = <UNKNOWN>
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:34 ]  input[1]: shape = [    1 51216], value = <UNKNOWN>
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:34 ]  input[2]: shape = [    1     2 51216], value = <UNKNOWN>
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:142 ]  Outputs:
[ 2019-07-02 15:53:05,843 ] [ DEBUG ] [ infer:34 ]  output[0]: shape = <UNKNOWN>, value = <UNKNOWN>
[ ERROR ]  Shape is not defined for output 0 of "DetectionOutput".
[ ERROR ]  Cannot infer shapes or values for node "DetectionOutput".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "DetectionOutput". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function ObjectDetectionAPISSDPostprocessorReplacement.do_infer at 0x7fc431b47840>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-07-02 15:53:05,844 ] [ DEBUG ] [ infer:194 ]  Node "DetectionOutput" attributes: {'precision': 'FP32', 'kind': 'op', 'type': 'DetectionOutput', 'op': 'DetectionOutput', 'in_ports_count': 3, 'out_ports_count': 1, 'infer': <function ObjectDetectionAPISSDPostprocessorReplacement.do_infer at 0x7fc431b47840>, 'input_width': 1, 'input_height': 1, 'normalized': 1, 'share_location': 1, 'variance_encoded_in_target': 0, 'code_type': 'caffe.PriorBoxParameter.CENTER_SIZE', 'pad_mode': 'caffe.ResizeParameter.CONSTANT', 'resize_mode': 'caffe.ResizeParameter.WARP', 'old_infer': <function multi_box_detection_infer at 0x7fc431c7abf8>, 'name': 'DetectionOutput', 'clip': 1, 'confidence_threshold': 9.99999993922529e-09, 'top_k': 100, 'keep_top_k': 100, 'nms_threshold': 0.6000000238418579, 'dim_attrs': ['axis', 'spatial_dims', 'batch_dims', 'channel_dims'], 'shape_attrs': ['window', 'pad', 'output_shape', 'shape', 'stride'], 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7fc42f4b8bf8>), 'name', 'precision', 'type'], [('data', ['background_label_id', 'clip_after_nms', 'clip_before_nms', 'code_type', 'confidence_threshold', 'eta', 'height', 'height_scale', 'input_height', 'input_width', 'interp_mode', 'keep_top_k', 'label_map_file', 'name_size_file', 'nms_threshold', 'normalized', 'num_classes', 'num_test_image', 'output_directory', 'output_format', 'output_name_prefix', 'pad_mode', 'pad_value', 'prob', 'resize_mode', 'save_file', 'share_location', 'top_k', 'variance_encoded_in_target', 'visualize', 'visualize_threshold', 'width', 'width_scale', 'objectness_score'], []), '@ports', '@consts'])], '_in_ports': {0, 1, 2}, '_out_ports': {0}, 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False, 'num_classes': 4}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "DetectionOutput" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 
[ 2019-07-02 15:53:05,845 ] [ DEBUG ] [ main:318 ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 166, in partial_infer
    node_name)
mo.utils.error.Error: Not all output shapes were inferred or fully defined for node "DetectionOutput". 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/middle/PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 196, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "DetectionOutput" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "DetectionOutput" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 

 
