Kucukaslan__Umut
Beginner

Stopped shape/value propagation error when converting retrained TF Object Detection API model

Hello,

I retrained the "ssd_mobilenet_v2_coco_2018_03_29" model from the TensorFlow detection model zoo on my own dataset, following the procedure described in the TensorFlow Object Detection API documentation, and then exported it with the "object_detection/export_inference_graph.py" script. When I try to convert this model to IR, I get the error below (I get the same error when I override the input shape with the --input_shape parameter). I appreciate any help.
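For reference, the export step looked roughly like the following (the checkpoint prefix and paths are placeholders for my actual training directory):

python3 object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path /path/to/pipeline.config --trained_checkpoint_prefix /path/to/model.ckpt-200000 --output_directory /path/to/export-model-200000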

Thanks

 

python3 mo_tf.py --input_model /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000/frozen_inference_graph.pb --output_dir /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000 --tensorflow_use_custom_operations_config /home/logi/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000/pipeline.config

 

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000/frozen_inference_graph.pb
    - Path for generated IR:     /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000
    - IR output name:     frozen_inference_graph
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     /data/workspace/training_ncs_networks/train_model/models/model1/export-model-200000/pipeline.config
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     /home/logi/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version:     1.5.12.49d067a0
[ ERROR ]  Cannot infer shapes or values for node "MultipleGridAnchorGenerator/ToFloat_7".
[ ERROR ]  NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: MultipleGridAnchorGenerator/ToFloat_7 = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_MultipleGridAnchorGenerator/ToFloat_7/x_port_0_ie_placeholder_0_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
     [[Node: MultipleGridAnchorGenerator/ToFloat_7 = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_MultipleGridAnchorGenerator/ToFloat_7/x_port_0_ie_placeholder_0_0)]]

Caused by op 'MultipleGridAnchorGenerator/ToFloat_7', defined at:
  File "mo_tf.py", line 31, in <module>
    sys.exit(main(get_tf_cli_parser(), 'tf'))
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 153, in partial_infer
    node.infer(node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 60, in tf_native_tf_node_infer
    tf_subgraph_infer(tmp_node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 135, in tf_subgraph_infer
    all_constants, output_tensors = get_subgraph_output_tensors(node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 115, in get_subgraph_output_tensors
    tf.import_graph_def(graph_def, name='')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'Truncate' not in Op<name=Cast; signature=x:SrcT -> y:DstT; attr=SrcT:type; attr=DstT:type>; NodeDef: MultipleGridAnchorGenerator/ToFloat_7 = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_MultipleGridAnchorGenerator/ToFloat_7/x_port_0_ie_placeholder_0_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
     [[Node: MultipleGridAnchorGenerator/ToFloat_7 = Cast[DstT=DT_FLOAT, SrcT=DT_INT32, Truncate=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_MultipleGridAnchorGenerator/ToFloat_7/x_port_0_ie_placeholder_0_0)]]

[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f37369a4488>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "MultipleGridAnchorGenerator/ToFloat_7" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Shubha_R_Intel
Employee

Umut, the best way to debug this is to dump your model to a human-readable text file (the TensorFlow documentation explains how to do this). Once you have done so, search for the node "MultipleGridAnchorGenerator/ToFloat_7" and examine it and the nodes around it. Check whether MultipleGridAnchorGenerator has invalid shape values; -1, for instance, would be such an invalid value.
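A minimal sketch of that dump, assuming TensorFlow 1.x and the frozen graph file name from your command above:

import tensorflow as tf
from google.protobuf import text_format

# Load the frozen GraphDef from the exported .pb file
graph_def = tf.GraphDef()
with open('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Write it back out as a human-readable .pbtxt
with open('frozen_inference_graph.pbtxt', 'w') as f:
    f.write(text_format.MessageToString(graph_def))

You can then search the resulting .pbtxt for "MultipleGridAnchorGenerator" and inspect the shape attributes of that node and its neighbors.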

Thanks,

Shubha