
OpenVino Model Optimizer - Stopped shape/value propagation error

Kucukaslan__Umut
Beginner

Hello

I am trying to convert my custom TensorFlow model, which I trained from a pretrained ResNet model, to Intermediate Representation so that I can run it on an NCS 2. However, I get the following error even though I override the input shape. Normally, the model accepts flexible input image sizes and batch sizes.
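For reference, the Model Optimizer call looked roughly like this (reconstructed from the parameter summary below; paths shortened, and --data_type FP32 is the default):

python3 mo_tf.py \
    --input_model retina_resnet50_standard.pb \
    --input_shape "[1,800,800,3]" \
    --output_dir ir \
    --model_name retina_resnet50_standard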

Thanks.

 

Common parameters:
    - Path to the Input Model:     /home/logi/Desktop/umut/models/our_models/converted_models/retina_resnet50_standard/retina_resnet50_standard.pb
    - Path for generated IR:     /home/logi/Desktop/umut/models/our_models/converted_models/retina_resnet50_standard/ir
    - IR output name:     retina_resnet50_standard
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     [1,800,800,3]
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
TensorFlow specific parameters:
    - Input model in text protobuf format:     False
    - Offload unsupported operations:     False
    - Path to model dump for TensorBoard:     None
    - List of shared libraries with TensorFlow custom layers implementation:     None
    - Update the configuration file with input/output node names:     None
    - Use configuration file used to generate the model with Object Detection API:     None
    - Operations to offload:     None
    - Patterns to offload:     None
    - Use the config file:     None
Model Optimizer version:     1.5.12.49d067a0
[ ERROR ]  Cannot infer shapes or values for node "filtered_detections/map/while/GatherNd_2".
[ ERROR ]  indices[299] = [1, 1] does not index into param shape [120087,1]
     [[node filtered_detections/map/while/GatherNd_2 (defined at /home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py:115)  = GatherNd[Tindices=DT_INT64, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_filtered_detections/map/while/TensorArrayReadV3_1_port_0_ie_placeholder_0_0, _arg_filtered_detections/map/while/concat_port_0_ie_placeholder_0_1)]]

Caused by op 'filtered_detections/map/while/GatherNd_2', defined at:
  File "mo_tf.py", line 31, in <module>
    sys.exit(main(get_tf_cli_parser(), 'tf'))
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 153, in partial_infer
    node.infer(node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 60, in tf_native_tf_node_infer
    tf_subgraph_infer(tmp_node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 135, in tf_subgraph_infer
    all_constants, output_tensors = get_subgraph_output_tensors(node)
  File "/home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py", line 115, in get_subgraph_output_tensors
    tf.import_graph_def(graph_def, name='')
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
    _ProcessNewOps(graph)
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps
    for new_op in graph._add_new_tf_operations(compute_devices=False):  # pylint: disable=protected-access
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3440, in <listcomp>
    for c_op in c_api_util.new_tf_operations(self)
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation
    ret = Operation(c_op, self)
  File "/home/logi/.local/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
    self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): indices[299] = [1, 1] does not index into param shape [120087,1]
     [[node filtered_detections/map/while/GatherNd_2 (defined at /home/logi/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/front/tf/partial_infer/tf.py:115)  = GatherNd[Tindices=DT_INT64, Tparams=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_filtered_detections/map/while/TensorArrayReadV3_1_port_0_ie_placeholder_0_0, _arg_filtered_detections/map/while/concat_port_0_ie_placeholder_0_1)]]

[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function tf_native_tf_node_infer at 0x7f2b44529158>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "filtered_detections/map/while/GatherNd_2" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Shubha_R_Intel
Employee

The best way to debug problems like this is to first dump the frozen TensorFlow model into a text file. There are several resources on the internet that explain how to freeze a TensorFlow model. For instance:

https://cv-tricks.com/how-to/freeze-tensorflow-models/
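The core of freezing, for TF 1.x, is to fold the trained variables into constants and serialize the resulting GraphDef. A minimal sketch (the session, output node names, and file name here are placeholders, not taken from your model):

import tensorflow as tf

def freeze_session(sess, output_node_names, out_path="frozen_model.pb"):
    # Replace each Variable node with a Const node holding its trained value
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), output_node_names)
    with tf.gfile.GFile(out_path, "wb") as f:
        f.write(frozen_graph_def.SerializeToString())

# Usage: freeze_session(sess, ["your_output_node_name"])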

Next, dump the frozen model into a text file so that you can see the actual values of the tensors in the graph. Sample code showing how to do this is below.

According to MO_FAQ.html, 

38. What does the message "Stopped shape/value propagation at node" mean?

Model Optimizer cannot infer shapes or values for the specified node. It can happen because of a bug in the custom shape infer function, because the node inputs have incorrect values/shapes, or because the input shapes are incorrect.
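In your log the failing op is a GatherNd, and the message is literal: the index pair [1, 1] asks for row 1, column 1 of a tensor of shape [120087, 1], and a second dimension of size 1 only has column 0. You can reproduce the same error in isolation (assuming TF 1.x on CPU):

import tensorflow as tf

params = tf.zeros([120087, 1])
# The last dimension of indices addresses [row, column]; column 1 does
# not exist when the second dimension of params has size 1.
indices = tf.constant([[1, 1]], dtype=tf.int64)
out = tf.gather_nd(params, indices)

with tf.Session() as sess:
    sess.run(out)  # raises the same InvalidArgumentError as your log

This usually means an upstream index tensor was computed from the wrong input shape, which is why the FAQ points at input shapes first.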

You may have to build a custom layer that tells Model Optimizer how to infer shapes so that the Inference Engine will be happy. When you build a custom layer, a critical function of your Op is to infer the outgoing shape of the tensor(s) based on the incoming shape, which is usually coded as a @staticmethod named infer. For examples, please study carefully the Ops code under deployment_tools\model_optimizer\mo\ops. My guess is that you did not code this function correctly, or maybe you didn't code it at all. A sketch of what such an Op looks like is given below.
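For illustration only, here is a minimal Op skeleton in the style of the files under mo/ops; the class name is made up, and this infer simply copies the input shape to the output, which is only correct for shape-preserving ops:

import numpy as np

from mo.ops.op import Op


class MyCustomOp(Op):
    op = 'MyCustomOp'

    def __init__(self, graph, attrs):
        mandatory_props = dict(
            type=__class__.op,
            op=__class__.op,
            infer=MyCustomOp.infer
        )
        super().__init__(graph, mandatory_props, attrs)

    @staticmethod
    def infer(node):
        # Shape propagation: a real Op derives the output shape from the
        # Op's semantics; copying the input shape is just the simplest case.
        node.out_node().shape = np.copy(node.in_node().shape)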

But you will understand the issue better if you dump your frozen model to text, which you can do with the following code:

import tensorflow as tf


def load_graph(frozen_graph_filename):
    # Load the protobuf file from disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name argument prefixes every op/node in the graph; since we
        # load everything into a fresh graph, it could also be left empty
        tf.import_graph_def(graph_def, name="prefix")
    return graph


if __name__ == '__main__':
    mygraph = load_graph("C:\\PATH\\frozen_inference_graph.pb")
    # write_graph with the default as_text=True emits a human-readable file
    tf.train.write_graph(mygraph, "./", "graph.txt")
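Once graph.txt exists, search it for the failing node from your log, filtered_detections/map/while/GatherNd_2, and inspect the nodes feeding it; the shapes and constant values there usually show where the bad index tensor comes from.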

 

Thanks for using OpenVINO. I hope this helps you solve your issue.

Shubha
