Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot Infer shapes or values for node "flatten_1/stack"

K__Mike
Beginner

Hello, 

I am unable to optimize a TensorFlow model with the mo.py script; it fails to infer the shape of one node with the error below.

Command - python3 mo.py --input_model ~/Desktop/mf_project/model.pb --input_shape '(1,96,96,3)' --log_level=DEBUG

(If I do not specify --input_shape, the conversion fails earlier because the model's embedded shape (-1,96,96,3) is not fully defined: [Shape [-1 96 96  3] is not fully defined for output 0 of "conv2d_1_input". Use --input_shape with positive integers to override model input shapes.])
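
For reference, the node names in the graph (conv2d_1_input, flatten_1) come from a Keras-built model that was exported to a .pb. A typical TF 1.x freezing flow for that is sketched below (rough sketch only; file names and the output-node lookup are placeholders, not my exact export script):

import tensorflow as tf
from keras import backend as K
from keras.models import load_model

# Rough sketch: freeze a trained Keras model into a constant-folded .pb for mo.py
K.set_learning_phase(0)                      # inference mode, drop training-only ops
model = load_model('model.h5')               # placeholder path to the trained model
sess = K.get_session()
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])  # final output node(s) of the model
tf.train.write_graph(frozen, '.', 'model.pb', as_text=False)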

Output-

Partial infer for flatten_1/stack
[ 2019-03-28 23:29:45,638 ] [ DEBUG ] [ infer:72 ]  Op: Pack
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:112 ]  Cannot infer shapes or values for node "flatten_1/stack".
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:113 ]  all input arrays must have the same shape
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:114 ]  
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:115 ]  It can happen due to bug in custom shape infer function <function tf_pack_infer at 0x7f7fff438ea0>.
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:116 ]  Or because the node inputs have incorrect values/shapes.
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ infer:117 ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-03-28 23:29:45,638 ] [ DEBUG ] [ infer:121 ]  Node "flatten_1/stack" attributes: {'precision': 'FP32', 'is_undead': False, 'is_partial_inferred': False, 'infer': <function tf_pack_infer at 0x7f7fff438ea0>, 'pb': name: "flatten_1/stack"
op: "Pack"
input: "flatten_1/stack/0"
input: "flatten_1/Prod"
attr {
  key: "N"
  value {
    i: 2
  }
}
attr {
  key: "T"
  value {
    type: DT_INT32
  }
}
attr {
  key: "axis"
  value {
    i: 0
  }
}
, 'IE': [('layer', [('id', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f158>), 'name', 'precision', 'type'], [('data', ['epsilon', 'min', 'max', ('axis', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f1e0>), 'tiles', ('dim', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f268>), 'num_axes', ('pool-method', 'pool_method'), 'group', 'rounding_type', ('exclude-pad', 'exclude_pad'), 'operation', 'out-size', 'power', 'shift', 'alpha', 'beta', 'coords', 'classes', 'num', ('local-size', 'local_size'), 'region', 'knorm', 'num_classes', 'keep_top_k', 'variance_encoded_in_target', 'code_type', 'share_location', 'nms_threshold', 'confidence_threshold', 'background_label_id', 'top_k', 'eta', 'visualize', 'visualize_threshold', 'save_file', 'output_directory', 'output_name_prefix', 'output_format', 'label_map_file', 'name_size_file', 'num_test_image', 'prob', 'resize_mode', 'height', 'width', 'height_scale', 'width_scale', 'pad_mode', 'pad_value', 'interp_mode', 'img_size', 'img_h', 'img_w', 'step', 'step_h', 'step_w', ('offset', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f2f0>), 'variance', 'flip', 'clip', ('min_size', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f378>), ('max_size', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f400>), ('aspect_ratio', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f488>), 'decrease_label_id', 'normalized', ('type', 'norm_type'), 'eps', 'across_spatial', 'value', 'mean', 'std', 'sparse', 'variance_norm', 'channel_shared', 'negative_slope', 'engine', 'num_filter', ('type', 'sample_type'), ('order', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f510>), 'pooled_h', 'pooled_w', 'spatial_scale', 'cls_threshold', 'max_num_proposals', 'iou_threshold', 'min_bbox_size', 'feat_stride', 'pre_nms_topn', 'post_nms_topn', ('type', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f598>), ('value', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f620>), ('output', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f6a8>), ('input_nodes_names', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f730>), ('output_tensors_names', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f7b8>), ('real_input_dims', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f840>), ('protobuf', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12f8c8>), {'custom_attributes': None}, ('stride-x', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12f950>), ('stride-y', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12f9d8>), ('kernel-x', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12fa60>), ('kernel-y', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12fae8>), ('kernel-x', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12fb70>), ('kernel-y', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff12fbf8>), ('dilation-x', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12fc80>), ('dilation-y', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12fd08>), ('pad-x', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12fe18>), ('pad-y', <function spatial_getter.<locals>.<lambda> at 0x7f7fff12ff28>), ('scale', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff130048>), ('stride', <function update_ie_fields.<locals>.<lambda> at 0x7f7fff1300d0>), 'crop_width', 'crop_height', 'write_augmented', 'max_multiplier', 'augment_during_test', 'recompute_mean', 'write_mean', 'mean_per_pixel', 'mode', 'bottomwidth', 
'bottomheight', 'chromatic_eigvec', 'kernel_size', 'max_displacement', 'stride_1', 'stride_2', 'single_direction', 'do_abs', 'correlation_type', 'antialias', 'resample_type', 'factor', 'coeff'], []), '@ports', '@consts'])], 'is_output_reachable': True, 'dim_attrs': ['channel_dims', 'spatial_dims', 'batch_dims', 'axis'], 'op': 'Pack', 'name': 'flatten_1/stack', 'N': 2, 'is_const_producer': False, 'axis': 0, 'shape_attrs': ['shape', 'stride', 'output_shape', 'window', 'pad'], 'kind': 'op'}
[ 2019-03-28 23:29:45,638 ] [ ERROR ] [ main:227 ]  Stopped shape/value propagation at "flatten_1/stack" node. For more information please refer to Model Optimizer FAQ, question #38.
[ 2019-03-28 23:29:45,639 ] [ DEBUG ] [ main:228 ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 73, in partial_infer
    node.infer(node)
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/front/common/partial_infer/concat.py", line 74, in tf_pack_infer
    node.out_node().value = np.stack(values, node.axis)
  File "/home/facialstats/Desktop/mf_project/recEnv/lib/python3.5/site-packages/numpy/core/shape_base.py", line 416, in stack
    raise ValueError('all input arrays must have the same shape')
ValueError: all input arrays must have the same shape

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/main.py", line 222, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/main.py", line 190, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 141, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.1.249/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 124, in partial_infer
    'For more information please refer to Model Optimizer FAQ, question #38.') from err
mo.utils.error.Error: Stopped shape/value propagation 
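
From the traceback, the failure is raised by np.stack inside tf_pack_infer, which refuses inputs of different shapes. Just to illustrate what that numpy error means (the values below are made up, not the actual inputs of flatten_1/stack):

import numpy as np

# Made-up values, only to show the numpy behaviour seen in the traceback:
# np.stack requires every input array to have the same shape.
a = np.array(1)            # 0-d value
b = np.array([96, 96, 3])  # 1-d value
np.stack([a, b], axis=0)   # ValueError: all input arrays must have the same shape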

 

Please suggest.

l__sw
Beginner

Hello. Have you solved this problem yet? I have the same issue. Any reply would be appreciated!

Max_L_Intel
Moderator

Hello. 

Depending on your model, you might need to use TensorFlow-specific conversion parameters such as --tensorflow_use_custom_operations_config. See more details here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#tensorflow_specific_conversion_params

If your model is based on the TensorFlow Object Detection API, please check this article: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

Also, please take a look at a similar "stopped shape/value propagation" issue that was reported earlier and the way it was resolved: https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/815044

Hope this helps.
Thanks.
