Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Incorrect output shape after "ExpandDims"

Xiaojun_H_Intel
Employee

I was using the Model Optimizer to convert my TensorFlow model to IR files, but it failed with the following error:

[ ERROR ]  Shape [   1 1024   -1   64] is not fully defined for output 0 of "transform_net1/tconv1/Conv2D". Use --input_shape with positive integers to override model input shapes.
[ ERROR ]  Cannot infer shapes or values for node "transform_net1/tconv1/Conv2D".
[ ERROR ]  Not all output shapes were inferred or fully defined for node "transform_net1/tconv1/Conv2D".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x7f2ea00c5c80>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-01-21 13:37:21,044 ] [ DEBUG ] [ infer:215 ]  Node "transform_net1/tconv1/Conv2D" attributes: {'output_feature_channel': 3, 'input_feature_channel': 2, 'name': 'transform_net1/tconv1/Conv2D', 'pb': name: "transform_net1/tconv1/Conv2D"
op: "Conv2D"
input: "transform_net1/ExpandDims"
input: "transform_net1/tconv1/weights/read"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "data_format"
  value {
    s: "NHWC"
  }
}
attr {
  key: "dilations"
  value {
  }
}
attr {
  key: "padding"
  value {
    s: "VALID"
  }
}
attr {
  key: "strides"
  value {
    list {
      i: 1
      i: 1
      i: 1
      i: 1
    }
  }
}
attr {
  key: "use_cudnn_on_gpu"
  value {
    b: true
  }
}
, 'batch_dims': [0], 'dim_attrs': ['batch_dims', 'spatial_dims', 'axis', 'channel_dims'], 'kernel_shape': array([ 1,  3,  1, 64]), 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x7f2e9fe366a8>), 'name', 'precision', 'type'], [('data', ['auto_pad', 'group', ('strides', <function Convolution.backend_attrs.<locals>.<lambda> at 0x7f2e9fe36730>), ('dilations', <function Convolution.backend_attrs.<locals>.<lambda> at 0x7f2e9fe367b8>), ('kernel', <function Convolution.backend_attrs.<locals>.<lambda> at 0x7f2e9fe36840>), ('pads_begin', <function Convolution.backend_attrs.<locals>.<lambda> at 0x7f2e9fe368c8>), ('pads_end', <function Convolution.backend_attrs.<locals>.<lambda> at 0x7f2e9fe36950>), 'output'], []), '@ports', '@consts'])], 'type': 'Convolution', 'spatial_dims': array([1, 2]), 'pad': array([[0, 0],
       [0, 0],
       [0, 0],
       [0, 0]]), 'dilation': array([1, 1, 1, 1]), 'is_undead': False, 'pad_spatial_shape': array([[0, 0],
       [0, 0]]), 'is_const_producer': False, 'get_output_feature_dim': <function Conv2DFrontExtractor.extract.<locals>.<lambda> at 0x7f2e9fe349d8>, 'channel_dims': [3], 'layout': 'NHWC', 'group': 1, 'is_output_reachable': True, 'auto_pad': 'valid', 'stride': array([1, 1, 1, 1]), 'kernel_spatial': array([1, 3]), 'permute_attrs': <mo.ops.op.PermuteAttrs object at 0x7f2e9fed4240>, 'get_group': <function Conv2DFrontExtractor.extract.<locals>.<lambda> at 0x7f2e9fe34950>, 'shape_attrs': ['stride', 'pad', 'output_shape', 'window', 'shape'], 'output_spatial_shape': array([1024,   -1]), 'output_shape': array([   1, 1024,   -1,   64]), 'bias_term': False, 'infer': <function Convolution.infer at 0x7f2ea00c5c80>, 'bias_addable': True, 'get_weights_permute': Permutation(perm=array([3, 2, 0, 1]), inv=array([2, 3, 1, 0])), 'precision': 'FP32', 'kernel_spatial_idx': array([0, 1]), 'op': 'Conv2D', 'output': 64, 'is_partial_inferred': False, 'kind': 'op'}
[ ERROR ]  Stopped shape/value propagation at "transform_net1/tconv1/Conv2D" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2019-01-21 13:37:21,045 ] [ DEBUG ] [ main:331 ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 187, in partial_infer
    node_name)
mo.utils.error.Error: Not all output shapes were inferred or fully defined for node "transform_net1/tconv1/Conv2D".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 325, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/main.py", line 267, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 256, in tf2nx
    partial_infer(graph)
  File "/opt/intel/computer_vision_sdk_2018.5.445/deployment_tools/model_optimizer/mo/middle/passes/infer.py", line 217, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "transform_net1/tconv1/Conv2D" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

It seems something is wrong with the converted tensor shapes, so I printed out the shape of each tensor and found that the converted output shape of "ExpandDims" is [1 1024 1 3], while the shape in my original TensorFlow model is [1 1024 3 1].

Did I miss anything when converting the TF model? Thanks in advance for your help.
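For reference, the two shapes differ only in which axis was expanded. Assuming the model calls expand_dims on a [1, 1024, 3] tensor (this input shape is an assumption for illustration), the mismatch can be reproduced with a minimal NumPy sketch:

```python
import numpy as np

# Hypothetical input: batch of 1, 1024 points, 3 coordinates.
x = np.zeros((1, 1024, 3))

# Expanding the last axis yields the shape seen in the original TF model.
print(np.expand_dims(x, axis=-1).shape)  # (1, 1024, 3, 1)

# Expanding axis 2 yields the shape Model Optimizer inferred.
print(np.expand_dims(x, axis=2).shape)   # (1, 1024, 1, 3)
```

So the converter appears to treat the ExpandDims axis as 2 where the original graph used -1 (the last axis), which then feeds an unexpected layout into the Conv2D shape inference.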
