Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Cannot infer shapes or values for node "24" while converting ONNX to OpenVINO format

Kulczykowski__Michał

Hi, I'm trying to convert my custom pretrained PyTorch model to OpenVINO so I can run it on a Movidius VPU. I managed to convert it to ONNX format, but I'm getting an error from the Model Optimizer during conversion.

[ 2020-01-10 18:20:59,336 ] [ DEBUG ] [ class_registration:267 ]  Run replacer <class 'extensions.middle.PartialInfer.PartialInfer'>
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:130 ]  Partial infer for 69
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,337 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = -1
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:130 ]  Partial infer for 66
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = -1
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:130 ]  Partial infer for 54/Dims
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [1], value = [0]
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:130 ]  Partial infer for 53/Dims
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,338 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [1], value = [0]
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:130 ]  Partial infer for 47/Dims
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [1], value = [0]
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:130 ]  Partial infer for 46/Dims
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,339 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [1], value = [0]
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:130 ]  Partial infer for 33
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:131 ]  Op: Constant
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = 3
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:130 ]  Partial infer for 30
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:131 ]  Op: Constant
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = 2
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:130 ]  Partial infer for 19
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:131 ]  Op: Constant
[ 2020-01-10 18:20:59,340 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = 3
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:130 ]  Partial infer for 16
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:131 ]  Op: Constant
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [], value = 2
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv3.bias
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,341 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [3], value = [0.8196522 0.5773783 0.8242632]
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv3.weight
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,342 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,344 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [16  3  3  3], value = [[[[-4.95539512e-04 -5.94067089e-02 -6.70151860e-02]
   [ 2.04542298e-02 -7.97495842e-02 -4.66531...
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv2.bias
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [16], value = [ 0.668596    0.2760919   0.785108    0.6279784   0.6160135   0.8352151
  0.6637172   0.25968534 ...
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv2.weight
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,345 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,350 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [32 16  5  5], value = [[[[-5.19725308e-02 -6.00619242e-02 -9.64517798e-03 -5.92219420e-02
    -5.44188246e-02]
   [-4.8...
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv1.bias
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [32], value = [ 0.22308405  0.7238986  -0.99280775  0.565104    0.55582386  0.62104464
  0.8327351   0.38734016...
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:130 ]  Partial infer for deconv1.weight
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,351 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,353 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [64 32  3  3], value = [[[[ 1.20236389e-02  2.57867873e-02  5.69098331e-02]
   [ 2.37376057e-02 -1.43139036e-02 -7.43147...
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv3.bias
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [64], value = [-0.36245     0.6176184   0.35960373 -1.1069959   0.43610823 -0.24731655
  0.43396956 -0.6956546 ...
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv3.weight
[ 2020-01-10 18:20:59,354 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,355 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,355 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [64 32  3  3], value = [[[[ 4.47288975e-02  2.14114636e-02 -5.18078879e-02]
   [ 8.59444886e-02  5.76331429e-02  5.40303...
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv2.bias
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [32], value = [-0.13925731  0.33240545 -0.76689625  0.5740756   1.0772431   0.776163
  0.5945724   0.6377642   ...
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv2.weight
[ 2020-01-10 18:20:59,357 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,358 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,358 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [32 16  5  5], value = [[[[-2.12613330e-03 -9.51265022e-02  1.90961398e-02 -3.16635445e-02
    -2.08136253e-02]
   [ 6.9...
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv1.bias
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [16], value = [ 0.98403233 -1.0333993   0.11789519 -0.46924612  1.0859997   1.3228457
  0.11064199 -0.16580322 ...
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:130 ]  Partial infer for conv1.weight
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:131 ]  Op: Const
[ 2020-01-10 18:20:59,363 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,364 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [16  3  3  3], value = [[[[ 1.29497707e-01 -1.11932114e-01 -1.40267849e-01]
   [ 2.73936130e-02  1.09693989e-01 -1.61889...
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:130 ]  Partial infer for 0
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:131 ]  Op: Parameter
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,366 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [2 2 2 2], value = <UNKNOWN>
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:130 ]  Partial infer for 13
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:131 ]  Op: Cast
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:32 ]  input[0]: shape = [2 2 2 2], value = <UNKNOWN>
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [2 2 2 2], value = <UNKNOWN>
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:130 ]  Partial infer for 14
[ 2020-01-10 18:20:59,367 ] [ DEBUG ] [ infer:131 ]  Op: Conv
[ 2020-01-10 18:20:59,368 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,368 ] [ DEBUG ] [ infer:32 ]  input[0]: shape = [2 2 2 2], value = <UNKNOWN>
[ 2020-01-10 18:20:59,370 ] [ DEBUG ] [ infer:32 ]  input[1]: shape = [16  3  3  3], value = [[[[ 1.29497707e-01 -1.11932114e-01 -1.40267849e-01]
   [ 2.73936130e-02  1.09693989e-01 -1.61889...
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:32 ]  input[2]: shape = [16], value = [ 0.98403233 -1.0333993   0.11789519 -0.46924612  1.0859997   1.3228457
  0.11064199 -0.16580322 ...
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [ 2 16  0  0], value = <UNKNOWN>
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:130 ]  Partial infer for 15
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:131 ]  Op: ReLU
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:142 ]  Inputs:
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:32 ]  input[0]: shape = [ 2 16  0  0], value = <UNKNOWN>
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:32 ]  output[0]: shape = [ 2 16  0  0], value = <UNKNOWN>
[ 2020-01-10 18:20:59,371 ] [ DEBUG ] [ infer:129 ]  --------------------
[ 2020-01-10 18:20:59,372 ] [ DEBUG ] [ infer:130 ]  Partial infer for 24
[ 2020-01-10 18:20:59,372 ] [ DEBUG ] [ infer:131 ]  Op: MaxPool
[ ERROR ]  Cannot infer shapes or values for node "24".
[ ERROR ]  0
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Pooling.infer at 0x000002590D143D90>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2020-01-10 18:20:59,372 ] [ DEBUG ] [ infer:196 ]  Node "24" attributes: {'pb': input: "15"
output: "24"
output: "25"
op_type: "MaxPool"
attribute {
  name: "kernel_shape"
  ints: 1
  ints: 1
  type: INTS
}
attribute {
  name: "strides"
  ints: 1
  ints: 1
  type: INTS
}
, 'kind': 'op', '_in_ports': {0: {'control_flow': False}}, '_out_ports': {0: {}, 1: {'control_flow': False}}, 'name': '24', 'op': 'MaxPool', 'precision': 'FP32', 'type': 'Pooling', 'infer': <function Pooling.infer at 0x000002590D143D90>, 'in_ports_count': 1, 'out_ports_count': 1, 'auto_pad': None, 'window': array([1, 1, 1, 1], dtype=int64), 'stride': array([1, 1, 1, 1], dtype=int64), 'pad': array([[0, 0],
       [0, 0],
       [0, 0],
       [0, 0]], dtype=int64), 'pad_spatial_shape': array([[0, 0],
       [0, 0]], dtype=int64), 'pool_method': 'max', 'exclude_pad': 'true', 'global_pool': 0, 'output_spatial_shape': array([0, 0], dtype=int64), 'rounding_type': 'floor', 'spatial_dims': array([2, 3]), 'channel_dims': array([1], dtype=int64), 'batch_dims': array([0], dtype=int64), 'layout': 'NCHW', 'pooling_convention': 'valid', 'dim_attrs': ['spatial_dims', 'batch_dims', 'channel_dims', 'axis'], 'shape_attrs': ['shape', 'stride', 'output_shape', 'window', 'pad'], 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x000002590D48C2F0>), 'name', 'precision', 'type'], [('data', [('strides', <function Pooling.backend_attrs.<locals>.<lambda> at 0x000002590D48C378>), ('kernel', <function Pooling.backend_attrs.<locals>.<lambda> at 0x000002590D48C400>), ('pads_begin', <function Pooling.backend_attrs.<locals>.<lambda> at 0x000002590D48C488>), ('pads_end', <function Pooling.backend_attrs.<locals>.<lambda> at 0x000002590D48C510>), ('pool-method', 'pool_method'), ('exclude-pad', 'exclude_pad'), 'rounding_type', 'auto_pad'], []), '@ports', '@consts'])], 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False}
[ ERROR ]  0
Stopped shape/value propagation at "24" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "24" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
[ 2020-01-10 18:20:59,374 ] [ DEBUG ] [ main:304 ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 132, in partial_infer
    node.infer(node)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\ops\pooling.py", line 133, in infer
    node.out_node().shape = output_shape
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\graph\graph.py", line 212, in out_node
    return self.out_nodes(control_flow=control_flow)[key]
KeyError: 0

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 273, in apply_replacements
    for_graph_and_each_sub_graph_recursively(graph, replacer.find_and_replace_pattern)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\middle\pattern_match.py", line 58, in for_graph_and_each_sub_graph_recursively
    func(graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\middle\PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 198, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: 0
Stopped shape/value propagation at "24" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\main.py", line 298, in main
    return driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\main.py", line 274, in driver
    ret_res = mo_onnx.driver(argv, argv.input_model, model_name, argv.output_dir)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\pipeline\onnx.py", line 100, in driver
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 293, in apply_replacements
    )) from err
mo.utils.error.Error: 0
Stopped shape/value propagation at "24" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): 0
Stopped shape/value propagation at "24" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\checkpoint-last.onnx
        - Path for generated IR:        C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\.
        - IR output name:       checkpoint-last
        - Log level:    DEBUG
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         [2,2,2,2]
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP16
        - Enable fusing:        False
        - Enable grouped convolutions fusing:   False
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
ONNX specific parameters:
Model Optimizer version:        2019.3.0-408-gac8584cb7

What exactly is the problem?

Sahira_Intel
Moderator

Hi Michal,

Can you please attach your model so I can look into it further? If you'd like to share your model privately, I can send you a PM.

Best Regards,

Sahira 

Kulczykowski__Michał

A PM would be better.

I also checked another model that I trained, which converts successfully to ONNX but gives wrong results.

Sahira_Intel
Moderator

Hi Michal,

I have sent you a PM, please attach your model and I will take a look at it.

Best Regards,

Sahira 

Sahira_Intel
Moderator

Hi Michal,

Can you please provide the command you used to convert your model? 

Best,
Sahira

Kulczykowski__Michał
This is my PyTorch code for exporting the model from PyTorch to ONNX.

import logging

import onnx
import torch

LOGGER = logging.getLogger(__name__)  # module-level logger (assumed; not shown in the original snippet)


def transform_to_onnx(checkpoint_path):
    LOGGER.info(f"Transforming to onnx: {checkpoint_path}")
    checkpoint = torch.load(checkpoint_path)

    model = checkpoint['model'].cpu()
    example = torch.rand(10, 3, 540, 960)
    output_script = checkpoint_path + ".onnx"

    torch.onnx.export(model, example, output_script, verbose=True,
                      operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
    # Model check after conversion
    model_onnx = onnx.load(str(output_script))
    try:
        onnx.checker.check_model(model_onnx)
        LOGGER.info('ONNX check passed successfully.')
    except onnx.onnx_cpp2py_export.checker.ValidationError as exc:
        LOGGER.error('ONNX check failed with error: ' + str(exc))

 

Sahira_Intel
Moderator

Hi Michal,

Thank you for providing your script. Can you also provide the command you used to convert your ONNX model to IR with the Model Optimizer? Sometimes missing or incorrect parameters in the command cause the conversion to fail.

Best Regards,

Sahira 

Kulczykowski__Michał

I tried a few commands; this is one of them:

python mo_onnx.py --input_model checkpoint-last.onnx --keep_shape_ops --data_type FP16 --log_level DEBUG --disable_fusing --disable_gfusing --input_shape [1,3,540,960]

 

Sahira_Intel
Moderator

Hi Michal,

I apologize for the delay in my response.

I tried running your model with the command you provided, and then again with different parameters, but I am still running into the same errors. I am filing a bug report and will get back to you if it is fixed.

Best Regards,

Sahira  

Kulczykowski__Michał

Sahira R. (Intel) wrote:

Hi Michal,

I apologize for the delay in my response.

I tried running your model with the command you provided, and then again using different parameters but am still running into the same errors. I am filing a bug report and will get back to you if it is fixed.

Best Regards,

Sahira  

 

Thank you

Sahira_Intel
Moderator

Hi Michal,

It looks like your model uses the MaxPool operation with 2 outputs, which is not supported. The model also has an ATen operation, which is unsupported.
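If it helps to confirm that diagnosis independently, below is a minimal sketch (assuming the onnx Python package; the file name checkpoint-last.onnx is taken from the Model Optimizer log above) that lists MaxPool nodes carrying a second output and any ATen nodes:

import onnx

model = onnx.load("checkpoint-last.onnx")  # substitute your own exported model

for node in model.graph.node:
    # A MaxPool node with a second (indices) output is what the Model Optimizer rejects here.
    if node.op_type == "MaxPool" and len(node.output) > 1:
        print("MaxPool with", len(node.output), "outputs, first output:", node.output[0])
    # ATen nodes are PyTorch fallback ops with no OpenVINO mapping.
    if node.op_type == "ATen":
        print("ATen op producing:", node.output[0])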

Best Regards,

Sahira 

Kulczykowski__Michał

Sahira R. (Intel) wrote:

Hi Michal,

It looks like your model uses the MaxPool operation with 2 outputs which is not supported. The model also has an Aten operation that is unsupported. 

Best Regards,

Sahira 

According to the ONNX specification (https://github.com/onnx/onnx/blob/master/docs/Operators.md#MaxPool), MaxPool has an optional second output.

As for the ATen operator, that is a PyTorch bug that prevents the exporter from creating a MaxUnpool operator in ONNX format.
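For reference, the two-output MaxPool typically comes from return_indices=True on a PyTorch pooling layer, with the indices feeding a later MaxUnpool2d that the exporter can only emit as an ATen node under ONNX_ATEN_FALLBACK. Below is a hedged sketch of the kind of module that reproduces the pattern (the class name, shapes, and file name are illustrative, and exact export behaviour depends on the PyTorch version):

import torch
import torch.nn as nn

class PoolUnpoolBlock(nn.Module):
    """Hypothetical block showing the pattern behind the two unsupported ops."""
    def __init__(self):
        super().__init__()
        # return_indices=True makes the exported MaxPool node carry two outputs.
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
        # MaxUnpool2d has no direct ONNX operator, so ONNX_ATEN_FALLBACK emits an ATen node for it.
        self.unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

    def forward(self, x):
        y, indices = self.pool(x)
        return self.unpool(y, indices)

# Exporting this toy block reproduces the MaxPool-with-two-outputs plus ATen combination.
torch.onnx.export(PoolUnpoolBlock(), torch.rand(1, 3, 8, 8), "pool_unpool.onnx",
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)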

Broderick__Colin1

Any update on this? I'm having exactly the same problem with a different model; it breaks at the MaxPool layer.

[ 2020-03-11 11:36:55,177 ] [ DEBUG ] [ infer:128 ]  --------------------
[ 2020-03-11 11:36:55,177 ] [ DEBUG ] [ infer:129 ]  Partial infer for 135
[ 2020-03-11 11:36:55,178 ] [ DEBUG ] [ infer:130 ]  Op: ReLU
[ 2020-03-11 11:36:55,179 ] [ DEBUG ] [ infer:131 ]  Inputs:
[ 2020-03-11 11:36:55,181 ] [ DEBUG ] [ infer:31 ]  input[0]: shape = [  1  64 100], value = <UNKNOWN>
[ 2020-03-11 11:36:55,182 ] [ DEBUG ] [ infer:144 ]  Outputs:
[ 2020-03-11 11:36:55,182 ] [ DEBUG ] [ infer:31 ]  output[0]: shape = [  1  64 100], value = <UNKNOWN>
[ 2020-03-11 11:36:55,183 ] [ DEBUG ] [ infer:128 ]  --------------------
[ 2020-03-11 11:36:55,183 ] [ DEBUG ] [ infer:129 ]  Partial infer for 136
[ 2020-03-11 11:36:55,184 ] [ DEBUG ] [ infer:130 ]  Op: MaxPool
[ 2020-03-11 11:36:55,185 ] [ DEBUG ] [ infer:131 ]  Inputs:
[ 2020-03-11 11:36:55,186 ] [ DEBUG ] [ infer:31 ]  input[0]: shape = [  1  64 100], value = <UNKNOWN>
[ ERROR ]  Cannot infer shapes or values for node "136".
[ ERROR ]  shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (1,)
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Pooling.infer at 0x000002957600B558>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

 

Broderick__Colin1

SOLUTION, maybe.

Even though the optimizer supports Conv1d, BatchNorm1d, etc., it does not support MaxPool1d. To get around this you can simply unsqueeze (or your framework's equivalent) before the maxpool layer and squeeze down again after it. So, with torch.nn.functional imported as F, it would be something like

    x = x.unsqueeze(3)                                   # [N, C, L] -> [N, C, L, 1]

    x = F.max_pool2d(x, (kernel_size, 1), (stride, 1))   # same kernel/stride as the 1D pool, applied along L only

    x = x.squeeze(3)                                     # back to [N, C, L']

This makes no difference to the output since there are no parameters associated with the layer. If you have stored previously trained weights, you can even continue to use them.
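Pulling the workaround together, here is a minimal sketch of a toy 1D model with MaxPool1d replaced by the unsqueeze / MaxPool2d / squeeze trick (the layer sizes, kernel and stride values, and output file name are illustrative only):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PooledConv1d(nn.Module):
    """Toy 1D model using the unsqueeze / MaxPool2d / squeeze replacement for MaxPool1d."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(64, 64, kernel_size=3, padding=1)

    def forward(self, x):                     # x: [N, 64, L]
        x = F.relu(self.conv(x))
        x = x.unsqueeze(3)                    # [N, 64, L, 1]
        x = F.max_pool2d(x, kernel_size=(2, 1), stride=(2, 1))  # pools only the length dimension
        return x.squeeze(3)                   # [N, 64, L // 2]

# Export to ONNX; the resulting graph contains a plain single-output MaxPool node.
torch.onnx.export(PooledConv1d(), torch.rand(1, 64, 100), "pooled_conv1d.onnx")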
