Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Size of weights 32 does not match kernel shape: [32 1 3 3]

Maicus
Beginner

Hello there,

I'm trying to convert an MXNet model to OpenVINO.

The input shape of the images is 96x96, grayscale.

When I try to convert it with mo_mxnet.py, I get the following error:

[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:127 ]  --------------------
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:128 ]  Partial infer for conv_rd1_down_b
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:129 ]  Op: Const
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:140 ]  Inputs:
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:142 ]  Outputs:
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:34 ]  output[0]: shape = [32], value = [  3.9656727   3.1233442   7.8646393   4.28713     9.59885    -7.7935247
 -10.188561    3.451875 ...
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:127 ]  --------------------
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:128 ]  Partial infer for conv_rd1_down_w
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:129 ]  Op: Const
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:140 ]  Inputs:
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:142 ]  Outputs:
[ 2019-07-15 09:37:43,757 ] [ DEBUG ] [ infer:34 ]  output[0]: shape = [32  1  3  3], value = [[[[ 1.94546506e-01  1.07380378e+00  3.81941438e-01]
   [ 5.58629572e-01 -4.26461071e-01  1.97264...
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:127 ]  --------------------
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:128 ]  Partial infer for Input
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:129 ]  Op: Placeholder
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:140 ]  Inputs:
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:142 ]  Outputs:
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:34 ]  output[0]: shape = [ 1  1 96 96], value = <UNKNOWN>
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:127 ]  --------------------
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:128 ]  Partial infer for conv_rd1_down
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:129 ]  Op: Convolution
[ ERROR ]  Size of weights 32 does not match kernel shape: [32  1  3  3]
    Possible reason is wrong channel number in input shape

[ ERROR ]  Cannot infer shapes or values for node "conv_rd1_down".
[ ERROR ]  Cannot reshape weights to kernel shape
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x00000201F0E8C378>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ infer:194 ]  Node "conv_rd1_down" attributes: {'symbol_dict': {'op': 'Convolution', 'name': 'conv_rd1_down', 'attrs': {'cudnn_off': '0', 'cudnn_tune': 'None', 'dilate': '(1,1)', 'kernel': '(3,3)', 'layout': 'None', 'no_bias': '0', 'num_filter': '32', 'num_group': '1', 'pad': '(1,1)', 'stride': '(2,2)', 'workspace': '1024'}, 'inputs': [[0, 0, 0], [1, 0, 0], [2, 0, 0]]}, '_in_ports': {0, 1, 2}, '_out_ports': {0}, 'kind': 'op', 'name': 'conv_rd1_down', 'type': 'Convolution', 'op': 'Convolution', 'infer': <function Convolution.infer at 0x00000201F0E8C378>, 'precision': 'FP32', 'multiplication_transparent': True, 'multiplication_transparent_ports': [(0, 0), (1, 0)], 'in_ports_count': 3, 'out_ports_count': 1, 'bias_addable': True, 'bias_term': False, 'pad': array([[0, 0],
       [0, 0],
       [1, 1],
       [1, 1]], dtype=int64), 'pad_spatial_shape': array([[1, 1],
       [1, 1]], dtype=int64), 'dilation': array([1, 1, 1, 1], dtype=int64), 'output_spatial_shape': None, 'output_shape': None, 'stride': array([1, 1, 2, 2], dtype=int64), 'group': 1, 'output': 32, 'kernel_spatial': array([3, 3], dtype=int64), 'input_feature_channel': 1, 'output_feature_channel': 0, 'kernel_spatial_idx': None, 'reshape_kernel': True, 'spatial_dims': None, 'channel_dims': array([1], dtype=int64), 'batch_dims': array([0], dtype=int64), 'layout': 'NCHW', 'dim_attrs': ['channel_dims', 'axis', 'batch_dims', 'spatial_dims'], 'shape_attrs': ['stride', 'shape', 'window', 'output_shape', 'pad'], 'IE': [('layer', [('id', <function Op.substitute_ie_attrs.<locals>.<lambda> at 0x00000201F323D158>), 'name', 'precision', 'type'], [('data', ['auto_pad', 'group', ('strides', <function Convolution.backend_attrs.<locals>.<lambda> at 0x00000201F323D1E0>), ('dilations', <function Convolution.backend_attrs.<locals>.<lambda> at 0x00000201F323D268>), ('kernel', <function Convolution.backend_attrs.<locals>.<lambda> at 0x00000201F323D2F0>), ('pads_begin', <function Convolution.backend_attrs.<locals>.<lambda> at 0x00000201F323D378>), ('pads_end', <function Convolution.backend_attrs.<locals>.<lambda> at 0x00000201F323D400>), 'output', 'pad_value', 'mode', 'input'], []), '@ports', '@consts'])], 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "conv_rd1_down" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2019-07-15 09:37:43,772 ] [ DEBUG ] [ main:318 ]  Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 130, in partial_infer
    node.infer(node)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\ops\convolution.py", line 146, in infer
    raise Error("Cannot reshape weights to kernel shape")
mo.utils.error.Error: Cannot reshape weights to kernel shape

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\middle\PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 196, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "conv_rd1_down" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 312, in main
    return driver(argv)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 278, in driver
    ret_res = mo_mxnet.driver(argv, argv.input_model, model_name, argv.output_dir)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\pipeline\mx.py", line 87, in driver
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "conv_rd1_down" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

As far as I can see, the converter thinks the weight size is 32, but the convolution conv_rd1_down actually has two parameter inputs: the weights conv_rd1_down_w and the bias conv_rd1_down_b.

So the weights conv_rd1_down_w do have the right shape of [32 1 3 3]; how does the Model Optimizer end up with a weight size of 32?
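For what it's worth, the arithmetic can be checked by hand from the attributes in the log above (num_filter=32, num_group=1, kernel=(3,3), one input channel). This is just my own illustration, not Model Optimizer code, but it shows that an array of 32 elements (the length of the bias conv_rd1_down_b) can never be reshaped into [32, 1, 3, 3], which is exactly the error reported, so it looks as if the optimizer is picking up the bias where it expects the weights:

```python
# Convolution attributes taken from the debug log for conv_rd1_down.
num_filter = 32       # output channels
in_channels = 1       # grayscale input, NCHW shape [1, 1, 96, 96]
num_group = 1
kernel_h, kernel_w = 3, 3

# Element count the kernel tensor [32, 1, 3, 3] should have.
expected_weights = num_filter * (in_channels // num_group) * kernel_h * kernel_w
print(expected_weights)       # 288

# The bias conv_rd1_down_b has 32 elements; 32 != 288, so a
# 32-element array cannot be reshaped to the kernel shape.
bias_size = 32
print(bias_size == expected_weights)   # False
```
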

I attached the params file and the symbol file for completeness.

The command I used to call the optimizer:

python mo_mxnet.py --input_model hybridNet-0000.params --input_shape [1,1,96,96] --input Input --log_level=DEBUG   

Thanks in advance,

Marcus

Kenneth_C_Intel
Employee

I have asked our developers to take a look at this.

I will let you know what they say.

Thanks
