Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

CenterNet optimization issue

Matsiuk__Markiian

Hello,

I'm trying to convert a modified version of CenterNet (with a MobileNetV2 backbone, from https://github.com/CaoWGG/Mobilenetv2-CenterNet).
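
For reference, the conversion command I'm running looks roughly like this (a sketch; paths are abbreviated the same way as in the log below):

python3 /opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo_onnx.py --input_model ~/centernet_mobilenet_backbone.onnx --output_dir ~/FP32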

It converts correctly, but only with the --disable_fusing flag; without it, I get the following error:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:    ~/centernet_mobilenet_backbone.onnx
    - Path for generated IR:     ~/FP32
    - IR output name:     centernet_mobilenet_backbone
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
ONNX specific parameters:
Model Optimizer version:     2019.3.0-375-g332562022
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  cannot reshape array of size 24 into shape (1,1,1)
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/main.py", line 302, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/main.py", line 278, in driver
    ret_res = mo_onnx.driver(argv, argv.input_model, model_name, argv.output_dir)
  File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/pipeline/onnx.py", line 125, in driver
    fuse_linear_ops(graph)
  File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/middle/passes/fusing/fuse_linear_ops.py", line 237, in fuse_linear_ops
    is_fused = _fuse_mul(graph, node, fuse_nodes)
  File "/opt/intel/openvino_2019.3.334/deployment_tools/model_optimizer/mo/middle/passes/fusing/fuse_linear_ops.py", line 107, in _fuse_mul
    value = np.reshape(value, shape)
  File "<__array_function__ internals>", line 6, in reshape
  File "/home/mmatsi/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 301, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/home/mmatsi/.local/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 61, in _wrapfunc
    return bound(*args, **kwds)
ValueError: cannot reshape array of size 24 into shape (1,1,1)

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
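
For what it's worth, the failure itself is a plain NumPy broadcast mismatch. A minimal sketch that reproduces the same ValueError (the 24-element array here is a hypothetical per-channel scale, not values taken from the model):

import numpy as np

value = np.ones(24, dtype=np.float32)  # e.g. one multiplier per output channel
np.reshape(value, (1, 1, 1))           # ValueError: cannot reshape array of size 24 into shape (1,1,1)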
 

Has anyone encountered the same problem? (The full debug log and the .onnx model are attached in the archive.)

Attachment: model_and_log.zip


Luis_at_Intel
Moderator

Hi Matsiuk, Markiian,

Looks like the Model Optimizer (MO) is encountering issues while fusing linear operations into Convolution layers. According to the documentation, many convolutional neural networks include BatchNormalization and ScaleShift layers that can be represented as a sequence of linear operations: additions and multiplications. For example, a ScaleShift layer can be represented as a Mul → Add sequence. These layers can be fused into the preceding Convolution or FullyConnected layers, except in the case when a Convolution comes after an Add operation (due to Convolution paddings).
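
To make the fusing idea concrete, here is a minimal NumPy sketch of folding a per-channel Mul → Add pair into the preceding convolution (shapes and variable names are illustrative, not MO internals):

import numpy as np

# Hypothetical Conv parameters: weights (out_ch, in_ch, kH, kW) and bias (out_ch,).
W = np.random.rand(24, 16, 3, 3).astype(np.float32)
b = np.random.rand(24).astype(np.float32)

# Per-output-channel scale and shift, e.g. from a folded BatchNormalization.
scale = np.random.rand(24).astype(np.float32)
shift = np.random.rand(24).astype(np.float32)

# conv(x, W, b) * scale + shift  ==  conv(x, W_fused, b_fused):
W_fused = W * scale.reshape(24, 1, 1, 1)  # scale each output-channel filter
b_fused = b * scale + shift

This identity only holds when the Mul/Add come after the Conv; when an Add precedes the Conv, the shift would also have to be applied to the zero-padded border, which is why that case is excluded from fusing.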

I see this network (the one you attached) hits this particular exception: a Conv comes right after an Add operation, which I assume is why disabling fusing lets the model convert successfully. BTW, I am using the latest OpenVINO release (2020.1).
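
If you want to confirm where that pattern occurs in your model, a quick way is to walk the graph with the onnx Python package (node names will depend on how the model was exported):

import onnx

model = onnx.load("centernet_mobilenet_backbone.onnx")
graph = model.graph

# Map each tensor name to the node that produces it.
producer = {out: node for node in graph.node for out in node.output}

# Report every Conv whose data input is produced by an Add.
for node in graph.node:
    if node.op_type == "Conv":
        src = producer.get(node.input[0])
        if src is not None and src.op_type == "Add":
            print(f"Conv '{node.name}' consumes Add '{src.name}'")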

# MO COMMAND
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_onnx.py --input_model centernet_mobilenet_backbone.onnx --input_shape [1,512,512,3] --disable_fusing
# OUTPUT
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/user/842729/centernet_mobilenet_backbone.onnx
	- Path for generated IR: 	/home/user/842729/.
	- IR output name: 	centernet_mobilenet_backbone
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,512,512,3]
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	False
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
ONNX specific parameters:
Model Optimizer version: 	2020.1.0-61-gd349c3ba4a

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/user/842729/centernet_mobilenet_backbone.xml
[ SUCCESS ] BIN file: /home/user/842729/centernet_mobilenet_backbone.bin
[ SUCCESS ] Total execution time: 34.99 seconds. 
[ SUCCESS ] Memory consumed: 114 MB. 
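
Once the IR is generated, you can sanity-check that it loads and runs. A rough sketch with the 2020.1 Python API (device and paths are placeholders; newer releases read the network via ie.read_network instead of the IENetwork constructor):

import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model="centernet_mobilenet_backbone.xml",
                weights="centernet_mobilenet_backbone.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.inputs))
shape = net.inputs[input_name].shape  # shape as recorded in the IR
result = exec_net.infer({input_name: np.random.rand(*shape).astype(np.float32)})
print({name: out.shape for name, out in result.items()})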

Regards,

Luis

 
