Hi,
An ONNX model that was successfully converted by Model Optimizer produces the following runtime error when the Inference Engine tries to read the IR XML file:
Error reading network: Error of validate layer: 484 with type: Deconvolution. Cannot parse parameter pads_end from IR for layer 484. Value -4,-4 cannot be casted to int.
With log_level = DEBUG, Model Optimizer's output contains (in part):
[ DEBUG ]  [ infer:150 ]  --------------------
[ DEBUG ]  [ infer:151 ]  Partial infer for 484
[ DEBUG ]  [ infer:152 ]  Op: Deconv2D
[ DEBUG ]  [ infer:163 ]  Inputs:
[ DEBUG ]  [ infer:40 ]  input[0]: shape = [ 1  3 40 80], value = <UNKNOWN>
[ DEBUG ]  [ infer:40 ]  input[1]: shape = [ 3  3 16 16], value = [[[[ ...
[ DEBUG ]  [ infer:165 ]  Outputs:
[ DEBUG ]  [ infer:40 ]  output[0]: shape = [  1   3 320 640], value = <UNKNOWN>
[ DEBUG ]  [ infer:150 ]  --------------------
...
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model: <withheld>.onnx
    - Path for generated IR: <withheld>
    - IR output name: <withheld>
    - Log level: DEBUG
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: Not specified, inherited from the model
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP32
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 1.5.12.49d067a0
And the generated IR XML file contains:
<layer id="106" name="484" precision="FP32" type="Deconvolution">
    <data dilations="1,1" group="1" kernel="16,16" output="3" pads_begin="4,4" pads_end="-4,-4" strides="8,8"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>40</dim>
            <dim>80</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>1</dim>
            <dim>3</dim>
            <dim>320</dim>
            <dim>640</dim>
        </port>
    </output>
    <blobs>
        <weights offset="<withheld>" size="9216"/>
    </blobs>
</layer>
What could have caused Model Optimizer to generate pads_end = -4,-4?
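For reference, the expected pads can be sanity-checked with the standard transposed-convolution output-size formula. The snippet below is my own sketch (the exact formula Model Optimizer uses internally may differ), using the shapes from the layer above:

```python
def deconv_out(size, stride, kernel, pad_begin, pad_end):
    # Transposed convolution (deconvolution) output size:
    # out = (in - 1) * stride - pad_begin - pad_end + kernel
    return (size - 1) * stride - pad_begin - pad_end + kernel

# Layer 484: input 40x80, stride 8, kernel 16, expected output 320x640.
# Symmetric pads of 4 reproduce the expected output shape:
print(deconv_out(40, 8, 16, 4, 4))    # 320
print(deconv_out(80, 8, 16, 4, 4))    # 640

# With pads_end = -4, the computed height no longer matches:
print(deconv_out(40, 8, 16, 4, -4))   # 328, not 320
```

So pads_end = 4,4 is what the arithmetic calls for, which makes the -4,-4 in the IR look like a sign error during pad inference.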
Dear Yee, this is a known bug which will be fixed in the next release. This forum post describes a related issue:
https://software.intel.com/en-us/forums/computer-vision/topic/804935
Thanks for using OpenVINO!
Shubha
Thanks Shubha, looking forward to the next release!
Hi Yee, do you have any update on this issue? Is it resolved?
Dearest Gleb and Yee,
Please download OpenVINO 2019 R1 and give it a whirl. Lots of issues have been fixed!
Thanks,
Shubha
Hi Gleb / Shubha,
The issue has been resolved with Model Optimizer 2019.1.0-341-gc9b66a2. The Deconvolution layer's XML now reads:
<layer id="120" name="484" precision="FP32" type="Deconvolution">
    <data dilations="1,1" group="1" kernel="16,16" output="3" pads_begin="4,4" pads_end="4,4" strides="8,8"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>3</dim>
            <dim>40</dim>
            <dim>80</dim>
        </port>
    </input>
    <output>
        <port id="2">
            <dim>1</dim>
            <dim>3</dim>
            <dim>320</dim>
            <dim>640</dim>
        </port>
    </output>
    <blobs>
        <weights offset="<withheld>" size="9216"/>
    </blobs>
</layer>
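In case it helps anyone hitting the same error: a quick way to catch this class of problem before handing the IR to the Inference Engine is to scan the generated XML for negative pad values. This is my own little helper, not part of the OpenVINO toolkit:

```python
import xml.etree.ElementTree as ET

def pads_ok(layer_xml):
    """Return True when pads_begin/pads_end contain no negative values."""
    data = ET.fromstring(layer_xml).find("data")
    return all(
        int(v) >= 0
        for attr in ("pads_begin", "pads_end")
        for v in data.get(attr).split(",")
    )

# Minimal stand-ins for the fixed layer above and the pre-R1 broken one:
fixed = '<layer id="120"><data pads_begin="4,4" pads_end="4,4"/></layer>'
broken = '<layer id="106"><data pads_begin="4,4" pads_end="-4,-4"/></layer>'
print(pads_ok(fixed), pads_ok(broken))   # True False
```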
Thanks for your help!
Rgds, Yee Keng
Dear Yee,
Fantastic. Very happy to hear that this issue is resolved in 2019 R1.
Thanks for using OpenVINO!
Shubha