Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer-vision-related on Intel® platforms.

Does 2019 R2 not support ReduceL2?

zixiang__gao
Beginner
1,475 Views

Hi,

I want to convert an .onnx model to an OpenVINO model using the newest version, 2019 R2, but I get some errors about ReduceL2.

I do see this layer mentioned in the 2019 R2 Release Notes. Any suggestions?

Thanks!

-----------------------------------------------

Details:

Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     /home/zixiang/MyModule.onnx
    - Path for generated IR:     /home/zixiang/.
    - IR output name:     MyModule
    - Log level:     ERROR
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
ONNX specific parameters:
Model Optimizer version:     2019.2.0-436-gf5827d4
[ ERROR ]  Cannot infer shapes or values for node "36".
[ ERROR ]  There is no registered "infer" function for node "36" with op = "ReduceL2". Please implement this function in the extensions.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #37.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "36" node.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

1 Solution
Shubha_R_Intel
Employee
1,475 Views

Dear zixiang, gao,

Yes, many times --input_shape can make a difference. In fact, the error also tells you:

[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

But there are other potential causes for the error. For instance, kindly take a look at the op files under C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\ops. Each op file in this directory is expected to have an "infer" function which tells Model Optimizer how to infer an output shape given an input shape.

So far I'm not seeing something like "Unsupported Operation ReduceL2", i.e. Number 24 in the MO FAQ. But I don't see "ReduceL2" as a supported layer in the MO Supported Layers doc either.

Hope it helps,

Thanks,

Shubha

 

View solution in original post

7 Replies
Shubha_R_Intel
Employee
1,475 Views

Dear zixiang, gao,

May I know more about your ONNX model? Is it a private one or is it public? What was your Model Optimizer command? Did you pass in --input_shape?

Looking forward to your reply,

Thanks,

Shubha

 

zixiang__gao
Beginner
1,475 Views

Shubha R. (Intel) wrote:

Dear zixiang, gao,

May I know more about your ONNX model? Is it a private one or is it public? What was your Model Optimizer command? Did you pass in --input_shape?

Looking forward to your reply,

Thanks,

Shubha

 

Hi, Shubha,

Thanks for your reply.

It's a private one, a PyTorch model converted to ONNX. The command I used was: python3 mo.py --input_model Name.onnx

I didn't pass in --input_shape. Is it necessary?

Best regards,

Zixiang

Shubha_R_Intel
Employee
1,476 Views

Dear zixiang, gao,

Yes, many times --input_shape can make a difference. In fact, the error also tells you:

[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
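
For example (the shape below is only a placeholder; use the actual NCHW input size of your own model), passing the shape explicitly would look like:

    python3 mo.py --input_model MyModule.onnx --input_shape [1,3,224,224]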

But there are other potential causes for the error. For instance, kindly take a look at the op files under C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\ops. Each op file in this directory is expected to have an "infer" function which tells Model Optimizer how to infer an output shape given an input shape.
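
As a standalone illustration (this is not the actual Model Optimizer extension API, just a sketch of the computation such an "infer" function has to perform for a reduce-type op like ReduceL2):

    def reduce_infer(input_shape, axes, keepdims=True):
        # A reduction keeps the rank when keepdims is set (reduced dims become 1)
        # and drops the reduced dimensions otherwise.
        axes = [a % len(input_shape) for a in axes]  # normalize negative axes
        if keepdims:
            return [1 if i in axes else d for i, d in enumerate(input_shape)]
        return [d for i, d in enumerate(input_shape) if i not in axes]

    # e.g. reduce_infer([1, 128, 56, 56], axes=[1]) -> [1, 1, 56, 56]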

So far I'm not seeing something like "Unsupported Operation ReduceL2", i.e. Number 24 in the MO FAQ. But I don't see "ReduceL2" as a supported layer in the MO Supported Layers doc either.

Hope it helps,

Thanks,

Shubha

 

zixiang__gao
Beginner
1,475 Views

Shubha R. (Intel) wrote:

Dear zixiang, gao,

Yes, many times --input_shape can make a difference. In fact, the error also tells you:

[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

But there are other potential causes for the error. For instance, kindly take a look at the op files under C:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\ops. Each op file in this directory is expected to have an "infer" function which tells Model Optimizer how to infer an output shape given an input shape.

So far I'm not seeing something like "Unsupported Operation ReduceL2", i.e. Number 24 in the MO FAQ. But I don't see "ReduceL2" as a supported layer in the MO Supported Layers doc either.

Hope it helps,

Thanks,

Shubha

 

Thank you very much! I will try to use --input_shape.

Thanks,

Zixiang

ruivo__cedric
Beginner
1,475 Views
Hello,

I'm facing the same error. Did --input_shape work for you? Did you find another way?

I am trying to optimize the model from this repo: https://github.com/naver/r2d2. I convert the net to ONNX and try to optimize it with mo.py, and get the following trace:

[ ERROR ] Cannot infer shapes or values for node "92".
[ ERROR ] There is no registered "infer" function for node "92" with op = "ReduceL2". Please implement this function in the extensions.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #37.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-12-24 11:19:57,552 ] [ DEBUG ] [ infer:196 ] Node "92" attributes: {'pb': input: "85"
output: "92"
op_type: "ReduceL2"
attribute {
  name: "axes"
  ints: 1
  type: INTS
}
attribute {
  name: "keepdims"
  i: 1
  type: INT
}
, 'kind': 'op', '_in_ports': {0: {'control_flow': False}}, '_out_ports': {0: {'control_flow': False}}, 'name': '92', 'op': 'ReduceL2', 'precision': 'FP32', 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False}
[ ERROR ] There is no registered "infer" function for node "92" with op = "ReduceL2". Please implement this function in the extensions.
 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #37.
Stopped shape/value propagation at "92" node.
h__zf
Beginner
1,475 Views

ruivo, cedric wrote:

Hello,

I'm facing the same error. Did --input_shape work for you? Did you find another way?

I am trying to optimize the model from this repo: https://github.com/naver/r2d2. I convert the net to ONNX and try to optimize it with mo.py, and get the following trace:
[ ERROR ] Cannot infer shapes or values for node "92".
[ ERROR ] There is no registered "infer" function for node "92" with op = "ReduceL2". Please implement this function in the extensions.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_...), question #37.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ 2019-12-24 11:19:57,552 ] [ DEBUG ] [ infer:196 ] Node "92" attributes: {'pb': input: "85"
output: "92"
op_type: "ReduceL2"
attribute {
name: "axes"
ints: 1
type: INTS
}
attribute {
name: "keepdims"
i: 1
type: INT
}
, 'kind': 'op', '_in_ports': {0: {'control_flow': False}}, '_out_ports': {0: {'control_flow': False}}, 'name': '92', 'op': 'ReduceL2', 'precision': 'FP32', 'is_output_reachable': True, 'is_undead': False, 'is_const_producer': False, 'is_partial_inferred': False}
[ ERROR ] There is no registered "infer" function for node "92" with op = "ReduceL2". Please implement this function in the extensions.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_...), question #37.
Stopped shape/value propagation at "92" node.

First, find out which layer in your code corresponds to ReduceL2, then reimplement it with supported operations.

Like this:

    norm = torch.norm(input, 2, axis, True)

->  norm = torch.sum(input ** 2, dim=axis, keepdim=True) ** 0.5
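
To put that in context, here is a rough sketch (the class and names are illustrative, not taken from the original model) of an L2-normalization layer written without torch.norm, so that the exported ONNX graph contains Pow/ReduceSum instead of ReduceL2:

    import torch
    import torch.nn as nn

    class L2Norm(nn.Module):
        """L2-normalize along `axis` using ops the Model Optimizer supports."""
        def __init__(self, axis=1, eps=1e-12):
            super().__init__()
            self.axis = axis
            self.eps = eps

        def forward(self, x):
            # Equivalent to torch.norm(x, 2, self.axis, keepdim=True),
            # but decomposed so no ReduceL2 node is emitted on export.
            norm = torch.sum(x ** 2, dim=self.axis, keepdim=True) ** 0.5
            return x / norm.clamp(min=self.eps)

    # After swapping this in for the torch.norm-based layer, re-export, e.g.:
    # torch.onnx.export(model, torch.randn(1, 3, 224, 224), "model.onnx")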

Abhik_B
Novice
1,448 Views

I was having this same problem on OpenVINO 2020.3.194 when trying to convert MobileFaceNet (PyTorch) to IR through ONNX. Reimplementing the l2_norm as this thread suggested solved it. Thanks!
