I am trying to get an ONNX model, based on a public pytorch u-net implementation, loaded into OpenVINO so I can use a Neural Compute Stick in a demo.
Inference works for the trained pytorch model in pytorch.
The ONNX model passes verification with the ONNX library.
I made it fully through the OpenVINO installation and both of the validation samples run.
I called the model optimizer like this:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx
Here is the model optimizer output:
Model Optimizer arguments:
- Path to the Input Model: models/model.onnx
- Path for generated IR: models/.
- IR output name: model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version: 2020.1.0-61-gd349c3ba4a
[ ERROR ] Concat input shapes do not match
[ ERROR ] Shape is not defined for output 0 of "101".
[ ERROR ] Cannot infer shapes or values for node "101".
[ ERROR ] Not all output shapes were inferred or fully defined for node "101".
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function concat_infer at 0x7fb171fbeae8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "101" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.
I did a cursory visual inspection looking for unsupported ONNX operators and didn't find any (and I would have expected a different error message in that case anyway).
The part of the graph that is failing is a Concat node. Rerunning with debug output shows:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model model.onnx --log_level=DEBUG
[ 2020-02-20 11:55:19,309 ] [ DEBUG ] [ infer:129 ] Partial infer for 101
[ 2020-02-20 11:55:19,309 ] [ DEBUG ] [ infer:130 ] Op: Concat
[ 2020-02-20 11:55:19,309 ] [ DEBUG ] [ infer:131 ] Inputs:
[ 2020-02-20 11:55:19,310 ] [ DEBUG ] [ infer:31 ] input: shape = [ 1 32 356 483], value = <UNKNOWN>
[ 2020-02-20 11:55:19,310 ] [ DEBUG ] [ infer:31 ] input: shape = [ 1 32 356 484], value = <UNKNOWN>
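This kind of mismatch is consistent with a U-Net skip connection on an odd spatial size: a 2x2/stride-2 pool either floors or ceils the dimension, and doubling it afterwards cannot land back on the original odd value. A quick arithmetic sketch, using the width from the log above (the floor/ceil pooling behavior here is my assumption, not something verified against either framework's code):

```python
# Hypothetical sketch: an odd width diverging through a pool/upsample
# round trip in a U-Net skip connection. 483 is the width from the log.

def pool2_floor(n):
    # MaxPool kernel 2, stride 2, no padding, ceil_mode=False
    return n // 2

def pool2_ceil(n):
    # The same pool with ceil_mode=True
    return (n + 1) // 2

def upsample2(n):
    # A 2x nearest/bilinear upsample simply doubles the size
    return 2 * n

w = 483                           # odd encoder width kept for the skip path
print(upsample2(pool2_floor(w)))  # 482
print(upsample2(pool2_ceil(w)))   # 484 -- neither matches the skip path's 483
```

Either rounding convention produces a width that can no longer be concatenated with the 483-wide skip tensor.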
PyTorch's API design for ONNX export makes it hard to screw up the input dimensions, since it's based on passing a valid sample input and presumably some form of tracing. ONNX didn't complain about shapes either.
If it's not an issue in OpenVINO, then there would have to be two separate bugs: one in pytorch's ONNX export and one in ONNX's validation tool (for not catching pytorch's mistake).
If the issue is in Intel's shape inference, I would suspect an off-by-one error, either for Conv when there is no image padding, or maybe for
MaxPool[kernel_shape = [2, 2], pads = [0, 0, 0, 0], strides = [2,2]]
or maybe for UpSample.
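To sanity-check those suspicions by hand, the standard output-size formulas can be written down directly. This is a sketch of the textbook convention (floor division by default, optional ceiling for pooling), not something I verified against the Model Optimizer's code:

```python
import math

def conv_out(n, k, pad=0, stride=1):
    # Standard convolution output size: floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

def pool_out(n, k, pad=0, stride=2, ceil_mode=False):
    # Pooling uses the same formula, optionally with ceiling division
    num = n + 2 * pad - k
    return (math.ceil(num / stride) if ceil_mode else num // stride) + 1

# An unpadded 3x3 conv shaves 2 pixels per dimension: 483 -> 481
print(conv_out(483, k=3, pad=0))                      # 481
# The MaxPool above: 483 -> 241 (floor) or 242 (ceil_mode)
print(pool_out(483, k=2, stride=2))                   # 241
print(pool_out(483, k=2, stride=2, ceil_mode=True))   # 242
```

If the converter and the original framework disagree on any of these conventions for a single layer, every downstream Concat inherits the off-by-one.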
Thank you for reaching out.
The Myriad Plugin supports the networks specified in the documentation here: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_MYRIAD.html
However, there is a chance your ONNX model may work. Which layers is your model based on?
Please see the Supported Layers by the Myriad Plugin (VPU) in the following link: https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_layers
Now I hit:
[ 2020-02-20 14:57:35,928 ] [ DEBUG ] [ infer:128 ] --------------------
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:129 ] Partial infer for 92
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:130 ] Op: Concat
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:131 ] Inputs:
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:31 ] input: shape = , value = [1 1]
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:31 ] input: shape = , value = [356 484]
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:144 ] Outputs:
[ 2020-02-20 14:57:35,929 ] [ DEBUG ] [ infer:31 ] output: shape = , value = [ 1 1 356 484]
[ ERROR ] Cannot infer shapes or values for node "94".
[ ERROR ] There is no registered "infer" function for node "94" with op = "Resize". Please implement this function in the extensions.
I attached the model exported to a newer version of ONNX, as suggested by pytorch. Resize is listed as supported, but I guess there is a new flavor of that operator in ONNX that isn't supported yet. No urgency to this; I'm quickly evaluating embedded platforms for a demo and can only spend a day or so on each (and I have already spent two on this).
It is possible that you are using an unsupported version of Resize; the documentation notes a limitation for the Resize operation: "Opset-10 version is supported". You can see this here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html#onnx_supported_operators
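For reference, the Resize operator changed its input signature between opsets, which would explain why the Model Optimizer has no infer function registered for the exported node. The table below is my reading of the ONNX operator spec (it is illustrative Python, not an onnx or OpenVINO API):

```python
# ONNX Resize input signatures by opset version, per the operator spec.
RESIZE_INPUTS = {
    10: ["X", "scales"],                  # the variant MO 2020.1 supports
    11: ["X", "roi", "scales", "sizes"],  # what newer pytorch exports emit
}

def resize_inputs(opset):
    # An operator keeps its definition until a later opset redefines it
    known = [v for v in sorted(RESIZE_INPUTS) if v <= opset]
    return RESIZE_INPUTS[known[-1]]

print(resize_inputs(10))  # ['X', 'scales']
print(resize_inputs(12))  # ['X', 'roi', 'scales', 'sizes']
```

Exporting from pytorch with opset_version=10 should produce the older two-input form that the Model Optimizer recognizes.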