
Unsupported layer in conversion of ONNX model


My conversion of a custom ONNX model with Model Optimizer failed with the following output:

[ INFO ]  Possible outputs: '169, 70, 119, 21, 42, 28, 77, 63, 187, 196, 112, 232, 133, 214, 205, 14, 178, 35, 105, 160, 151, 56, 142, 49, 126, 91, 223, 98, 7, 84' are not input reachable. True outputs are 481, 501, 471
[ WARNING ]  Instructions/layers that do not have attribute extractors:
[ WARNING ]      Expand (3)
[ WARNING ]          484
[ WARNING ]          498
[ WARNING ]          492
[ ERROR ]  Cannot infer shapes or values for node "484".
[ ERROR ]  There is no registered "infer" function for node "484" with op = "Expand". Please implement this function in the extensions.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #37.
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "484" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:     <withheld>.onnx
    - Path for generated IR:     <withheld>
    - IR output name:     <withheld>
    - Log level:     INFO
    - Batch:     Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:     Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:     Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:     FP32
    - Enable fusing:     True
    - Enable grouped convolutions fusing:     True
    - Move mean values to preprocess section:     False
    - Reverse input channels:     False
ONNX specific parameters:
Model Optimizer version:

Incidentally, tensors 484, 492, and 498 were all outputs of nodes with the Expand operation.
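For reference, ONNX Expand broadcasts its input to a target shape following NumPy-style broadcasting rules, so its semantics can be sketched with `numpy.broadcast_to` (this is just an illustration of the operator, not Model Optimizer code):

```python
import numpy as np

# ONNX Expand broadcasts an input tensor to a target shape using
# NumPy broadcasting rules: size-1 axes are replicated to match.
x = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)
y = np.broadcast_to(x, (3, 4))        # shape (3, 4)

assert y.shape == (3, 4)
assert np.array_equal(y[:, 0], np.array([1.0, 2.0, 3.0]))
```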

Can the Model Optimizer be extended to support the ONNX Expand operation? Or will future versions of OpenVINO support this?



There is no direct equivalent of Expand inside DLDT; it performs a NumPy-like broadcast. One can implement a Model Optimizer transformation that replaces the Expand operator with a series of Tile operations, one for each axis that needs to be broadcast. A similar transformation is already implemented for Eltwise broadcasting in a model-optimizer/extensions/back/ source file, though it is not a 100% equivalent. It would need to be copied into a separate middle-end transformation that aligns in_node(0).shape and in_node(1).value, similar to what EltwiseBroadcast does for in_node(0).shape and in_node(1).shape. It is also necessary to implement a Broadcast operation inside MO with a correct shape inference function.

The disadvantage of this approach (producing Tile layers instead of a single Expand) is that the shapes become fixed inside the Model Optimizer and cannot be changed at run time with the Inference Engine, though that may not be a limitation for some models. There is also certainly a performance penalty.
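The Expand-to-Tile idea above can be sketched in plain NumPy: each axis where the input size is 1 and the target size is larger is handled by one Tile (the function name and reshape step below are illustrative, not actual MO transformation code):

```python
import numpy as np

def expand_via_tiles(x, target_shape):
    """Emulate ONNX Expand with a series of Tile operations,
    one per axis that needs broadcasting (illustrative sketch)."""
    # Prepend size-1 axes so the ranks match (a Reshape in an IR graph).
    x = x.reshape((1,) * (len(target_shape) - x.ndim) + x.shape)
    for axis, (have, want) in enumerate(zip(x.shape, target_shape)):
        if have != want:
            # 'have' must be 1 here for a valid broadcast.
            reps = [1] * x.ndim
            reps[axis] = want
            x = np.tile(x, reps)   # one Tile layer per broadcast axis
    return x

a = np.arange(3).reshape(3, 1)
out = expand_via_tiles(a, (2, 3, 4))
assert out.shape == (2, 3, 4)
assert np.array_equal(out, np.broadcast_to(a, (2, 3, 4)))
```

As noted, the target shape is baked in at conversion time, which is exactly why the resulting IR cannot change shape at run time.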

Another option is to do more context-aware transformation of the model to try to eliminate Expand depending on other operations that consume the broadcasted result. To do that I would like to see the neighbors of Expand instances in the graph. Could you share some pieces of the model that involve Expand?

- Sergey


Thanks for the explanation and suggestions, Sergey!

We will work towards eliminating the Expand layer in our exported ONNX model for the time being.

/ Yee Keng