
Model Optimizer Error




Model Optimizer arguments:

Common parameters:
    - Path to the Input Model: /home/macnica/Desktop/bkav_pretrained_model/frozen_inference_graph.pb
    - Path for generated IR: /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/.
    - IR output name: frozen_inference_graph
    - Log level: ERROR
    - Batch: Not specified, inherited from the model
    - Input layers: Not specified, inherited from the model
    - Output layers: Not specified, inherited from the model
    - Input shapes: [1,1,1,3]
    - Mean values: Not specified
    - Scale values: Not specified
    - Scale factor: Not specified
    - Precision of IR: FP32
    - Enable fusing: True
    - Enable grouped convolutions fusing: True
    - Move mean values to preprocess section: False
    - Reverse input channels: True

TensorFlow specific parameters:
    - Input model in text protobuf format: False
    - Offload unsupported operations: False
    - Path to model dump for TensorBoard: None
    - List of shared libraries with TensorFlow custom layers implementation: None
    - Update the configuration file with input/output node names: None
    - Use configuration file used to generate the model with Object Detection API: None
    - Operations to offload: None
    - Patterns to offload: None
    - Use the config file: None

Model Optimizer version:   

/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/mo/ops/ FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.

  value = value[slice_idx]
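The FutureWarning above is a NumPy deprecation unrelated to the conversion failure itself: indexing an array with a plain list of slices is deprecated in favor of a tuple. A minimal illustration (the array and slice sequence here are made up for demonstration):

```python
import numpy as np

arr = np.arange(12).reshape(3, 4)
seq = [slice(0, 2), slice(1, 3)]  # a list of slices, similar to what MO builds internally

# Deprecated form -- triggers the FutureWarning (and fails on newer NumPy):
#   value = arr[seq]

# Recommended form: convert the sequence to a tuple before indexing.
value = arr[tuple(seq)]
print(value)  # the 2x2 sub-array arr[0:2, 1:3]
```

The warning can therefore be silenced in newer NumPy versions only by patching the Model Optimizer source to use the tuple form; it does not by itself stop the conversion.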

[ ERROR ]  Shape is not defined for output 0 of "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".

[ ERROR ]  Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".

[ ERROR ]  Not all output shapes were inferred or fully defined for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".

 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.

[ ERROR ] 

[ ERROR ]  It can happen due to bug in custom shape infer function <function Slice.infer at 0x7f20449d92f0>.

[ ERROR ]  Or because the node inputs have incorrect values/shapes.

[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).

[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.

[ ERROR ]  Stopped shape/value propagation at "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" node.

 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
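Errors of this kind (FAQ questions #38 and #40) typically appear when a TensorFlow Object Detection API frozen graph is converted without the accompanying pipeline config and the Object Detection API support JSON; note also that the log above shows no configuration file passed and an input shape of [1,1,1,3], which is unlikely to be valid for a detection model (SSD-style models usually take something like [1,300,300,3]). A sketch of a typical 2018.5-era invocation follows; the choice of ssd_v2_support.json is an assumption based on the BatchMultiClassNonMaxSuppression node names in the error, and the pipeline.config path is illustrative:

```shell
# Hedged sketch, not a verified fix: paths and the support JSON are assumptions.
cd /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer

python3 mo_tf.py \
    --input_model /home/macnica/Desktop/bkav_pretrained_model/frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config /home/macnica/Desktop/bkav_pretrained_model/pipeline.config \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --reverse_input_channels \
    --data_type FP32
```

If the model is a Faster R-CNN or Mask R-CNN rather than an SSD, the corresponding support file under extensions/front/tf/ should be used instead.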


Dear Purohit, Rushikesh S,

Did you ever resolve this? I would like to help you with this issue, but I don't have access to Dropbox. If you wish to transfer large files (such as models) to me, you can do so via Syncplicity - that is the easiest way.

You say that the model is "pre-trained". I take this to mean that it was not custom-trained - is that the case? And is it one of the models in the Model Optimizer supported TensorFlow models list?

Thanks, and I'm sorry it took so long to address this,