Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Retrained SSD MobileNet V2 Optimiser Error

Daniel_UWA
Beginner

I have retrained MobileNet V2 on my custom dataset (TensorFlow V1). I then exported the model using the script

python research/object_detection/export_inference_graph.py

to get the frozen inference graph.

 

I have tried several options when running the optimiser, and each one produces an error.

 

mo --input_model frozen_inference_graph.pb

 

Produces the following error:

Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
[ ERROR ] Cannot infer shapes or values for node "image_tensor".
[ ERROR ] Not all output shapes were inferred or fully defined for node "image_tensor".
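This first error is because the exported graph's image_tensor input has dynamic dimensions ([-1, -1, -1, 3] means batch, height, and width are unknown). A minimal sketch of the check the Model Optimizer is effectively making (the function name `fully_defined` is my own, not part of MO):

```python
# Hedged illustration: Model Optimizer needs every input dimension to be a
# positive integer; -1 marks an unknown (dynamic) dimension in the frozen graph.
def fully_defined(shape):
    return all(isinstance(d, int) and d > 0 for d in shape)

print(fully_defined([-1, -1, -1, 3]))   # the exported graph's dynamic shape -> False
print(fully_defined([1, 300, 300, 3]))  # a shape passed via --input_shape   -> True
```

This is why the error message suggests overriding the input shape with positive integers via --input_shape.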

 

mo --input_model frozen_inference_graph.pb --input_shape [1,300,300,3]

 

Produces the following error:

 

Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ] The non-constant start/end values for Slice operation "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" are not supported

 

mo --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --input_shape [1,300,300,3]

 

Produces the following error:

Cannot infer shapes or values for node "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1".
[ ERROR ] The non-constant start/end values for Slice operation "Postprocessor/BatchMultiClassNonMaxSuppression/map/while/Slice_1" are not supported

 

Using the input shape defined here (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model.html) produces the same error as [1,300,300,3].

 

Has anyone had this error before? Is there a solution, or something I need to change to get it working?

 

Thanks

 

Zulkifli_Intel
Moderator

Hello Daniel,

Thank you for reaching out to us.

 

Please share your retrained MobileNet V2 model with us, along with a link to the original model.

 

In the meantime, please try to convert the model using this command:

 

mo --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --input_shape [1,300,300,3] --reverse_input_channels --transformations_config ssd_v2_support.json

 

You can get the ssd_v2_support.json file from the attachment. If you installed OpenVINO using pip, the file can be found at this location:

<install_location>\openvino_env\Lib\site-packages\openvino\tools\mo\front\tf\ssd_v2_support.json
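For reference, a small sketch that builds that path with Python's pathlib (the "openvino_env" folder name mirrors the pip-install layout above; your environment name and install location may differ):

```python
# Hedged sketch: build the expected ssd_v2_support.json location for a
# pip-installed OpenVINO. "openvino_env" is the assumed virtualenv name.
from pathlib import Path

def ssd_support_path(install_location: str) -> Path:
    return (Path(install_location) / "openvino_env" / "Lib" / "site-packages"
            / "openvino" / "tools" / "mo" / "front" / "tf" / "ssd_v2_support.json")

print(ssd_support_path("."))
```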

 

 

Sincerely,

Zulkifli 

 

Zulkifli_Intel
Moderator

Hello Daniel,


Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.


Sincerely,

Zulkifli

