Beginner

TF Object Detection 2 Model Zoo models not working with model optimizer


With the release of the TensorFlow 2 Object Detection API, the TensorFlow team has uploaded a new model zoo to go with it.

As-is, these models don't seem to work with model optimizer, 2020.4 version. Taking the simplest example, SSD Mobilenet V2 320x320 (which can be found at http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_...)

and running model optimizer with this command line:

python mo_tf.py --saved_model_dir saved_model --tensorflow_object_detection_api_pipeline_config pipeline.config --transformations_config extensions\front\tf\ssd_v2_support.json --input_shape [1,320,320,3]

We get the below errors:

[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIPreprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Shape is not defined for output 1 of "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_39/NonMaxSuppressionV5".
[ ERROR ] Cannot infer shapes or values for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_39/NonMaxSuppressionV5".
[ ERROR ] Not all output shapes were inferred or fully defined for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_39/NonMaxSuppressionV5".
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function NonMaxSuppression.infer at 0x000002AD37CF5288>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_39/NonMaxSuppressionV5" node.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #38.

Other things we've tried:

1. Using --input_checkpoint : no difference

2. Changing the transformations config to ssd_support_api_v1.15.json : no difference

3. Removing --input_shape : no difference, plus an extra error asking for the input shape to be defined

4. Trying a different model - http://download.tensorflow.org/models/object_detection/tf2/20200713/centernet_hg104_512x512_coco17_t... : no difference
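One extra sanity check (a rough sketch, no OpenVINO or TensorFlow API needed): TF op type names appear as plain strings inside the serialized saved_model.pb, so a simple byte scan can confirm that the NonMaxSuppressionV5 op from the error log really is baked into the exported graph. The helper name and path below are illustrative:

```python
# Rough heuristic: TF op type names are stored as plain strings inside the
# serialized saved_model.pb, so a byte scan can confirm whether an op such
# as NonMaxSuppressionV5 is present without loading TensorFlow at all.
# This is a sketch, not a proper GraphDef parse; false positives are possible
# (e.g. a node *name* containing the string), and the path is a placeholder.

def scan_for_ops(pb_path, op_names=("NonMaxSuppressionV5",)):
    """Return the subset of op_names whose bytes occur in the file."""
    with open(pb_path, "rb") as f:
        data = f.read()
    return {op for op in op_names if op.encode("utf-8") in data}
```

For example, `scan_for_ops("saved_model/saved_model.pb")` returning a non-empty set would confirm the unsupported op is present in the export itself rather than being introduced by Model Optimizer.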

I'm out of ideas at this point. I guess TF OD 2 isn't supported yet. Are there any fixes/workarounds?

Thanks

Moderator

Greetings,


First and foremost, please check that your model's layers are supported:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layer...


Next, models that are not frozen need to be frozen first; you can refer here:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...


This is how to convert SSD models created with the TensorFlow* Object Detection API (however, this is deprecated): https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_customize_model_optimizer_...



Sincerely,

Iffa





Beginner

Hi Iffa,

I'm not sure I understand. The link I gave in my post was to one of the official Tensorflow Object Detection 2 Model Zoo models - for Mobilenet V2 SSD. It is already frozen, I believe.

I've also tried freezing my own models using the current TF Object Detection scripts - exporter_main_v2.py - which produces the TF2 SavedModel format, which I thought was supported by Model Optimizer.

I don't want to dig into which individual layer operations might not be supported by OpenVINO, as I'd assume the intent of OpenVINO is to support models created by a widely used API such as TensorFlow Object Detection.

If it's possible, can you please provide guidance on how you would freeze the model I linked (or any other model from the Object Detection 2 Model Zoo) and then convert it with model optimizer?

Moderator


You can view the step-by-step IR conversion process here:

https://www.youtube.com/watch?v=QW6532LtiTc&t=196s


There are 3 ways to load a non-frozen model into Model Optimizer:

1. Checkpoint

2. MetaGraph

3. SavedModel format of TensorFlow 1.x and 2.x versions
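For reference, those three modes correspond to three different mo_tf.py flags. A minimal sketch (the helper function is hypothetical; the flag names come from the Model Optimizer documentation, and checkpoint mode is normally combined with --input_model for the graph file):

```python
# Hypothetical helper mapping each non-frozen loading mode onto the
# mo_tf.py flag it uses. Paths passed in are placeholders; checkpoint
# mode usually also needs --input_model pointing at the graph.

MODE_FLAGS = {
    "checkpoint": "--input_checkpoint",   # 1. Checkpoint
    "metagraph": "--input_meta_graph",    # 2. MetaGraph
    "saved_model": "--saved_model_dir",   # 3. SavedModel (TF 1.x and 2.x)
}

def mo_command(mode, path):
    """Build the mo_tf.py argument list for the given loading mode."""
    return ["python", "mo_tf.py", MODE_FLAGS[mode], path]
```

For example, `mo_command("saved_model", "saved_model/")` yields the same invocation shape used in the first post of this thread.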


You can view the detailed steps here: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...


Also, this is a detailed video on how to use Model Optimizer to convert models built with the Object Detection API:

https://www.youtube.com/watch?v=cbdS3BjjbaQ&t=163s



Sincerely,

Iffa



I too have not had any luck converting the TensorFlow 2 mobilenet_v2_ssd models using the model optimiser.

I am able to get the TensorFlow 1 models of the same type to parse and be converted into IR correctly, though with the upgrade to 2020.4 I had to change from ssd_v2_support.json to ssd_support_api_v1.15.json.

Example conversion script:

mo.py --input_model frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --input_shape [1,300,300,3] \
    --reverse_input_channels

 

I get the following output when trying to convert the ssd_mobilenet_v2_320x320_coco17 model, which should be functionally equivalent:

[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIPreprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Shape is not defined for output 1 of "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_34/NonMaxSuppressionV5".
[ ERROR ] Cannot infer shapes or values for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_34/NonMaxSuppressionV5".
[ ERROR ] Not all output shapes were inferred or fully defined for node "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_34/NonMaxSuppressionV5".
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #40.
[ ERROR ]
[ ERROR ] It can happen due to bug in custom shape infer function <function NonMaxSuppression.infer at 0x7f7e4ae200d0>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/non_max_suppression_with_scores_34/NonMaxSuppressionV5" node.

 

The model optimiser call uses the saved_model_dir argument, but I don't think it's setting the environment up the same way:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --saved_model_dir ~/Downloads/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/ \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json \
    --input_shape=[1,320,320,3] \
    --reverse_input_channels \
    --tensorflow_object_detection_api_pipeline_config ~/Downloads/ssd_mobilenet_v2_320x320_coco17_tpu-8/pipeline.config

Beginner

Iffa,

I'm aware of those links. I'm not a beginner; I've been using model optimizer for over a year now.

I have a specific question about support for the current Tensorflow Object Detection Model Zoo (in particular the new 2.x based models).

I asked if you could point me to a process to convert such a model. If you were to download the model from the zoo I referred to and try to convert it with Model Optimizer, you'd either hit the same problem I have, or you wouldn't. In either case, I'd really like to see your results.

Moderator

Greetings,

 

From my hands-on perspective, the model you provided is not from our official resources, and it is not a correct TensorFlow model file; you can refer to my attachment. I'm not sure what the original file is, but I do know it is not the correct model. Please clarify whether this is a frozen model or not, because this might be the cause.

 

Instead, I downloaded a supported MobileNet model through the Python script in OpenVINO's model_downloader folder:

python downloader.py --name ssd_mobilenet_v2_coco -o <path>

 

I got the JSON file from

openvino_2020.4.287\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json

 

and the config file from the downloaded MobileNet folder:

public\ssd_mobilenet_v2_coco\ssd_mobilenet_v2_coco_2018_03_29\pipeline.config

 

Then I converted it to IR:

python mo_tf.py -m frozen_inference_graph.pb --tensorflow_use_custom_operations_config ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config

(refer attachment 2 for result)

 

and ran it with the object detection sample code:

python object_detection_demo_ssd_async.py -i inputVideo.mp4 -m frozen_inference_graph.xml

(refer to attachment 3 for result)

 

Here is the video to aid you: https://www.youtube.com/watch?v=cbdS3BjjbaQ

Since you are not a beginner, this should be a piece of cake ;-).

Note: if nothing's wrong with your framework/model, the same concept can be applied to convert a custom model to IR.

Sincerely,

Iffa

 

 


Hi Iffa,

If you look at my post, I tried to convert a TensorFlow model zoo model to IR format. The model you downloaded is actually a TensorFlow 1 model, downloaded from this page in the TensorFlow models repository: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.m...

The model you converted is actually the same one I was able to successfully convert.

However, our questions are about TensorFlow 2 models. There is a separate repository for TensorFlow 2 models, and this is where we have both been unable to convert to IR. These are official models, downloadable from: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.m...

Both zoos are official; you can access either the TensorFlow 1 or TensorFlow 2 models from this page:

https://github.com/tensorflow/models/tree/master/research/object_detection

My question is therefore still outstanding: models from the TensorFlow 2 Object Detection Zoo don't convert. Can you try to convert a TensorFlow 2 model, please?

Beginner

Hi Iffa,

Is there any update on Peter's question (he describes the problem better than I could)?

Moderator

Hi @Lyons__Martin  @milani__peter1 

The OpenVINO toolkit supports TensorFlow 2 models, but in a very limited way, since this support was only recently introduced in the latest 2020.4 release. You can find more details and instructions here: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_Convert_Mode...

The model provided in the topic's first message contains the NonMaxSuppressionV5 TF operation, which is currently not supported. This operation is not yet implemented within the plugins, so a model containing it cannot be converted to IR.
We apologize for the inconvenience.

Beginner

Hi Max,

Thanks for the answer. It's what I expected.

One point though: you say that NonMaxSuppressionV5 is not supported, yet the release notes for 2020.1 (https://docs.openvinotoolkit.org/2020.1/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) list it as supported?

Moving forward, I know you can't commit to future features, but I'd like to highlight that the TF2 model zoo is essentially causing the TF1 model zoo to be deprecated, so I'd hope this is a high priority for support.

Moderator (accepted solution)

Hi @Lyons__Martin 

We have information that, on the Model Optimizer side, NonMaxSuppressionV5 support is very limited. MO only supports the NonMaxSuppressionV5 variant with 5 inputs (or with 6 inputs where port 5 is zero) and 1 output.

With regard to TF2 models, the development team is aware of it, so we expect the support to be expanded further.
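For illustration, the constraint above can be written as a simple predicate (a hypothetical sketch, not actual Model Optimizer code; port 5 of NonMaxSuppressionV5 is its soft_nms_sigma input):

```python
# Sketch of the stated conversion constraint for NonMaxSuppressionV5:
# convertible only with 5 inputs, or 6 inputs where port 5 (soft_nms_sigma)
# is zero, and exactly 1 consumed output. Hypothetical helper, not MO code.

def nms_v5_convertible(num_inputs, port5_value=None, num_outputs=1):
    if num_outputs != 1:
        return False
    if num_inputs == 5:
        return True
    return num_inputs == 6 and port5_value == 0
```

This matches the error log in the first post, which stops at "output 1" of a NonMaxSuppressionV5 node, i.e. more than one output is consumed.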

Thank you.

Moderator

This thread will no longer be monitored since we have provided our suggestions. If you need any additional information from Intel, please submit a new question.

Beginner

Hi, I'm having the same issues as the OP. I would like to know if there is some way to convert the TensorFlow 2 object detection models, or is it still not possible? Thanks for your time.
