Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

SSD Mobilenet Conversion to IR format

ravi31
Novice

* I am trying to convert a fine-tuned SSD MobileNet V2 FPNLite (640x640) model into the OpenVINO representation, but it is giving me errors (mentioned below).

* The SavedModel was obtained after doing transfer learning via the TensorFlow Object Detection API.

* The OpenVINO version is 2021.3.394.

* The command that I am using is:

python3 mo.py --input_model /home/ravi/Downloads/out/saved_model/saved_model.pb --output_dir /home/ravi/Downloads/SSDOUT --data_type FP16

 

* But I have tried different variants of it, with and without --data_type, and with mo_tf.py.

I am attaching the screenshots of the errors.

 

9 Replies
ravi31
Novice

After that, I tried the approach mentioned here for using a saved_model:

https://docs.openvinotoolkit.org/2018_R5/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#loading-nonfrozen-models

 

I'm getting this error there.

[ WARNING ] Failed to parse a tensor with Unicode characters. Note that Inference Engine does not support string literals, so the string constant should be eliminated from the graph.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Const_30.
Original exception message: 'ascii' codec can't decode byte 0xfd in position 183: ordinal not in range(128)

IntelSupport
Community Manager

Hi ravi31,

Thanks for reaching out.

The error 'ascii' codec can't decode byte generally happens when a string that contains non-ASCII bytes is converted to a Unicode string without specifying the encoding of the original string. The SSD MobileNet V2 FPNLite (640x640) model is a pre-trained model from the TensorFlow 2 Detection Model Zoo, so it should be frozen using TensorFlow 2 before converting to the OpenVINO Intermediate Representation (IR) format. You can refer to the Freezing Custom Models in Python* documentation. Meanwhile, can you share your model file or the source of your model for us to test it on our side?
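For reference, this kind of failure can be reproduced in plain Python: the byte 0xfd is outside the ASCII range, so decoding it with the ASCII codec raises the same UnicodeDecodeError seen in the Model Optimizer log.

```python
# Minimal reproduction of the decode failure: byte 0xfd is not valid
# ASCII, so the ASCII codec refuses to decode it.
data = b"abc\xfd"
try:
    data.decode("ascii")
except UnicodeDecodeError as err:
    message = str(err)

print(message)  # 'ascii' codec can't decode byte 0xfd in position 3: ordinal not in range(128)
```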

 

I would also recommend upgrading your OpenVINO to our latest version (2021.4) for better feature support.

 

Regards,

Aznie

 

 

ravi31
Novice

Thanks for the reply.

But the scripts that are available for exporting need a .meta file, and the outputs that I got from training don't include that file.

I am attaching the folder structure of the model checkpoints and saved_model that was generated.

 

Could you refer me to any documentation for exporting the saved_model to a frozen graph? The one that you mentioned in your reply needs the names of the output nodes, but to get them we need that .meta file, which the TensorFlow Object Detection API doesn't generate.
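One possible way around the missing .meta file, sketched below on the assumption that TensorFlow 2.x is installed: the output names of a TF2 SavedModel can be read from its serving signature rather than from a .meta file. The tiny Keras model here is only a stand-in for the real detection export.

```python
import os
import tempfile

import tensorflow as tf

# Build and save a tiny stand-in model; the real model would be the
# Object Detection API SavedModel export.
inp = tf.keras.Input(shape=(4,), name="images")
out = tf.keras.layers.Dense(2, name="detections")(inp)
tiny = tf.keras.Model(inp, out)

export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
tf.saved_model.save(tiny, export_dir)

# Reload and inspect the serving signature: its outputs are the
# "output node names" that would otherwise come from a .meta file.
reloaded = tf.saved_model.load(export_dir)
sig = reloaded.signatures["serving_default"]
output_names = list(sig.structured_outputs.keys())
print(output_names)
```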

 

Also, the OpenVINO documentation clearly states that we can directly convert a saved_model.pb to IR format using the --saved_model_dir argument. But that didn't work, and I'm unable to understand why.

ravi31
Novice

Hello

I used the code snippet provided in the OpenVINO documentation to freeze my current saved_model, but it is not able to recognize the output nodes.

Please look into this.

I am attaching the text files of the logs generated.

 

Here, file.txt contains the code that I used to run the freeze operation.

And logs2.txt contains the generated logs.

 

 

IntelSupport
Community Manager

Hi ravi31,

Do you use the model from our OpenVINO Model Zoo? If not, please share your model or the source of your model for us to investigate further. Meanwhile, which version of TensorFlow did you use to train the model?

 

Regards,

Aznie

 

IntelSupport
Community Manager

Hi ravi31,

An update for you: I was able to convert the SSD MobileNet V2 FPNLite (640x640) model using the command below. Please give it a try and share the result. If this command does not work on your side, you can share your model for us to investigate further.


Command:

mo.py --saved_model_dir "\Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\saved_model" --reverse_input_channels --input_shape=[1,640,640,3] --transformations_config "<INSTALL_DIR>\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.0.json" --tensorflow_object_detection_api_pipeline_config "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\pipeline.config" --output_dir "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8"
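For clarity, the same invocation can be sketched as an argument list with each flag annotated. All paths below are placeholders for this illustration, not verified locations on any particular machine.

```python
# Hedged sketch: the Model Optimizer flags from the command above,
# spelled out one per line. Paths are placeholders.
mo_args = [
    "mo.py",
    "--saved_model_dir", "ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model",  # TF2 SavedModel directory
    "--reverse_input_channels",        # swap RGB/BGR channel order at the input
    "--input_shape", "[1,640,640,3]",  # fixed batch, height, width, channels
    "--transformations_config", "extensions/front/tf/ssd_support_api_v2.0.json",  # SSD support config for the TF2 OD API
    "--tensorflow_object_detection_api_pipeline_config", "pipeline.config",       # training pipeline config from the model folder
    "--output_dir", "out",             # where the IR (.xml/.bin) is written
]
command = " ".join(mo_args)
print(command)
```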

 

Regards,

Aznie


ravi31
Novice

The above command solved the ASCII issue but led to some Model Optimizer errors.

I am attaching the logs.

 

IntelSupport
Community Manager

Hi Ravi31,

Please share your model for us to test it on our side and investigate more regarding the issue.


Regards,

Aznie


IntelSupport
Community Manager

Hi Ravi31,

Thank you for your question. If you need any additional information from Intel, please submit a new question, as this thread is no longer being monitored.


Regards,

Aznie

