* I am trying to convert a fine-tuned SSD MobileNet V2 FPNLite (640x640) model into the OpenVINO intermediate representation, but it is giving me errors (mentioned below).
* The saved_model was obtained after transfer learning via the TensorFlow Object Detection API.
* The OpenVINO version is 2021.3.394.
* The command that I am using is:
python3 mo.py --input_model /home/ravi/Downloads/out/saved_model/saved_model.pb --output_dir /home/ravi/Downloads/SSDOUT --data_type FP16
* I have also tried different variants of it, with and without --data_type, and using mo_tf.py instead of mo.py.
I am attaching the screenshots of the errors.
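As a sanity check, one way to confirm that the exported directory really is a TF2 SavedModel and to see which signatures it exposes is a sketch along these lines (TensorFlow 2 is assumed; the path is the SavedModel directory from the command above):

# Sketch: verify the SavedModel directory and list its signatures.
import tensorflow as tf

sm_dir = "/home/ravi/Downloads/out/saved_model"
print(tf.saved_model.contains_saved_model(sm_dir))  # True for a valid SavedModel directory

model = tf.saved_model.load(sm_dir)
print(list(model.signatures.keys()))  # typically ['serving_default']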
After that, I tried the approach mentioned here for converting a saved_model.
I'm getting this error there:
[ WARNING ] Failed to parse a tensor with Unicode characters. Note that Inference Engine does not support string literals, so the string constant should be eliminated from the graph.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Const_30.
Original exception message: 'ascii' codec can't decode byte 0xfd in position 183: ordinal not in range(128)
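The warning above refers to string constants inside the graph; one way to locate them, assuming a TF2 SavedModel with the default "serving_default" signature, is a sketch like the following:

# Sketch: list string-typed Const nodes in the SavedModel graph
# (the path is the SavedModel directory from my earlier command).
import tensorflow as tf
from tensorflow.core.framework import types_pb2

model = tf.saved_model.load("/home/ravi/Downloads/out/saved_model")
graph_def = model.signatures["serving_default"].graph.as_graph_def()

nodes = list(graph_def.node)
for func in graph_def.library.function:  # constants often live inside library functions
    nodes.extend(func.node_def)

for node in nodes:
    if node.op == "Const" and node.attr["dtype"].type == types_pb2.DT_STRING:
        print("string constant:", node.name)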
Hi ravi31,
Thanks for reaching out.
The error "'ascii' codec can't decode byte" generally happens when a Python 2.x str containing non-ASCII characters is converted to a Unicode string without specifying the encoding of the original string. The SSD MobileNet V2 FPNLite (640x640) model is a pre-trained model from the TensorFlow 2 Detection Model Zoo, so it should be frozen using TensorFlow 2 before converting to the OpenVINO intermediate representation (IR) format. You can refer to the Freezing Custom Models in Python* documentation. Meanwhile, could you share your model file or the source of your model so we can test it on our side?
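For illustration, a minimal sketch of such a freeze under TensorFlow 2 (assuming the model exposes the default "serving_default" signature; the paths are placeholders) could look like this:

# Sketch: freeze a TF2 SavedModel into a single GraphDef.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.saved_model.load("/path/to/saved_model")
concrete_func = model.signatures["serving_default"]
frozen_func = convert_variables_to_constants_v2(concrete_func)

# These are the output tensor names that a freeze or conversion script usually asks for.
print([t.name for t in frozen_func.outputs])

tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir="/path/to/output_dir",
                  name="frozen_graph.pb",
                  as_text=False)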
I would also recommend upgrading your OpenVINO to our latest version (2021.4) for better feature support.
Regards,
Aznie
Thanks for the reply.
However, the export scripts that are available need a .meta file, and the outputs that I got from training do not include that file.
I am attaching the folder structure of the model checkpoints and saved_model that was generated.
Could you refer me to any documentation for exporting the saved_model to a frozen graph? The one you mentioned in your reply needs the names of the output nodes, but in order to get them we need that .meta file, which the TensorFlow Object Detection API does not generate.
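One way to look up candidate output names directly from the SavedModel signature, without any .meta file, might be a sketch like this (the "serving_default" signature and the path are assumptions), although I am not sure whether these are the names the freeze script expects:

# Sketch: read output names straight from the SavedModel signature.
import tensorflow as tf

model = tf.saved_model.load("/path/to/saved_model")
sig = model.signatures["serving_default"]

print(list(sig.structured_outputs.keys()))  # e.g. detection_boxes, detection_scores, ...
print([t.name for t in sig.outputs])        # the underlying graph tensor names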
Also, the OpenVINO documentation clearly states that we can directly convert a saved_model to the IR format by using the --saved_model_dir argument, but that did not work either, and I am unable to understand why.
Hello
I used the code snippet provided in the OpenVINO documentation to freeze my current saved_model, but it is not able to recognize the output nodes.
Please look into this.
I am attaching the text files of the logs generated.
Here, file.txt contains the code that I used to run the freeze operation, and logs2.txt contains the logs that were generated.
Hi ravi31,
Are you using the model from our OpenVINO Detection Model Zoo? If not, please share your model or the source of your model so we can investigate further. Also, which version of TensorFlow did you use to train the model?
Regards,
Aznie
Hi ravi31,
Here is an update for you. I was able to convert the SSD MobileNet V2 FPNLite (640x640) model using the command below. Please give it a try and share the result. If this command does not work on your side, you can share your model for us to investigate further.
Command:
mo.py --saved_model_dir "\Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\saved_model" --reverse_input_channels --input_shape=[1,640,640,3] --transformations_config "<INSTALL_DIR>\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.0.json" --tensorflow_object_detection_api_pipeline_config "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\pipeline.config" --output_dir "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8"
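Once the IR is generated, a quick way to confirm it loads in the Inference Engine (a sketch using the OpenVINO 2021.x Python API; the file names are placeholders) is:

# Sketch: load the generated IR and print its input/output names.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="saved_model.xml", weights="saved_model.bin")
print("inputs :", list(net.input_info.keys()))
print("outputs:", list(net.outputs.keys()))
exec_net = ie.load_network(network=net, device_name="CPU")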
Regards,
Aznie
The above command solved the ASCII issue but led to some Model Optimizer errors.
I am attaching the logs.
Hi Ravi31,
Please share your model for us to test it on our side and investigate more regarding the issue.
Regards,
Aznie
Hi Ravi31,
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.
Regards,
Aznie
