* I am trying to convert a fine-tuned SSD MobileNet V2 FPNLite (640x640) model into the OpenVINO representation, but it is giving me errors (mentioned below).
* The SavedModel was obtained after doing transfer learning via the TensorFlow Object Detection API.
* The OpenVino version is 2021.3.394
* The command that I am using is:
python3 mo.py --input_model /home/ravi/Downloads/out/saved_model/saved_model.pb --output_dir /home/ravi/Downloads/SSDOUT --data_type FP16
* But I have tried different variants of it, with and without --data_type, and with mo_tf.py.
I am attaching the screenshots of the errors.
After that, I tried using the approach mentioned here for using a saved_model
I'm getting this error there.
[ WARNING ] Failed to parse a tensor with Unicode characters. Note that Inference Engine does not support string literals, so the string constant should be eliminated from the graph.
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node Const_30.
Original exception message: 'ascii' codec can't decode byte 0xfd in position 183: ordinal not in range(128)
Thanks for reaching out.
The error 'ascii' codec can't decode byte generally happens when you try to convert a Python 2.x str that contains non-ASCII characters to a Unicode string without specifying the encoding of the original string. The SSD MobileNet V2 FPNLite (640x640) model is a pre-trained model from the TensorFlow 2 Detection Model Zoo, so it should be frozen using TensorFlow 2 before being converted to the OpenVINO Intermediate Representation (IR) format. You can refer to the Freezing Custom Models in Python* documentation. Meanwhile, can you share your model file or the source of your model for us to test it on our side?
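As a rough sketch of that TF2 freezing step, the snippet below folds a model's variables into constants and writes out a frozen GraphDef. It uses a toy Keras model as a stand-in for your exported detector; with your actual model you would load the SavedModel with tf.saved_model.load and freeze its serving_default signature instead. The shapes and filenames here are illustrative assumptions, not part of your export.

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Toy stand-in for the exported detector; with a real export you would
# use: func = tf.saved_model.load(dir).signatures["serving_default"]
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

# Wrap the model in a concrete function so its variables can be traced.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec([1, 8], tf.float32)
)

# Fold all variables into constants -- this is the TF2 "freezing" step.
frozen_func = convert_variables_to_constants_v2(concrete_func)
graph_def = frozen_func.graph.as_graph_def()

# Serialize the frozen GraphDef for the Model Optimizer.
tf.io.write_graph(graph_def, ".", "frozen_graph.pb", as_text=False)

print([t.name for t in frozen_func.outputs])
```

The printed tensor names are the output node names the Model Optimizer asks for.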
Meanwhile, I would recommend that you upgrade your OpenVINO installation to our latest version (2021.4) for better feature support.
Thanks for the reply.
But the export scripts that are available need a meta file, and the outputs that I got from the training don't include that file.
I am attaching the folder structure of the model checkpoints and saved_model that was generated.
Could you refer me to any documentation for exporting the saved_model to a frozen graph? The one that you mentioned in the reply needs the names of the output nodes, but in order to get them we need that meta file, which the TensorFlow Object Detection API doesn't generate.
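For what it's worth, a TF2 SavedModel carries its output names in the serving signature, so a .meta checkpoint file shouldn't be needed to find them. The sketch below builds and saves a toy tf.Module just so it is self-contained; with the real export you would skip the save step and point tf.saved_model.load at the directory produced by exporter_main_v2.py. The "scores" output name is an invented example.

```python
import tensorflow as tf

# Toy module standing in for the exported detector.
class Toy(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([4, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        return {"scores": tf.matmul(x, self.w)}

module = Toy()
tf.saved_model.save(
    module, "toy_saved_model",
    signatures=module.__call__.get_concrete_function(),
)

# Read the output names straight from the serving signature --
# no .meta checkpoint file is involved.
loaded = tf.saved_model.load("toy_saved_model")
sig = loaded.signatures["serving_default"]
print({k: v.name for k, v in sig.structured_outputs.items()})
```

You can get the same information from the command line with `saved_model_cli show --dir <saved_model_dir> --all`, which ships with TensorFlow.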
Also, the OpenVINO documentation clearly states that we can directly convert a saved_model.pb to IR format if we use the --saved_model_dir argument. But that didn't work, and I'm unable to understand why.
I used the code snippet provided in the OpenVINO documentation to freeze my current saved_model, but it is not able to recognize the output nodes.
Please look into this.
I am attaching the text files of the logs generated.
Here file.txt contains the code that I used to run the freeze operation.
And logs2.txt refers to the logs generated.
Do you use the model from our OpenVINO Detection Model Zoo source? If not, please share your model or the source of your model for us to investigate further. Meanwhile, which version of TensorFlow did you use to train the model?
An update for you: I can convert the SSD MobileNet V2 FPNLite (640x640) model using the command below. Please give it a try and share the result. If this command does not work on your side, you can share your model for us to investigate further.
mo.py --saved_model_dir "\Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\saved_model" --reverse_input_channels --input_shape=[1,640,640,3] --transformations_config "<INSTALL_DIR>\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.0.json" --tensorflow_object_detection_api_pipeline_config "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\pipeline.config" --output_dir "Downloads\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8"
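Since your original command used Linux paths, the same invocation would look roughly like this on Linux. The install prefix /opt/intel and the Downloads paths below are assumptions based on a default OpenVINO 2021.4 installation and your earlier command; adjust them to your setup.

```shell
# Assumed paths: default OpenVINO 2021.4 install under /opt/intel and
# the model directories from the earlier commands in this thread.
python3 /opt/intel/openvino_2021.4.582/deployment_tools/model_optimizer/mo.py \
  --saved_model_dir /home/ravi/Downloads/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model \
  --reverse_input_channels \
  --input_shape=[1,640,640,3] \
  --transformations_config /opt/intel/openvino_2021.4.582/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v2.0.json \
  --tensorflow_object_detection_api_pipeline_config /home/ravi/Downloads/ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/pipeline.config \
  --output_dir /home/ravi/Downloads/SSDOUT
```

Note that the whole SavedModel directory is passed via --saved_model_dir, not the saved_model.pb file itself, and the transformations config must match the Object Detection API version used for training (ssd_support_api_v2.0.json for TF2 exports).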
Thank you for your question. If you need any additional information from Intel, please submit a new question as this thread is no longer being monitored.