I have trained an ssd_inception_v2 model with the TensorFlow Object Detection API, following the steps in the guide below.
I then exported the checkpoint (ckpt) file to a frozen inference graph, as described in the same guide. The frozen model gives correct predictions on a test data set in a TensorFlow pipeline, but I am unable to convert the same graph to OpenVINO IR format.
Following are the details:
python mo_tf.py --input_model /home/ubuntu/tensorflow/exported_graphs/frozen_inference_graph.pb --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json --tensorflow_object_detection_api_pipeline_config /home/ubuntu/tensorflow/trained_model/ssd_inception_v2_coco_2018_01_28/pipeline.config -o /home/ubuntu/openvino/ir/
Error when converting the graph:
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.ChangePlaceholderTypes.ChangePlaceholderTypes'>): Something bad has happened with graph! Data node "Preprocessor/mul" has 2 producers
Detailed conversion log attached in the post.
TensorFlow version used for model training: 1.15
OpenVINO version used for conversion: 2020.3 (also tried with version 2020.4)
Export model log attached in the post.
Before writing this post, I searched the forum to see whether this is a known conversion issue and found a related post, but it did not resolve my problem.
Has anyone faced the same issue before and managed to get past this conversion error?
I am using TensorFlow v1.15, which is an October 2019 release. For privacy reasons, I have shared the model with you in a direct message.
I have replicated your issue using the guide you mentioned and managed to convert it to IR using model optimizer.
I changed "pip install --ignore-installed --upgrade tensorflow==1.14" to 1.15 and used ssd_inception_v2_coco from the model zoo.
Version 1.13.0 of the TensorFlow Models releases was used.
The transformations_config used for model optimizer is "ssd_support.json" instead of "ssd_support_api_v1.15.json".
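Putting those changes together, the Model Optimizer invocation would look roughly like this. Note this is a sketch only: the install and file paths below are assumptions for a default Linux installation, and the command is echoed as a dry run so it can be checked before executing on a machine where the paths exist.

```shell
# Sketch: mo_tf.py invocation using ssd_support.json instead of
# ssd_support_api_v1.15.json. All paths are assumed; adjust to your setup.
MO_DIR=/opt/intel/openvino/deployment_tools/model_optimizer
CMD="python3 $MO_DIR/mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --transformations_config $MO_DIR/extensions/front/tf/ssd_support.json \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  -o ./ir"
echo "$CMD"   # dry run: prints the assembled command instead of executing it
```

Removing the echo (and pointing the paths at your actual model and OpenVINO installation) runs the conversion.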
I hope these details help.
Hello @Rizal_Intel, I was a bit caught up with some other work. Here is what I tried out on my machine after reading your response.
1. Installed TensorFlow 1.14 and 1.15 in fresh Python 3 virtual environments.
pip freeze output from the 1.14 environment:
absl-py==0.10.0 astor==0.8.1 decorator==4.4.2 defusedxml==0.6.0 gast==0.4.0 google-pasta==0.2.0 grpcio==1.32.0 h5py==2.10.0 importlib-metadata==2.0.0 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.2 Markdown==3.3 networkx==2.5 numpy==1.19.2 protobuf==3.13.0 six==1.15.0 tensorboard==1.14.0 tensorflow==1.14.0 tensorflow-estimator==1.14.0 termcolor==1.1.0 Werkzeug==1.0.1 wrapt==1.12.1 zipp==3.3.0
pip freeze output from the 1.15 environment:
absl-py==0.10.0 astor==0.8.1 decorator==4.4.2 defusedxml==0.6.0 gast==0.2.2 google-pasta==0.2.0 grpcio==1.32.0 h5py==2.10.0 importlib-metadata==2.0.0 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.2 Markdown==3.3 networkx==2.5 numpy==1.19.2 opt-einsum==3.3.0 protobuf==3.13.0 six==1.15.0 tensorboard==1.15.0 tensorflow==1.15.0 tensorflow-estimator==1.15.1 termcolor==1.1.0 Werkzeug==1.0.1 wrapt==1.12.1 zipp==3.3.0
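The environment setup in step 1 can be sketched as follows (environment and file names are illustrative; the TensorFlow pin itself needs network access and a Python <= 3.7 interpreter, so that line is shown commented out):

```shell
# Sketch of a fresh, isolated per-version environment. Repeat analogously
# with tensorflow==1.14 in a second environment.
python3 -m venv tf115-env              # create the virtual environment
. tf115-env/bin/activate               # activate it
python -V                              # confirm which interpreter is active
# pip install --ignore-installed --upgrade tensorflow==1.15
pip freeze > tf115-freeze.txt          # record the exact package set
deactivate
```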
2. I used the following command for model conversion:
python mo.py --input_model frozen_inference_graph.pb --transformations_config /home/acer/openvino_toolkit/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config /home/acer/Downloads/ssd_inception_v2_coco_2018_01_28/ssd_inception_v2_coco_2018_01_28/pipeline.config -o /home/acer/Downloads/
3. I got the following output:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: frozen_inference_graph.pb
- Path for generated IR: /home/acer/Downloads/
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /home/acer/Downloads/ssd_inception_v2_coco_2018_01_28/ssd_inception_v2_coco_2018_01_28/pipeline.config
- Use the config file: None
Model Optimizer version:
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPISSDPostprocessorReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ANALYSIS INFO ] Your model looks like TensorFlow Object Detection API Model.
Check if all parameters are specified:
--tensorflow_use_custom_operations_config
--tensorflow_object_detection_api_pipeline_config
--input_shape (optional)
--reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.ChangePlaceholderTypes.ChangePlaceholderTypes'>): Something bad has happened with graph! Data node "Preprocessor/mul" has 2 producers
I downloaded a fresh copy of ssd_inception_v2_coco_2018_01_28 model from the link you had mentioned.
OpenVINO version used: 2020.3
OS: Ubuntu 18.04
I am still unable to convert the model on my machine. Could you please share the OS and OpenVINO versions you used?
I am not sure what is going wrong here.
My replication was done using OpenVINO 2020.4 on Windows 10.
Could you try using version 2020.4 to see if it works?
I'll try replicating it using Ubuntu later on.
I am using Ubuntu 18.04.5 LTS and OpenVINO 2020.4.
I managed to convert a fresh download of ssd_inception_v2_coco_2018_01_28 model from the model zoo.
Before converting, did you exit the virtual environment and run /opt/intel/openvino_xxxx/bin/setupvars.sh?
Yes, I was able to convert the original pre-trained model, but the issue is with the model I shared with you. That model was fine-tuned on my dataset, and it did not convert.
I will use OpenVINO 2020.4 and attempt the conversion again.
Yes, I did use setupvars.sh script before conversion.
I have managed to replicate the Object Detection API steps (virtualenv, training, and IR conversion) on Ubuntu 18.04 with OpenVINO 2020.4, TensorFlow 1.15, TensorFlow Models 1.13, and the ssd_inception_v2 model from the TensorFlow 1 Detection Model Zoo.
Unfortunately, your model is not suitable for conversion to OpenVINO IR due to some changes in the TensorFlow Models library.
This issue exists on the main branch of TensorFlow Models (issue discussion example).
I would recommend redoing the steps with the exact parameters shown in the guide (please use TensorFlow Models 1.13 or, as in your original link, a pre-July-2020 release).
Note that TensorFlow and TensorFlow Models are two different repositories, each with its own versioning.
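The mechanics of pinning a repository to a release tag are sketched below on a throwaway local repository; the real step would be cloning tensorflow/models and checking out its v1.13.0 tag instead of using the main branch.

```shell
# Illustrative only: create a local repo with a tag and pin to it.
# Substitute a real clone of tensorflow/models and its v1.13.0 release tag.
git init -q demo
git -C demo -c user.email=you@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git -C demo tag v1.13.0            # the release tag to pin to
git -C demo checkout -q v1.13.0    # detached HEAD at the pinned release
git -C demo describe --tags        # confirms the checkout: prints v1.13.0
```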
Thank you @Rizal_Intel for actively trying to resolve this. I understand that I need to start from scratch: I will use the appropriate version of TensorFlow Models as you suggested, re-train the model, and attempt the IR conversion again. This whole process might take a few more days as I am working on another task; once I finish, I will share my updates here.
Intel will no longer monitor this thread since we have provided a solution.
If you need any additional information from Intel, please submit a new question.