Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Trying with no success to use model optimizer to convert tensorflow graph

Supra__Morne_
Beginner
2,005 Views

Hi

I have been trying for days to convert a TensorFlow graph for use with the Neural Compute Stick 2.

I even re-created a VM to use the latest version of the OpenVINO toolkit, dated 01 April 2019.

I run the following command:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model inference_graph/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.10.json --tensorflow_object_detection_api_pipeline_config /home/msupra/supra-detect/models/research/object_detection/inference_graph/pets.config --reverse_input_channels --data_type FP16 --output_dir /tmp/

 

The error output:

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

 

I have searched the internet but have not been able to find a working solution. Please help me get this self-trained model converted to OpenVINO.

 

Regards

Morne

28 Replies
Shubha_R_Intel
Employee

Dear Supra, Morne,

Where are you getting your Faster R-CNN model from? This definitely works on OpenVINO 2019 R1. Please see the following forum post:

https://software.intel.com/en-us/forums/computer-vision/topic/806587

Thanks,

Shubha

Supra__Morne_
Beginner

Hi Shubha

Thanks for your response. The model is a self-trained model, built with the TensorFlow Object Detection API using the following command:

python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

I attach the pipeline config file that I used as well as the frozen inference graph.

I will also have a look at the link you provided.

Regards

Morne

Supra__Morne_
Beginner

Hi Shubha

I redid the training of my model in TensorFlow and ran mo.py again, with the same error. Below is the command I ran; I attach the debug log of the run.

python mo.py --input_model ~/tensorflow/models/research/object_detection/inference_graph/frozen_inference_graph.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/faster_rcnn_support_api_v1.7.json --tensorflow_object_detection_api_pipeline_config ~/tensorflow/models/research/object_detection/training/faster_rcnn_inception_v2_pets.config --reverse_input_channels --output_dir /tmp/ --log_level=DEBUG

It is really demoralizing when things do not work and finding a solution is so difficult.

Regards

Morne

Shubha_R_Intel
Employee

Dearest Supra, Morne,

I'm sorry that you're feeling demoralized. I've PM'd you. Please send me your frozen model as a *.zip. If your zip file gets rejected because it's too big, then I will email you a folder location to drop your frozen .pb into, so in response to my PM, give me your email address as well. Also, out of curiosity, have you tried faster_rcnn_support.json and faster_rcnn_support_api_v1.10.json? And why did you use --reverse_input_channels? Did you train your model in RGB? We use that flag when we want to run one of our samples (which often read images using OpenCV), since OpenCV returns BGR, not RGB.
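To make the BGR/RGB point concrete, here is a minimal sketch of what --reverse_input_channels effectively bakes into the IR: reversing the channel order of each pixel. This is an illustration only, not OpenVINO code; the function name is hypothetical.

```python
# OpenCV's imread returns pixels in BGR order, while many TensorFlow models
# are trained on RGB input.  --reverse_input_channels folds this swap into
# the converted model so samples can feed OpenCV frames directly.
def reverse_channels(pixel):
    """Turn a [B, G, R] triple into [R, G, B] (and vice versa)."""
    return pixel[::-1]

bgr = [30, 100, 200]          # blue, green, red, as cv2.imread would give
rgb = reverse_channels(bgr)   # red, green, blue
print(rgb)
```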

Looking forward to hearing your response.

Thanks for using OpenVino !

Shubha

 

Supra__Morne_
Beginner

Hi Shubha

Thanks for your response. I have sent you the zip in the PM.

I tried the other json files as well. I used --reverse_input_channels because of the instructions I read on the Intel site for TensorFlow model conversion. I also left it out in other mo.py runs, but with no success.

Regards

Morne

Shubha_R_Intel
Employee

Dear Supra, Morne,

Got it. Thanks for sending me your *.zip. I will investigate this for you.

Thanks for your patience !

Shubha

Supra__Morne_
Beginner

Hi Shubha

Thanks for your help, I really appreciate it.

Regards

Morne

Shubha_R_Intel
Employee

Dearest Supra, Morne,

Thanks for sending the *.zip files. I will look into your issue this week, promise !

Shubha

Supra__Morne_
Beginner

Thanks Shubha

I have an opportunity in South Africa to do a proof of concept on object detection, and going the Intel route looks like the best option at the moment, so I would really like to see how the performance via OpenVINO compares to TensorFlow and vc2.

Regards

Morne

Shubha_R_Intel
Employee

Dear Supra,

A key question I need to know the answer to from you is :

Are you starting from one of the tested and validated Inception V2 TensorFlow models from this list? I understand that you have re-trained your model. I get that. You are likely using the Training Custom Object Detector tutorial to do that, which is fine. And your pipeline config says "faster_rcnn_inception_v2". Are you starting from one of our tested and validated Faster R-CNN Inception V2 models? If you are not, this could well explain the errors you're getting.

Before training your model, please select a model from the list above (these are already pre-trained models). Try the mo_tf.py command. Does it work? If it does, then re-training it should not break anything.

Thanks,

Shubha

Supra__Morne_
Beginner

Hi Shubha

Sorry for only coming back to you now, your comments came through during the night.

I will go through all your comments during the day and provide feedback.

Regards

Morne

Supra__Morne_
Beginner

Hi Shubha

I used the following TensorFlow model:

Faster R-CNN Inception V2 COCO (faster_rcnn_inception_v2_coco_2018_01_28.tar.gz)

I will try the optimizer before training to see what happens and will provide feedback.

Regards

Morne

Shubha_R_Intel
Employee

Dear Supra, Morne: 

Also notice that the latest version of OpenVINO, 2019 R1.0.1, is now available for download, but this has nothing to do with your issue.

OK, the model you are using is a good one. The MO documentation also says to download it from here:

http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

Here is the command I used to convert it. It works!

Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer>python mo_tf.py --input_model c:\users\sdramani\Downloads\faster_rcnn_nas_coco_2018_01_28\frozen_inference_graph.pb  --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --tensorflow_object_detection_api_pipeline_config "c:\users\sdramani\Downloads\faster_rcnn_nas_coco_2018_01_28\pipeline.config"

 

From this post:

https://software.intel.com/en-us/forums/computer-vision/topic/806587

Honestly, when I studied the pipeline.config file you originally sent me, it looked messed up, with lots of things missing. Please take the pipeline.config file from the tensorflow repo and do not change it much! There are a few modifications you have to make according to https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html but the changes you have made are drastic.

If the model converts OK before training, then it should not break after training. I believe you're probably changing the pipeline.config too drastically.
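For concreteness, the tutorial-recommended edits are typically limited to a handful of fields, with everything else left as shipped with the model. The following is a sketch only; the paths are placeholders and the num_classes value assumes a single-label setup like the "jr" label mentioned later in this thread:

```
model {
  faster_rcnn {
    num_classes: 1   # match your label map (e.g. a single "jr" class)
    ...              # leave the rest of the model section untouched
  }
}
train_config {
  fine_tune_checkpoint: "PATH_TO/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  ...
}
train_input_reader {
  tf_record_input_reader { input_path: "PATH_TO/train.record" }
  label_map_path: "PATH_TO/label_map.pbtxt"
}
```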

Let me know what happens -

Shubha

Shubha_R_Intel
Employee

Dear Supra, Morne: 

Indeed this appears to be a bug. I filed a high priority bug ticket on your behalf. Thanks for your patience and again, sorry for the trouble ! Long story short, re-training the aforementioned Tensorflow model and changing the pipeline.config only slightly as the Tensorflow documentation recommends should not break model optimizer, but in this case it did.

Thanks for your patience,

Shubha 

Supra__Morne_
Beginner

Hi Shubha

Thanks for the feedback, I appreciate it. Let's hope for a speedy resolution of the issue.

Regards

Morne 

Supra__Morne_
Beginner

Hi Shubha

Do we have any feedback on the bug yet?

Regards

Morne

Shubha_R_Intel
Employee

Dearest Supra, Morne,

No feedback on the bug yet, but it has already been assigned to a developer.

Thanks for your patience !

Shubha

Supra__Morne_
Beginner

Thanks Shubha

Supra__Morne_
Beginner

Hi Shubha

I tried some more scenarios and managed to convert an ssd_mobilenet model I trained. I used the following command:

python /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py --input_model trained-inference-graphs/output_inference_graph_v3.pb/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config trained-inference-graphs/output_inference_graph_v3.pb/pipeline.config --output_dir /tmp/

The final output was:

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /tmp/frozen_inference_graph.xml
[ SUCCESS ] BIN file: /tmp/frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 34.56 seconds.

 

I then tried to use the new XML file with the Intel-provided object counter in Python:

./object_counter.py -m resources/frozen_inference_graph.xml -l resources/labels.txt -n1 -lp 1 -d MYRIAD

But I get this error:

Initializing plugin for MYRIAD device...
Reading IR...
image_tensor <openvino.inference_engine.ie_api.InputInfo object at 0x7f52335cb3c0>
input_blob image_tensor
Traceback (most recent call last):
  File "./object_counter.py", line 558, in <module>
    sys.exit(main() or 0)
  File "./object_counter.py", line 342, in main
    print(net.inputs['data'].shape)
KeyError: 'data'
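
The KeyError looks like it comes from the sample hardcoding net.inputs['data'], while models converted from the TF Object Detection API name their input 'image_tensor' (as the "input_blob image_tensor" line above shows). A hedged sketch of looking the name up instead of hardcoding it, with net.inputs simulated by a plain dict (in the IE Python API it maps blob names to InputInfo objects):

```python
# The sample assumes the input blob is named 'data', but this converted
# model exposes 'image_tensor'.  Querying the first declared input name
# avoids the KeyError regardless of what the model calls it.
inputs = {"image_tensor": object()}   # stand-in for net.inputs

input_blob = next(iter(inputs))       # first (and only) input name
print(input_blob)                     # image_tensor
info = inputs[input_blob]             # use the looked-up name, not 'data'
```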

 

When I trained the model I used "jr" as the label, so I assume I should add jr to resources/labels.txt, and that in resources/conf.txt I should choose a video or webcam and then jr, as in the sample below:

0 jr

Are my assumptions correct?

Regards

Morne

 

Supra__Morne_
Beginner

Hi Shubha

Just an update on the previous message: I am using the store-traffic-monitor Python example.

 

Regards

Morne
