I have been trying for days now to convert a TensorFlow graph for use with the Neural Compute Stick 2.
I even re-created a VM to use the latest version of the OpenVINO toolkit, dated 01 April 2019.
I ran the following command:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model inference_graph/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.10.json --tensorflow_object_detection_api_pipeline_config /home/msupra/supra-detect/models/research/object_detection/inference_graph/pets.config --reverse_input_channels --data_type FP16 --output_dir /tmp/
The error output:
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
I have searched the internet but have not been able to find a working solution. Please help me get this self-trained model converted to OpenVINO.
Dear Supra, Morne,
Where are you getting your Faster R-CNN model? This definitely works on OpenVINO 2019 R1. Please see the following forum post:
Thanks for your response. The model is a self trained model via the tensorflow object detection api using the following command:
python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config
I attach the pipeline config file that I used as well as the frozen inference graph.
I will also have a look at the link you provided.
I redid the training of my model on TensorFlow, ran mo.py again, and got the same error. Below is the command I ran; I attach the debug log of the run.
python mo.py --input_model ~/tensorflow/models/research/object_detection/inference_graph/frozen_inference_graph.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/faster_rcnn_support_api_v1.7.json --tensorflow_object_detection_api_pipeline_config ~/tensorflow/models/research/object_detection/training/faster_rcnn_inception_v2_pets.config --reverse_input_channels --output_dir /tmp/ --log_level=DEBUG
It is really demoralizing when things do not work and finding a solution is so difficult.
Dearest Supra, Morne,
I'm sorry that you're feeling demoralized. I've PM'd you. Please send me your frozen model as a *.zip. If your zip file gets rejected because it's too big, then I will email you a folder location to drop your frozen pb into. So in response to my PM, please give me your email address as well. Also, out of curiosity, have you also tried faster_rcnn_support.json and faster_rcnn_support_api_v1.10.json? And why did you use --reverse_input_channels? Did you train your model in RGB? We use that flag when we want to run one of our samples (which often read images using OpenCV), since OpenCV returns BGR, not RGB.
Looking forward to hearing your response.
Thanks for using OpenVino !
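To illustrate the BGR/RGB point above: this is just a sketch with a toy NumPy array standing in for a decoded frame (cv2.imread returns the same channel layout). The --reverse_input_channels flag bakes the equivalent swap into the converted IR.

```python
import numpy as np

# A toy 2x2 "image" in BGR channel order, as OpenCV's imread would return it.
img_bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                    [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# Reverse the last axis to convert BGR -> RGB. With --reverse_input_channels,
# the Model Optimizer embeds this swap in the IR, so an app can feed OpenCV's
# BGR frames directly to a model trained on RGB inputs.
img_rgb = img_bgr[..., ::-1]

print(img_rgb[0, 0])  # B and R swapped: [0 0 255]
```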
Thanks for your response. I have sent you the zip in the PM.
I used the other json files as well. I used --reverse_input_channels because of instructions I read on the Intel site for TensorFlow model conversion. I also excluded it in other mo.py runs, but with no success.
I have an opportunity in South Africa to do a proof of concept on object detection, and going the Intel route looks like the best option at the moment, so I would really like to see how the performance via OpenVINO compares to TensorFlow and vc2.
A key question I need you to answer is:
Are you starting from one of the tested and validated Inception V2 TensorFlow models from this list? I understand that you have re-trained your model. I get that. You are likely using the Training Custom Object Detector tutorial to do that, which is fine. And your pipeline config says "faster_rcnn_inception_v2". Are you starting from one of our tested and validated Faster R-CNN Inception V2 models? If you are not, this could well explain the errors you're getting.
Before training your model, please select a model from the list above (these are already pre-trained models). Try the mo_tf.py command. Does it work? If it does, then re-training it should not break anything.
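As a sanity check, that workflow might look like the following shell sketch (the download URL is the standard TensorFlow model zoo location for this model; the mo_tf.py path and output directory are illustrative, so adjust them to your install):

```shell
# Download the pre-trained (not yet re-trained) model from the model zoo.
wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz
tar -xzf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

# Try converting the untouched model first. If this succeeds, the base model
# is fine and any later failure points at the re-training step (most likely
# the pipeline.config edits).
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
    --tensorflow_object_detection_api_pipeline_config faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config \
    --output_dir /tmp/
```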
Sorry for only coming back to you now, your comments came through during the night.
I will go through all your comments during the day and provide feedback.
I used the following tensorflow model:
Faster R-CNN Inception V2 COCO (faster_rcnn_inception_v2_coco_2018_01_28.tar.gz)
I will try the optimizer before training to see what happens, and I will provide feedback.
Dear Supra, Morne:
Also notice that the latest version of OpenVino is now available for download 2019 R.1.0.1 - but this has nothing to do with your issue.
OK the model you are using is a good one. The MO documentation says to download from here also :
Here is the command I used to convert it. It works !
Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer>python mo_tf.py --input_model c:\users\sdramani\Downloads\faster_rcnn_nas_coco_2018_01_28\frozen_inference_graph.pb --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --tensorflow_object_detection_api_pipeline_config "c:\users\sdramani\Downloads\faster_rcnn_nas_coco_2018_01_28\pipeline.config"
From this post:
Honestly when I studied the pipeline.config file you originally sent me, it looked messed up. Lots of things were missing. Please take the pipeline.config file from the tensorflow repo. Do not change it much ! There are a few modifications you have to make according to https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html but the changes you have made are drastic.
If the model converts OK before training then it should not break after training. I believe that you're probably changing the pipeline.config too drastically.
Let me know what happens -
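For reference, the tutorial linked above only calls for a handful of pipeline.config edits. A sketch of the fields that normally change is below; all paths and the class count here are placeholders, not your actual values, and everything not shown should stay as shipped in the repo's sample config:

```
model {
  faster_rcnn {
    num_classes: 1   # your number of custom classes
    ...
  }
}
train_config {
  fine_tune_checkpoint: "pre-trained-model/model.ckpt"   # from the downloaded model
  ...
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader { input_path: "annotations/train.record" }
}
```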
Dear Supra, Morne:
Indeed this appears to be a bug. I filed a high priority bug ticket on your behalf. Thanks for your patience and again, sorry for the trouble ! Long story short, re-training the aforementioned Tensorflow model and changing the pipeline.config only slightly as the Tensorflow documentation recommends should not break model optimizer, but in this case it did.
Thanks for your patience,
I tried some more scenarios and managed to optimize an ssd_mobilenet model I trained. I used the following command:
python /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/mo.py --input_model trained-inference-graphs/output_inference_graph_v3.pb/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.1.094/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config trained-inference-graphs/output_inference_graph_v3.pb/pipeline.config --output_dir /tmp/
The final output was:
[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /tmp/frozen_inference_graph.xml
[ SUCCESS ] BIN file: /tmp/frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 34.56 seconds.
I then tried to use the new XML file with the Intel-provided object counter in Python:
./object_counter.py -m resources/frozen_inference_graph.xml -l resources/labels.txt -n1 -lp 1 -d MYRIAD
But get error:
Initializing plugin for MYRIAD device...
image_tensor <openvino.inference_engine.ie_api.InputInfo object at 0x7f52335cb3c0>
Traceback (most recent call last):
File "./object_counter.py", line 558, in <module>
sys.exit(main() or 0)
File "./object_counter.py", line 342, in main
When I trained the model I had "jr" as the label, so I assume that in resources/labels.txt I should add jr, and that in resources/conf.txt I should choose a video or webcam and then jr, as in the sample below:
Are my assumptions correct?
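For reference, my assumption is that labels.txt is read one class name per line (a common convention in the detection samples), in the order the class indices were assigned during training. A minimal sketch of that assumption, with hypothetical file contents:

```python
import os
import tempfile

# Hypothetical labels.txt containing my single training label, one per line.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("jr\n")
    path = f.name

# Read it back the way a detection sample typically maps class ids to names.
with open(path) as f:
    labels = [line.strip() for line in f if line.strip()]
os.remove(path)

print(labels)  # ['jr']
```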