A__Siva
Beginner

Porting a Custom Trained Faster-RCNN-Inception-V2 Tensorflow model in OpenVino

OpenVINO on Windows, latest April release (2019 R1)

  1. Trained a custom TensorFlow object detection model based on Faster-RCNN-Inception-V2
  2. The resulting model worked fine and was able to detect objects
  3. Froze the model using the command:

python C:\Users\AppData\Local\Continuum\anaconda3\pkgs\tensorflow-base-1.9.0-eigen_py36h45df0d8_0\Lib\site-packages\tensorflow\python\tools\freeze_graph.py --input_meta_graph E:\Model-70\model.ckpt-70.meta --output_node_names "save/restore_all" --output_graph E:\my_model_frozen.pb --input_checkpoint E:\Model-70\model.ckpt-70 --input_binary=true

This step was successful

  4. However, on running the Model Optimizer:

python mo_tf.py --input_model E:\Car_Detection\McDonalds\my_model_frozen.pb 

[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
[ 2019-05-09 11:37:21,216 ] [ DEBUG ] [ main:318 ]  Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 166, in partial_infer
    node_name)
mo.utils.error.Error: Not all output shapes were inferred or fully defined for node "image_tensor".
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\extensions\middle\PartialInfer.py", line 31, in find_and_replace_pattern
    partial_infer(graph)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\middle\passes\infer.py", line 196, in partial_infer
    refer_to_faq_msg(38)) from err
mo.utils.error.Error: Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 312, in main
    return driver(argv)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.

Model Optimizer version:        2019.1.0-341-gc9b66a2
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.input_cut.InputCut'>): Graph contains 0 node after executing <class 'extensions.front.input_cut.InputCut'>. It considered as error because resulting IR will be empty which is not usual

I also tried passing explicit input shape, mean, and scale values:

python mo_tf.py --input_model E:\my_model_frozen.pb --input_shape [1,224,224,3] --mean_values [1024,1024,1024] --scale_values [128,128,128] --log_level=DEBUG

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.user_data_repack.UserDataRepack'>): No or multiple placeholders in the model, but only one shape is provided, cannot set it.
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #32.

  • Question #1: Does OpenVINO support custom trained Faster-RCNN-Inception-V2 models?
  • Question #2: What are the next steps, or is there a point of contact at Intel to investigate this?
  • Question #3: Is there a summary of the TensorFlow versions and object detection models that OpenVINO supports (specifically, whether Faster-RCNN-Inception-V2 is supported)?
JesusE_Intel
Moderator

Hi Siva,

The OpenVINO toolkit supports the frozen Faster R-CNN Inception V2 COCO model from the TensorFlow Object Detection Model Zoo, so you should be able to run your custom trained model. The Model Optimizer requires a couple of additional parameters to convert your frozen .pb file to IR format.

Take a look at the How to Convert a model section in the Model Optimizer Developer Guide.

In short, you will need to use the --tensorflow_use_custom_operations_config parameter, and you may need to modify the .json file to match your custom model. You will also need to pass --tensorflow_object_detection_api_pipeline_config with a reference to your pipeline.config.
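For reference, the Object Detection API also ships its own export script, which produces a frozen_inference_graph.pb with the image_tensor input that the Model Optimizer expects. A sketch, assuming a standard TensorFlow models/research checkout; the paths are illustrative, taken from this thread:

```shell
:: Run from the models\research directory of the TensorFlow models repository
python object_detection\export_inference_graph.py ^
    --input_type image_tensor ^
    --pipeline_config_path E:\faster_rcnn_inception_v2_pets.config ^
    --trained_checkpoint_prefix E:\Model-70\model.ckpt-70 ^
    --output_directory E:\exported_model
:: The frozen graph is written to E:\exported_model\frozen_inference_graph.pb
```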

Please give this a try and let me know if you run into other issues. 

Regards,

Jesus

A__Siva
Beginner


Freeze Graph
=============
python C:\Users\AppData\Local\Continuum\anaconda3\pkgs\tensorflow-base-1.9.0-eigen_py36h45df0d8_0\Lib\site-packages\tensorflow\python\tools\freeze_graph.py --input_meta_graph E:\Model-70\model.ckpt-70.meta --output_node_names "save/restore_all" --output_graph E:\my_model_frozen.pb --input_checkpoint E:\Model-70\model.ckpt-70 --input_binary=true

Step successful: my_model_frozen.pb generated

Model Optimizer
==================
python mo_tf.py --input_model E:\my_model_frozen.pb --tensorflow_use_custom_operations_config C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.10.json --tensorflow_object_detection_api_pipeline_config E:\faster_rcnn_inception_v2_pets.config --reverse_input_channels --log_level=DEBUG

Error
======
Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 312, in main
    return driver(argv)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 127, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.input_cut.InputCut'>): Graph contains 0 node after executing <class 'extensions.front.input_cut.InputCut'>. It considered as error because resulting IR will be empty which is not usual


 

Shubha_R_Intel
Employee

Dear Siva,

As I explained in your previous forum post, there is a bug in the Model Optimizer's handling of custom trained TensorFlow Object Detection API models. I realize that this is quite unfortunate, and at this time there is no workaround for the problem. However, the specific error you are getting here is different from the one in your previous post, so I will file a bug for this one too. Regarding Question #3: yes, there is a summary of supported TensorFlow models at the link below:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_T...

Thanks for your patience.

Sincerely,

Shubha


A__Siva
Beginner

Thank you for the response. Please provide a fix at the earliest and keep us posted; we need it urgently!

Shubha_R_Intel
Employee

Dear Siva,

I have attached a zip file (containing *.json files) which fixes the issue. Please refer to the post below to find the attachment:

https://software.intel.com/en-us/forums/computer-vision/topic/809798

Thanks for your patience!

Shubha

Karandeep_Singh_D_
Student Ambassador

Hey, somehow my custom faster_rcnn_inception_v2 model runs on the NCS (inference takes about 5 seconds; output classes = 1; input size = 600x600).

But when I load it on the NCS2, it gets stuck at plugin load.

Shubha_R_Intel
Employee

Dear Karandeep Singh D,

Have you tried the latest OpenVINO 2019 R2, which was just released last week?

Please try and report back.

Thanks !

Shubha


Hi,

I am using OpenVINO R1.1 to convert a custom trained Faster R-CNN Inception V2 model.

I used the Model Zoo model as a checkpoint, with a reduced number of classes. I am using TF 1.14 for training and TF 1.15 in the OpenVINO environment.

I am able to generate the .bin and .xml files. However, when I run inference using the lines below,

from openvino.inference_engine import IENetwork, IEPlugin

# Raw strings keep backslashes literal (otherwise '\f' in the path becomes a form feed)
plugin_dir = r'C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\inference_engine\bin\intel64\Release'
model_xml = r'.\frozen_inference_graph.xml'
model_bin = r'.\frozen_inference_graph.bin'

plugin = IEPlugin("CPU", plugin_dirs=plugin_dir)
plugin.add_cpu_extension(r"C:\Program Files (x86)\IntelSWTools\openvino_2019.1.148\deployment_tools\inference_engine\bin\intel64\Release\cpu_extension_avx2.dll")

net = IENetwork(model=model_xml, weights=model_bin)
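A side note on Windows paths in Python: in a regular (non-raw) string a backslash starts an escape sequence, so a path like '.\frozen_inference_graph.xml' silently becomes a different string ('\f' turns into a form-feed character) and the file is never found. A quick illustration of the pitfall:

```python
# '\f' inside a normal string literal is an escape: it becomes a form-feed character
broken = '.\frozen_inference_graph.xml'
# A raw string (r'...') keeps the backslash literal, so the path is what you typed
fixed = r'.\frozen_inference_graph.xml'

print('\f' in broken)   # True: the "path" now contains a form feed
print(broken == fixed)  # False: the two strings differ
```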

I get the below error:

Traceback (most recent call last):
  File "infer.py", line 15, in <module>
    net = IENetwork(model=model_xml, weights=model_bin)
  File "ie_api.pyx", line 271, in openvino.inference_engine.ie_api.IENetwork.__cinit__
RuntimeError: Error reading network: in Layer FirstStageFeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/Relu: trying to connect an edge to non existing output port: 8.5

 

I would be grateful if you could help.
