Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error Porting Base faster_rcnn_inception_v2_coco_2018_01_28 Model

A__Siva
Beginner

Error porting a TensorFlow model

1. Downloaded the base model from http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

2. The OpenVINO documentation lists this model as supported: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html#inpage-nav-2-1

3. Converted the model with the Model Optimizer using the command:
python mo_tf.py --input_meta_graph E:\faster_rcnn_inception_v2_coco_2018_01_28\model.ckpt.meta --log_level=DEBUG

Error

Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 312, in main
    return driver(argv)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 128, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.MIDDLE_REPLACER)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.


Please tell me if I am missing any steps here. Is this OS-dependent, i.e. does it work on Linux only? I tried with the OpenVINO 2019 April build on Windows. Please give exact commands if it works for you, rather than generic comments. I have been working on this for the past few days. I believe you don't need to freeze the graph here; there are multiple options to convert, and I thought it would work with the checkpoint. :( :( Is there any direct product support to sort out this issue? I may be missing something, but I haven't gotten a clear response to solve this over the past few days.

 

5 Replies
Luis_at_Intel
Moderator

Based on your command "python mo_tf.py --input_meta_graph E:\faster_rcnn_inception_v2_coco_2018_01_28\model.ckpt.meta --log_level=DEBUG", I can see that some flags/parameters are missing. I've downloaded the faster_rcnn_inception_v2_coco_2018_01_28 model you linked and extracted it in my Downloads directory.

This is how I went about converting the TensorFlow* model using the mo_tf.py script, with these flags/parameters specific to this model:

  • --data_type {FP16,FP32,half,float}
  • --tensorflow_object_detection_api_pipeline_config "<path-to>\faster_rcnn_inception_v2_coco_2018_01_28\pipeline.config"
  • --tensorflow_use_custom_operations_config "<path-to>\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json"
  • --input_model "<path-to>\faster_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb"

For example:

python mo_tf.py --data_type=FP32 --tensorflow_object_detection_api_pipeline_config "C:\Users\user\Downloads\faster_rcnn_inception_v2_coco_2018_01_28\pipeline.config" --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --input_model "C:\Users\user\Downloads\faster_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb"

If it succeeds, the corresponding IR files (frozen_inference_graph.bin and frozen_inference_graph.xml) will be found in the directory where the command was executed; a quick load check is sketched below.
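To quickly confirm the generated IR is valid, you can load it with the Inference Engine. This is only a minimal sketch assuming the 2019-era Python API (openvino.inference_engine); the file names are the defaults produced by the command above, so adjust the paths as needed:

from openvino.inference_engine import IECore, IENetwork

# Default IR file names generated by mo_tf.py (adjust paths as needed).
ie = IECore()
net = IENetwork(model="frozen_inference_graph.xml",
                weights="frozen_inference_graph.bin")

# Print input/output layer names and shapes to confirm the conversion looks sane.
for name, info in net.inputs.items():
    print("input :", name, info.shape)
for name, info in net.outputs.items():
    print("output:", name, info.shape)

# Loading the network on CPU verifies that all layers are supported by the plugin.
exec_net = ie.load_network(network=net, device_name="CPU")
print("IR loaded successfully")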

Note: for more information on TensorFlow*-specific parameters, run "python mo_tf.py --help".

 

Regards,

@Luis_at_Intel

Horn__Alexander
Beginner

Thanks Luis!

Indeed, I just noticed this as well. The Model Optimizer works as expected if one adds all the TF flags per the documentation:

/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer$ python3 ./mo_tf.py --input_model /tmp/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config /tmp/faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config 
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/tmp/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
	- Path for generated IR: 	/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/tmp/faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/./extensions/front/tf/faster_rcnn_support.json
Model Optimizer version: 	1.5.12.49d067a0

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /opt/intel/computer_vision_sdk_2018.5.455/deployment_tools/model_optimizer/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 41.98 seconds. 
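As the warning in the log notes, the Preprocessor block is removed and the IR is generated with a fixed input size (600 x 600 by default for this model). If a different size is needed, the same command can be extended with --input_shape. A hedged example, where the [1,600,1024,3] values are purely illustrative and not taken from this thread:

python3 ./mo_tf.py --input_model /tmp/faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --tensorflow_use_custom_operations_config ./extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config /tmp/faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config --input_shape [1,600,1024,3]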

 

 

A__Siva
Beginner

 

 

 

Thanks, this works fine. I then did a custom training run for a single class.

The custom single-class model failed for both:

http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz
http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

The steps were:
1. Try a vanilla conversion of the base model into OpenVINO
2. Retrain for custom detection (6 to 8 hours)
3. Validate the model in TensorFlow (it makes some decent detections)
4. Generate frozen_inference_graph.pb in TensorFlow by exporting the checkpoint (see the export sketch after this list)
5. Generate the OpenVINO IR for the custom model
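For step 4, the TensorFlow Object Detection API ships an export script. A rough sketch of the export command, where the paths and the checkpoint number are placeholders rather than values from this post:

python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path <path-to>/pipeline.config --trained_checkpoint_prefix <path-to>/model.ckpt-XXXX --output_directory <path-to>/export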
 

Can you please check the attached pipeline configuration and the logs generated by the Model Optimizer?

python mo_tf.py --data_type=FP32 --tensorflow_object_detection_api_pipeline_config "E:\faster_rcnn_inception_v2_coco_2018_01_28\pipeline.config" --tensorflow_use_custom_operations_config "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --input_model "E:\faster_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb"

Error

======

Traceback (most recent call last):
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 312, in main
    return driver(argv)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 127, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "C:\Intel\openvino_2019.1.133\deployment_tools\model_optimizer\mo\utils\class_registration.py", line 184, in apply_replacements
    )) from err
mo.utils.error.Error: Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

Now I am on this thread - https://github.com/opencv/dldt/issues/28

The issue and symptoms mentioned there are almost the same as mine. Please let me know if any changes are needed, or if there are issues in the code / pipeline file.

 

Shubha_R_Intel
Employee

Dearest A__Siva,

As explained in your other post, the bug you are seeing is a known one which I've reproduced and filed. It's currently being addressed by the OpenVINO development team.

Thanks for your patience,

Shubha

arunav
Beginner

Dear all,

I am very new to the community and I am also getting the following error. I tried the suggestions mentioned above, but I am not able to find the exact cause.

Would it be possible for you to help?

I am attaching my debug logs and the command that I am running with mo_tf.py.

Command: 

python3 mo_tf.py --input_meta_graph ~/ckpt/ckpt.meta --log_level=DEBUG

The logs are attached.

Problem: 

mo.utils.error.Error: Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "AssignVariableOp_8" node. 

My meta file and model snapshot are also attached.

Model.zip

 

Kindly help 

 

Kind Regards

Arun
