Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Error creating IR for TensorFlow Object Detection

Kaliyappan__Malathi

Hi,

I'm using Ubuntu 18.04 on an Intel® Core™ i3-8109U CPU @ 3.00GHz × 4.

My OpenVINO version is R3 (openvino_2019.3.376).

I tried to create an IR for a TensorFlow Object Detection model, using faster_rcnn_inception_v2_coco_2018_01_28. I followed this guide: http://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

When I run

sudo python3 mo_tf.py \
--input_model=/home/ioz/AWS/Helmet_Project/research/object_detection/vehicle_inference_graph/frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
--tensorflow_object_detection_api_pipeline_config /home/ioz/AWS/Helmet_Project/research/object_detection/training/pipeline.config \
--reverse_input_channels

I got this error:

[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIProposalReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

Please check the attached images1.png.

I tried changing the .json files (faster_rcnn_support_api_v1.7.json, faster_rcnn_support_api_v1.10.json, faster_rcnn_support_api_v1.13.json, faster_rcnn_support_api_v1.14.json), but the error was the same.

Can you please help me solve this?

Thanks in Advance.

JesusE_Intel
Moderator

Hi Kaliyappan, Malathi,

I successfully converted the faster_rcnn_inception_v2_coco_2018_01_28 model using the following command:

python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config \
--reverse_input_channels \
--batch 1

I used the faster_rcnn_support.json from OpenVINO and the pipeline.config included in the link above. Could you give it a try and let me know if it works for you? I am also using Ubuntu 18.04 with the latest OpenVINO package (2019.3.376).

Regards,

Jesus

Kaliyappan__Malathi

Hi Jesus E, thanks for your reply.

I tried the command you gave:

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
--tensorflow_object_detection_api_pipeline_config /home/ioz/AWS/Helmet_Project/research/object_detection/training/pipeline.config \
--reverse_input_channels \
--batch 1

But I still got the same error:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/ioz/AWS/Helmet_Project/research/object_detection/vehicle_inference_graph/frozen_inference_graph.pb
	- Path for generated IR: 	/home/ioz/AWS/Helmet_Project/research/object_detection/vehicle_inference_graph/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	True
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/home/ioz/AWS/Helmet_Project/research/object_detection/training/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json
Model Optimizer version: 	2019.3.0-408-gac8584cb7
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/ioz/.local/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIProposalReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

I trained the model on a custom dataset using the faster_rcnn_inception_v2_coco_2018_01_28 pre-trained model and created the inference graph. Is that a problem?

Thanks in Advance.

JesusE_Intel
Moderator

Hi Malathi,

The faster_rcnn_support.json is only for the pre-trained model from the Open Model Zoo. Since you retrained the network, you will need to use one of the following configuration files. Make sure to pick the one that matches the TensorFlow Object Detection API version used to train the model (see the example command after the list).

#Directory /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf

faster_rcnn_support_api_v1.7.json
faster_rcnn_support_api_v1.10.json
faster_rcnn_support_api_v1.13.json
faster_rcnn_support_api_v1.14.json
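
As a rough sketch (assuming, purely for illustration, that the model was trained with Object Detection API v1.14 and that frozen_inference_graph.pb and pipeline.config sit in the current directory), checking the TensorFlow version and running the conversion would look like this:

# Print the TensorFlow version of the training environment; pick the
# faster_rcnn_support_api_v1.*.json closest to the Object Detection API release used.
python3 -c "import tensorflow as tf; print(tf.__version__)"

# Hypothetical conversion command using the v1.14 configuration file.
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.14.json \
--tensorflow_object_detection_api_pipeline_config pipeline.config \
--reverse_input_channels \
--batch 1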

Additional information can be found here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html#how_to_convert_a_model

Regards,

Jesus

Kaliyappan__Malathi

Hi Jesus E,

I'm using OpenVINO toolkit R3 (openvino_2019.3.376).

I tried all the .json files mentioned above, but the error was the same as in image1 (please find the attached image).

These are the steps I followed:

1) I customized the pipeline.config

2) Then I trained the Faster R-CNN model using this command:

python3 train.py --logtostderr --train_dir=object_detection/training/ --pipeline_config_path=object_detection/training/pipeline.config

3) After completing the training, I created the .pb file from the .ckpt files using this command:

python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path object_detection/training/pipeline.config \
--trained_checkpoint_prefix object_detection/training/model.ckpt-84211 \
--output_directory object_detection/New_inference_graph

While converting the .pb to IR, I got the error.

Can you please help me with this?

Thanks in advance.
JesusE_Intel
Moderator

Hi Malathi,

All the steps so far look correct. Could you share your model and the necessary files so I can walk through your steps?

I will send you a private message. 

Regards,

Jesus

JesusE_Intel
Moderator

Hi Malathi,

Thanks for sending the model over PM. I took a look and was not able to convert the model to IR format. I am having the development team take a look, and I will let you know what we find out.

Regards,

Jesus

Kaliyappan__Malathi

Hi Jesus E,

Can you please let me know if there is any update on this issue?

 

Thanks in Advance

JesusE_Intel
Moderator

Hi Malathi,

The development team took a look at your model and found several significant differences from the original model; it would not be an easy task to enable it. We do have a guide to train a Faster R-CNN to detect fruits. Could you try following the guide with your own objects and retraining?

https://software.intel.com/en-us/articles/fruit-classification-prototype

I walked through the guide and it worked for me; I used OpenVINO 2019 R3.1 to deploy and the included TensorFlow 1.14 version to train the network.
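
For reference, a minimal environment sketch for the retraining step (the exact TensorFlow pin below is an assumption based on the version mentioned above; the guide itself covers the full setup):

# Create an isolated Python environment and install the TensorFlow 1.14 release
# used for training in the guide.
python3 -m venv tf114_env
source tf114_env/bin/activate
pip install tensorflow==1.14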

Regards,

Jesus

Kaliyappan__Malathi

Hi Jesus E.,

Thanks for your reply.

I have referred to the link you gave. I need a clarification about the training process.

These are the steps I have followed for training

1) For training

 python3 train.py --logtostderr --train_dir=object_detection/training/ --pipeline_config_path=object_detection/training/faster_rcnn_inception_v2_coco.config

2) For creating inference graph

python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path object_detection/training/faster_rcnn_inception_v2_coco.config \
--trained_checkpoint_prefix object_detection/training/model.ckpt-116105 \
--output_directory object_detection/helmet_inference_graph

Please check the config file attached below

In that document, they use the ssd_mobilenet_v1_pets.config file for training:

(venv)$ python3 train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_pets.config

But while creating the inference graph, they used faster_rcnn_inception_v_coco.config:

(venv)$ python3 export_inference_graph.py \
	    --input_type  image_tensor \
	    --pipeline_config_path training/faster_rcnn_inception_v_coco.config \
	    --trained_checkpoint_prefix  training/model.ckpt-56129 \
	    --output_directory  faster_rcnn_inception_inference_graph

Is that a correct process?
 

Do I have to change the config file to be like ssd_mobilenet_v1_pets.config?

 

Thanks in advance

Kaliyappan__Malathi

Hi Jesus E.,

I followed the link you gave and got the result.

Thank you :)

JesusE_Intel
Moderator

Awesome! Thanks for letting me know.

Regards,

Jesus
