Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all computer vision-related topics on Intel® platforms.

Cannot convert custom Faster R-CNN model using OpenVINO R3

Abdul_Aziz__Nurul_F1

Hi,

I am trying to convert a custom Faster R-CNN Inception V2 COCO model using OpenVINO R3 (2019.3.0-408-gac8584cb7). I'm running this command:

 

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model=<frozen_inference_graph_dir>frozen_inference_graph.pb  \
--output=detection_boxes,detection_classes,detection_features,detection_multiclass_scores,detection_scores,raw_detection_boxes,raw_detection_scores \
--model_name=faster_rcnnv2 \
--tensorflow_use_custom_operations_config=/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json \
--tensorflow_object_detection_api_pipeline_config=<pipeline_file_dir>faster_rcnn_inception_v2_coco.config --data_type=FP16

 

and the errors I received are:

 

[ ERROR ]  Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIProposalReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ]  Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.

 

I have already taken a look at this thread: https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/809407, and I still cannot convert the model.
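
For reference, here is a minimal sketch (assuming TensorFlow 1.x, which the Object Detection API export scripts use) of how I can list the proposal-related node names in my frozen graph, to compare them against the scopes that faster_rcnn_support.json (or one of the versioned faster_rcnn_support_api_v*.json files shipped with the Model Optimizer) expects to match:

import tensorflow as tf

# Load the frozen graph (path assumed; adjust to your frozen_inference_graph.pb).
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print nodes whose names look related to the proposal stage. The filter is
# only a guess; widen it if nothing prints.
for node in graph_def.node:
    if "Proposal" in node.name or "CropAndResize" in node.name:
        print(node.name, node.op)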

Thank you.

Luis_at_Intel
Moderator

Hi Abdul Aziz, Nurul Fatin Nadiah,

Thanks for reaching out. I was able to convert the faster_rcnn_inception_v2_coco model from the TensorFlow repo with the command below. I would suggest omitting the --output flag from your command and including the --batch 1 flag as well, just to see if it converts successfully. If that doesn't work, could you share your .pb file together with your pipeline.config? You can send them via PM in case you don't want to post them publicly.

 

python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py" \
-m "C:\Users\user\843948-frcnn-inceptionv2-coco\faster_rcnn_inception_v2_coco_2018_01_28\frozen_inference_graph.pb" \
--model_name=faster_rcnnv2 \
--tensorflow_use_custom_operations_config="C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" \
--tensorflow_object_detection_api_pipeline_config="C:\Users\user\843948-frcnn-inceptionv2-coco\faster_rcnn_inception_v2_coco_2018_01_28\pipeline.config" \
--data_type=FP16 \
--batch 1
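
Once the Model Optimizer finishes, a quick sanity check is to load the generated IR and print its input/output names. This is only a minimal sketch, assuming the 2019 R3 Python API and that the IR files were written as faster_rcnnv2.xml/.bin in the directory where the command was run:

from openvino.inference_engine import IENetwork

# File names assumed from --model_name=faster_rcnnv2; adjust paths as needed.
net = IENetwork(model="faster_rcnnv2.xml", weights="faster_rcnnv2.bin")
print("Inputs:", list(net.inputs.keys()))
print("Outputs:", list(net.outputs.keys()))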

 

Regards,

Luis

 
