I am trying to convert a TensorFlow model to OpenVINO IR files on Windows 10. I downloaded a pre-trained model from the following link:
http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_2018_01_28.tar.gz
Then I tried two different commands.
1st command:
(tensorflow_cpu) C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer>python mo_tf.py --input_model="C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\frozen_inference_graph.pb" --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --output_dir "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28" --tensorflow_object_detection_api_pipeline_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config" --reverse_input_channels
Unfortunately, I got the following error message. How can I solve this problem?
1st error:
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] [Errno 13] Permission denied: 'C:\\Program Files (x86)\\IntelSWTools\\openvino_2019.3.379\\deployment_tools\\open_model_zoo\\models\\public\\faster_rcnn_resnet101_coco_2018_01_28\\frozen_inference_graph.bin'
[ ERROR ] Traceback (most recent call last):
File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\main.py", line 298, in main
return driver(argv)
File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\main.py", line 247, in driver
is_binary=not argv.input_model_is_text)
File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 237, in tf2nx
meta_info=meta_info)
File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\pipeline\common.py", line 132, in prepare_emit_ir
serialize_constants(graph, bin_file)
File "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\mo\back\ie_ir_ver_2\emitter.py", line 41, in serialize_constants
with open(bin_file_name, 'wb') as bin_file:
PermissionError: [Errno 13] Permission denied: 'C:\\Program Files (x86)\\IntelSWTools\\openvino_2019.3.379\\deployment_tools\\open_model_zoo\\models\\public\\faster_rcnn_resnet101_coco_2018_01_28\\frozen_inference_graph.bin'
[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------
2nd command:
(tensorflow_cpu) C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer>python mo_tf.py --input_model="C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\frozen_inference_graph.pb" --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.7.json" --output_dir "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28" --tensorflow_object_detection_api_pipeline_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config" --reverse_input_channels
2nd error:
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
Specify the "--input_shape" command line parameter to override the default shape which is equal to (600, 600).
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Failed to match nodes from custom replacement description with id 'ObjectDetectionAPIProposalReplacement':
It means model and custom replacement description are incompatible.
Try to correct custom replacement description according to documentation with respect to model node names
[ ERROR ] Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
Exception occurred during running replacer "ObjectDetectionAPIDetectionOutputReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPIDetectionOutputReplacement'>): Found the following nodes '[]' with name 'crop_proposals' but there should be exactly 1. Looks like ObjectDetectionAPIProposalReplacement replacement didn't work.
>>> PermissionError: [Errno 13] Permission denied: 'C:\\Program Files (x86)\\IntelSWTools\\openvino_2019.3.379\\deployment_tools\\open_model_zoo\\models\\public\\faster_rcnn_resnet101_coco_2018_01_28\\frozen_inference_graph.bin'
You are trying to write the output graph to a directory where you have no write access.
Either set --output_dir to any other directory (e.g. your desktop) or run the command as administrator.
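For example, here is a minimal sketch of your 1st command with the IR written to a user-writable folder instead (the folder C:\Users\<your_user>\Documents\IR is only an assumption; any folder you can write to will do):
python mo_tf.py --input_model="C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\frozen_inference_graph.pb" --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --output_dir "C:\Users\<your_user>\Documents\IR" --tensorflow_object_detection_api_pipeline_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config" --reverse_input_channels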
Hi Poca,
I already tried changing the output directory to the D: drive and running the command as administrator. Unfortunately, I got another error.
my command:
(tensorflow_cpu) C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer>python mo_tf.py --input_model="C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\frozen_inference_graph.pb" --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --output_dir "D:\" --tensorflow_object_detection_api_pipeline_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config" --reverse_input_channels
my error:
mo_tf.py: error: unrecognized arguments: Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config --reverse_input_channels
Do you have any idea? Thanks in advance.
Hi YAKNO, MARLINA,
It seems like the issue might be due to a space or newline character in one of the file paths. If you are copying the paths from a document, please re-run the command and type them in manually. I have tried to convert the model using all the arguments that you specified, and it is working fine.
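For reference, a minimal sketch of the command with the paths typed in by hand and the IR written to a sub-folder such as D:\IR (the folder name is just an assumption). Using a sub-folder also avoids ending the quoted --output_dir value with a bare backslash, which the Windows command line can misread as an escaped quote:
python mo_tf.py --input_model="C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\frozen_inference_graph.pb" --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support.json" --output_dir "D:\IR" --tensorflow_object_detection_api_pipeline_config "C:\Program Files (x86)\IntelSWTools\openvino_2019.3.379\deployment_tools\open_model_zoo\models\public\faster_rcnn_resnet101_coco_2018_01_28\pipeline.config" --reverse_input_channels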
Regards,
Ram prasad
