Alonzo__Magaly
Beginner
131 Views

Faster R-CNN Model Optimizer error

Hi,

I'm trying to run the Model Optimizer on my custom Faster R-CNN model.

This model has one input (image:0) and four outputs (num_detections:0, detection_boxes:0, detection_scores:0, detection_classes:0).

I'm working in an Ubuntu 16.04 VM (in case that influences the result).

I run:

python3 mo_tf.py --input_model /home/movidius/inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config  ~/model_test/pipeline.config


And got this output:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/movidius/inference_graph.pb
	- Path for generated IR: 	/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/.
	- IR output name: 	inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/home/movidius/model_test/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json
Model Optimizer version: 	1.2.185.5335e231
[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  0
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 321, in main
    return driver(argv)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    mean_scale_values=mean_scale)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 171, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 102, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 91, in find_and_replace_pattern
    self.replace_sub_graph(graph, match)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 115, in replace_sub_graph
    new_sub_graph = self.generate_sub_graph(graph, match)
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 133, in generate_sub_graph
    sub_node = match.output_node(0)[0]
  File "/opt/intel/computer_vision_sdk_2018.3.343/deployment_tools/model_optimizer/mo/front/subgraph_matcher.py", line 130, in output_node
    return self._output_nodes_map[port]
KeyError: 0

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------


Any idea what's wrong?


Regards.

Magaly

3 Replies
Severine_H_Intel
Employee

Hi Magaly, 

As you have retrained your own Faster R-CNN model, you should use faster_rcnn_support_api_v1.7.json instead of faster_rcnn_support.json.
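With that change, and keeping the paths from your original command, the invocation would look like this (a sketch; I'm assuming the v1.7 JSON sits in the same extensions/front/tf directory as faster_rcnn_support.json, where the toolkit installs it):

```shell
python3 mo_tf.py \
    --input_model /home/movidius/inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.7.json \
    --tensorflow_object_detection_api_pipeline_config ~/model_test/pipeline.config
```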

Best, 

Severine

Alonzo__Magaly
Beginner

Hi,

Thank you for your answer.

Indeed, I've retrained the model and tried with that *.json file, changing only the input and output tensors (which are basically what I modified from the original Faster R-CNN). I finally got the optimizer working, but from the *.meta file. This is enough for the tests we are running, but it will be convenient to use the .pb later, so I'll dive into the *.json file.
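In case it helps anyone, running from the checkpoint instead of the frozen graph uses the --input_meta_graph flag. A sketch of what I ran (the exact .meta path below is illustrative, not my real one):

```shell
python3 mo_tf.py \
    --input_meta_graph ~/model_test/model.ckpt.meta \
    --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.7.json \
    --tensorflow_object_detection_api_pipeline_config ~/model_test/pipeline.config
```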

Thank you again

Regards,

Magaly

Rosenberg__Eran
Beginner

Hi,

I trained a faster_rcnn_resnet50 model on the Oxford Pets dataset, using the TensorFlow Object Detection API.

The Model Optimizer fails on frozen_inference_graph.pb.

C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer>python mo_tf.py --input_model d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\faster_rcnn_resnet50_pets_shay.config
Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\frozen_inference_graph.pb
        - Path for generated IR:        C:\Intel\computer_vision_sdk_2018.3.343\deployment_tools\model_optimizer\.
        - IR output name:       frozen_inference_graph
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Not specified, inherited from the model
        - Input shapes:         Not specified, inherited from the model
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         Not specified
        - Precision of IR:      FP32
        - Enable fusing:        True
        - Enable grouped convolutions fusing:   True
        - Move mean values to preprocess section:       False
        - Reverse input channels:       False
TensorFlow specific parameters:
        - Input model in text protobuf format:  False
        - Offload unsupported operations:       False
        - Path to model dump for TensorBoard:   None
        - Update the configuration file with input/output node names:   None
        - Use configuration file used to generate the model with Object Detection API:  d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\faster_rcnn_resnet50_pets_shay.config
        - Operations to offload:        None
        - Patterns to offload:  None
        - Use the config file:  None
Model Optimizer version:        1.2.185.5335e231
[ ERROR ]  Node Preprocessor/map/while/ResizeToRange/unstack has more than one outputs. Provide output port explicitly.
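I notice the log says "Use the config file: None" — I didn't pass --tensorflow_use_custom_operations_config. Following the advice above, I suppose the command should look like this (a sketch; I'm assuming the v1.7 JSON is the right one for my export):

```shell
python mo_tf.py ^
    --input_model d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config extensions\front\tf\faster_rcnn_support_api_v1.7.json ^
    --tensorflow_object_detection_api_pipeline_config d:\TFS\LPR\IP\MAIN\SRC\PythonProjects\TensorFlow\FreezeGraph\FreezeGraph\faster_rcnn_resnet50_pets_shay\faster_rcnn_resnet50_pets_shay.config
```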


If I could run the optimizer on the meta checkpoint file instead, that would be great. Can you tell me how?

Thanks.

My files can be viewed at:

https://www.dropbox.com/sh/dh1c325m0t22qsn/AAAJRfedjbF0uMsTLWyS6uVYa?dl=0
