Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Model Optimizer not working for FasterRCNN Resnet50 COCO


I have been trying to use the Model Optimizer on the FasterRCNN ResNet50 COCO model (this one, specifically) without success.
I downloaded the model using the downloader tool, and I want to use it with the object detection Python sample, but I can't seem to convert it to FP16 IR.
This is the log:

pi@raspberrypi:~/openvino/model-optimizer $ ./ --reverse_input_channels --input_shape=[1,600,1024,3] --input=image_tensor --output=detection_scores,detection_boxes,num_detections --transformations_config=extensions/front/tf/faster_rcnn_support.json --tensorflow_object_detection_api_pipeline_config=/home/pi/open_model_zoo/tools/downloader/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco_2018_01_28/pipeline.config --input_model=/home/pi/open_model_zoo/tools/downloader/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco_2018_01_28/frozen_inference_graph.pb --data_type=FP16
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/pi/open_model_zoo/tools/downloader/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco_2018_01_28/frozen_inference_graph.pb
- Path for generated IR: /home/pi/openvino/model-optimizer/.
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: image_tensor
- Output layers: detection_scores,detection_boxes,num_detections
- Input shapes: [1,600,1024,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /home/pi/open_model_zoo/tools/downloader/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco_2018_01_28/pipeline.config
- Use the config file: None
- Inference Engine found in: /home/pi/openvino/bin/armv7l/Release/lib/python_api/python3.7/openvino
Inference Engine version: 2.1.custom_master_4946d2e62c86ee4b63fb1ef9f9e612c3baded4a7
Model Optimizer version: custom_master_4946d2e62c86ee4b63fb1ef9f9e612c3baded4a7
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes have been replaced with a single layer of type "DetectionOutput". Refer to the operation set specification documentation for more information about the operation.
[ ERROR ] Cannot infer shapes or values for node "SecondStageBoxPredictor/BoxEncodingPredictor/biases/read/_0__cf__6/SwappedWeights".
[ ERROR ] Cannot cast array data from dtype('int64') to dtype('int32') according to the rule 'safe'
[ ERROR ] It can happen due to bug in custom shape infer function <function Gather.infer at 0x9a95e4f8>.
[ ERROR ] Or because the node inputs have incorrect values/shapes.
[ ERROR ] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ] Run Model Optimizer with --log_level=DEBUG for more information.
[ ANALYSIS INFO ] Your model looks like TensorFlow Object Detection API Model.
Check if all parameters are specified:
--input_shape (optional)
--reverse_input_channels (if you convert a model to use with the Inference Engine sample applications)
Detailed information about conversion of this model can be found at
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "SecondStageBoxPredictor/BoxEncodingPredictor/biases/read/_0__cf__6/SwappedWeights" node.
For more information please refer to Model Optimizer FAQ, question #38.
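For reference, the script name at the top of the pasted log was cut off. As an illustration only, a typical invocation of the TensorFlow entry point for this conversion might look like the sketch below; `mo_tf.py` as the entry point and the working directory are assumptions, while the flags and paths are taken from the log above.

```shell
# Sketch only: mo_tf.py is an assumed entry point (the script name is
# missing in the pasted log); flags and paths match the log above.
MODEL=/home/pi/open_model_zoo/tools/downloader/public/faster_rcnn_resnet50_coco/faster_rcnn_resnet50_coco_2018_01_28

python3 mo_tf.py \
    --input_model="$MODEL/frozen_inference_graph.pb" \
    --tensorflow_object_detection_api_pipeline_config="$MODEL/pipeline.config" \
    --transformations_config=extensions/front/tf/faster_rcnn_support.json \
    --input=image_tensor \
    --input_shape=[1,600,1024,3] \
    --output=detection_scores,detection_boxes,num_detections \
    --reverse_input_channels \
    --data_type=FP16
```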

Does anyone know what I'm doing wrong? Alternatively, does anybody have the .bin and .xml files for this model and wouldn't mind sharing them?



4 Replies

For Open Model Zoo models we recommend using the Model Downloader and Model Converter scripts provided in Open Model Zoo. The Model Converter will itself call Model Optimizer to convert the model to IR, but the benefit of using Model Optimizer this way is that, for every model distributed through Open Model Zoo, we have prepared and validated the appropriate Model Optimizer parameters, and the script constructs the MO command line with the correct parameters. You can look up these parameters for each OMZ model in its model.yml file and call MO with them yourself, but the OMZ script does this for you in an easier way: just call python --name faster_rcnn_resnet50_coco.
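Spelled out, the recommended flow looks roughly like the following sketch; the checkout path is an assumption, and both scripts live in the tools/downloader directory of Open Model Zoo.

```shell
# Adjust to your open_model_zoo checkout; this path is an assumption.
cd /home/pi/open_model_zoo/tools/downloader

# Fetch the original TensorFlow model:
python3 downloader.py --name faster_rcnn_resnet50_coco

# Convert it to IR, reusing the validated Model Optimizer arguments
# recorded in the model's model.yml:
python3 converter.py --name faster_rcnn_resnet50_coco --precisions FP16
```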



I did try that and the result was the same. However, I managed to convert the model using OpenVINO and the Model Optimizer on Windows, and I transferred the results to my Raspberry Pi.
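Once the .xml/.bin pair is copied over, it can be fed to the object detection sample on the Pi. A rough sketch follows; the sample script name, file paths, and input image are assumptions, and MYRIAD assumes a Neural Compute Stick as the inference device (the only device the Raspberry Pi build of OpenVINO targets).

```shell
# All names/paths below are assumptions: adjust to where the converted
# IR and the OpenVINO Python samples live on your Pi.
python3 object_detection_sample_ssd.py \
    -m frozen_inference_graph.xml \
    -i input.jpg \
    -d MYRIAD
```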


I missed that you are trying to convert the model on a Raspberry Pi board. That may not work for various reasons: for example, the TensorFlow module built for ARM may have some limitations, or there might not be a sufficient amount of memory available on the board.


Hi Sebastian,

This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.