I'm trying to use the Model Optimizer to convert a retrained faster_rcnn_inception TensorFlow model to IR, but I'm running into problems. The Model Optimizer works great on the original faster_rcnn_inception_v2_coco_2018_01_28 model from the Object Detection Model Zoo using this command:
python mo_tf.py --input_model c:\Intel\models\orig_tf_model\frozen_inference_graph.pb \
  --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json \
  --tensorflow_object_detection_api_pipeline_config C:\Intel\models\orig_tf_model\pipeline.config \
  --input_shape [1,600,600,3]
However, with my re-trained faster_rcnn_inception model I get this output:
python mo_tf.py --input_model c:\Intel\models\trained_12_29_all_640\frozen\frozen_inference_graph.pb \
  --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.7.json \
  --tensorflow_object_detection_api_pipeline_config C:\Intel\models\trained_12_29_all_640\frozen\pipeline.config \
  --input_shape [1,600,600,3]

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: c:\Intel\models\trained_12_29_all_640\frozen\frozen_inference_graph.pb
- Path for generated IR: c:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\.
- IR output name: frozen_inference_graph
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,600,600,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Offload unsupported operations: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: C:\Intel\models\trained_12_29_all_640\frozen\pipeline.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: c:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\extensions/front/tf/faster_rcnn_support_api_v1.7.json
Model Optimizer version: 1.5.12.49d067a0
c:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\front\tf\loader.py:122: RuntimeWarning: Unexpected end-group tag: Not all data was converted
  graph_def.ParseFromString(f.read())
[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size. The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ ERROR ] Graph contains 0 node after executing partial_infer. It considered as error because resulting IR will be empty which is not usual
Exactly the same output is produced if I use --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json instead.
Any idea what is going wrong? This warning seems to be the crux of the matter:

c:\Intel\computer_vision_sdk_2018.5.445\deployment_tools\model_optimizer\mo\front\tf\loader.py:122: RuntimeWarning: Unexpected end-group tag: Not all data was converted
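For anyone hitting the same message: that RuntimeWarning comes from protobuf failing to fully parse frozen_inference_graph.pb, which usually means the file is truncated or corrupted. You can reproduce the parse step outside the Model Optimizer with a few lines of TensorFlow. A minimal sketch, assuming a TF 1.x environment and with the path as a placeholder for your own model:

import tensorflow as tf  # Object Detection API models of this era target TF 1.x

# Placeholder path -- point this at the frozen graph you are trying to convert.
PB_PATH = r"c:\Intel\models\trained_12_29_all_640\frozen\frozen_inference_graph.pb"

graph_def = tf.GraphDef()
with open(PB_PATH, "rb") as f:
    # Same call the Model Optimizer makes in loader.py. A corrupt or truncated
    # file either raises google.protobuf.message.DecodeError or (with the C++
    # protobuf backend) emits the "Unexpected end-group tag" warning and leaves
    # the graph partially parsed.
    graph_def.ParseFromString(f.read())

print("Parsed %d nodes" % len(graph_def.node))  # 0 nodes => the file is bad

If this prints 0 nodes or raises, the .pb file itself is damaged and no custom operations config will help.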
I figured out what was wrong. The frozen_inference_graph.pb file was corrupted when I moved it from one place to another. When I ran the Model Optimizer on a non-corrupted copy, it worked perfectly with the faster_rcnn_support_api_v1.7.json custom operations config.
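One way to catch this kind of transfer corruption early is to compare checksums of the original file and the moved copy before running the Model Optimizer. A minimal sketch (the paths are placeholders, not from this thread):

import hashlib

def sha256_of(path):
    """SHA-256 of a file, read in 1 MiB chunks so large .pb files are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the graph where it was exported, and the copy you moved.
original = r"c:\Intel\models\trained_12_29_all_640\frozen\frozen_inference_graph.pb"
copied = r"c:\work\frozen_inference_graph.pb"

if sha256_of(original) != sha256_of(copied):
    print("MISMATCH -- the copy is corrupted; re-copy the file")
else:
    print("Checksums match")

A mismatch here would have flagged the corrupted copy immediately, before the Model Optimizer produced the confusing empty-graph error.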