Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Unrecognized command when converting Mask R-CNN from the TensorFlow Object Detection API

Karmeo
Novice

Hello! When converting a model from the Object Detection API, I am following these instructions: https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html

I get the following error:

Command:

python .\mo_tf.py --input_model=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb --transformation_config C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipline_config C:\Users\Anna\Downloads\inference_graph\inference_graph\pipeline.config

Model Optimizer output:
usage: mo_tf.py [-h] [--input_model INPUT_MODEL] [--model_name MODEL_NAME]
[--output_dir OUTPUT_DIR] [--input_shape INPUT_SHAPE]
[--scale SCALE] [--reverse_input_channels]
[--log_level {CRITICAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}]
[--input INPUT] [--output OUTPUT] [--mean_values MEAN_VALUES]
[--scale_values SCALE_VALUES]
[--data_type {FP16,FP32,half,float}] [--disable_fusing]
[--disable_resnet_optimization]
[--finegrain_fusing FINEGRAIN_FUSING] [--disable_gfusing]
[--enable_concat_optimization] [--move_to_preprocess]
[--extensions EXTENSIONS] [--batch BATCH] [--version]
[--silent]
[--freeze_placeholder_with_value FREEZE_PLACEHOLDER_WITH_VALUE]
[--generate_deprecated_IR_V2] [--keep_shape_ops] [--steps]
[--input_model_is_text] [--input_checkpoint INPUT_CHECKPOINT]
[--input_meta_graph INPUT_META_GRAPH]
[--saved_model_dir SAVED_MODEL_DIR]
[--saved_model_tags SAVED_MODEL_TAGS]
[--tensorflow_subgraph_patterns TENSORFLOW_SUBGRAPH_PATTERNS]
[--tensorflow_operation_patterns TENSORFLOW_OPERATION_PATTERNS]
[--tensorflow_custom_operations_config_update TENSORFLOW_CUSTOM_OPERATIONS_CONFIG_UPDATE]
[--tensorflow_use_custom_operations_config TENSORFLOW_USE_CUSTOM_OPERATIONS_CONFIG]
[--tensorflow_object_detection_api_pipeline_config TENSORFLOW_OBJECT_DETECTION_API_PIPELINE_CONFIG]
[--tensorboard_logdir TENSORBOARD_LOGDIR]
[--tensorflow_custom_layer_libraries TENSORFLOW_CUSTOM_LAYER_LIBRARIES]
[--disable_nhwc_to_nchw]

System error:


mo_tf.py: error: unrecognized arguments: --transformation_config C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipline_config C:\Users\Anna\Downloads\inference_graph\inference_graph\pipeline.config
PS C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer>

4 Replies
JesusE_Intel
Moderator

Hi Karmeo,


There is a typo in your parameter name; it should be --transformations_config. You can also use --tensorflow_use_custom_operations_config to specify the configuration file.
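Note that --tensorflow_object_detection_api_pipline_config is also misspelled (it is missing an "e" in "pipeline"). With both flag names corrected, the command from the original post would look roughly like this (same paths as before):

python .\mo_tf.py --input_model=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb --transformations_config C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config C:\Users\Anna\Downloads\inference_graph\inference_graph\pipeline.config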


Regards,

Jesus


Karmeo
Novice

 

Thanks. OK, I corrected the typo, and now I get the following error:

python .\mo.py --input_model=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb --input_checkpoint=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\checkpoint --tensorflow_use_custom_operations_config=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config=C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\pipeline.config


Model Optimizer arguments:
Common parameters:
- Path to the Input Model: C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb
- Path for generated IR: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\.
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\pipeline.config
- Operations to offload: None
- Patterns to offload: None
- Use the config file: C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json
Model Optimizer version: 2019.3.0-408-gac8584cb7
2020-08-13 10:21:49.787582: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll


[ FRAMEWORK ERROR ] Error parsing message
TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.


Cannot load input model: Error parsing message
TensorFlow cannot read the model file: "C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\saved_model.pb" is incorrect TensorFlow model file.
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph

Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message.
For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #43.

I got the saved_model.pb file after training the model with the Object Detection API.

I have uploaded the source files in an archive here:

https://yadi.sk/d/ZikVWOo8fvi35w

JesusE_Intel
Moderator

Hi Karmeo,

 

Please make sure to freeze the TensorFlow model after training. Take a look at the Freezing Custom Models in Python* section in the documentation. Also, you are using an outdated version of the OpenVINO toolkit; please update to the latest 2020.4 or the 2020.3 LTS release.
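As a rough sketch (the checkpoint prefix below is a placeholder for your latest training checkpoint, and the output directory name is only an example), models trained with the Object Detection API are typically frozen with the API's export_inference_graph.py script, and Model Optimizer is then pointed at the resulting frozen_inference_graph.pb instead of saved_model.pb:

python object_detection\export_inference_graph.py --input_type image_tensor --pipeline_config_path C:\Users\Anna\Downloads\inference_graph\inference_graph\pipeline.config --trained_checkpoint_prefix C:\path\to\training\model.ckpt-XXXX --output_directory C:\Users\Anna\Downloads\inference_graph\frozen

python .\mo_tf.py --input_model C:\Users\Anna\Downloads\inference_graph\frozen\frozen_inference_graph.pb --transformations_config C:\Users\Anna\Downloads\inference_graph\inference_graph\saved_model\mask_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config C:\Users\Anna\Downloads\inference_graph\inference_graph\pipeline.config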

 

Regards,

Jesus

 

JesusE_Intel
Moderator

Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.

