We are trying to convert an ssd_mobiledet model to FP32 and FP16 IRs. We have tried two patterns of input arguments:
1. python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --reverse_input_channels --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,320,320,3] --input_checkpoint checkpoint
We've got:
```
[ WARNING ] Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/eff_ir
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,320,320,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: /home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/pipeline.config
- Inference Engine found in: /home/macnicadhw/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.transformations_config.TransformationsConfig'>): Failed to parse custom replacements configuration file '/home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/pipeline.config': Expecting value: line 1 column 1 (char 0).
For more information please refer to Model Optimizer FAQ, question #70. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=70#question-70)
```
2. python3 ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/mo_tf.py --saved_model_dir saved_model/ --output_dir eff_ir --transformations_config ~/intel/openvino_2021.4.689/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.15.json --tensorflow_use_custom_operations_config pipeline.config --input_shape [1,320,320,3] --input_checkpoint checkpoint
We've got:
```
[ WARNING ] Use of deprecated cli option --tensorflow_use_custom_operations_config detected. Option use in the following releases will be fatal. Please use --transformations_config cli option instead
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: None
- Path for generated IR: /home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/eff_ir
- IR output name: saved_model
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: [1,320,320,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Use the config file: /home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/pipeline.config
- Inference Engine found in: /home/macnicadhw/intel/openvino_2021.4.689/python/python3.6/openvino
Inference Engine version: 2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version: 2021.4.1-3926-14e67d86634-releases/2021/4
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
[ ERROR ] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.front.transformations_config.TransformationsConfig'>): Failed to parse custom replacements configuration file '/home/macnicadhw/Documents/ambev-autoML/models/mobiledet-20210824T181234Z-001/mobiledet/pipeline.config': Expecting value: line 1 column 1 (char 0).
For more information please refer to Model Optimizer FAQ, question #70. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=70#question-70)
```
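(For reference: the pipeline.config produced by the TensorFlow Object Detection API is a protobuf text file, while --tensorflow_use_custom_operations_config / --transformations_config expect a JSON file such as ssd_support_api_v1.15.json, so the JSON parser fails on the very first character, hence "Expecting value: line 1 column 1". A quick sketch reproducing the parse failure; the file contents below are hypothetical:)

```shell
# pipeline.config is protobuf text, not JSON, so a JSON parser rejects it
# at the first character -- the same "Expecting value: line 1 column 1" error.
printf 'model {\n  ssd {\n    num_classes: 90\n  }\n}\n' > /tmp/pipeline.config
python3 -m json.tool /tmp/pipeline.config || echo "not valid JSON"
```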
Hi GustavoLMourao,
Thanks for reaching out.
Which ssd_mobilenet model are you using? Please share your model, or its source, so we can test it on our machine. Based on your error, the custom replacement configuration file provided with the --transformations_config flag cannot be parsed; in particular, it must have a valid JSON structure. For more details, refer to the JSON Schema Reference.
Meanwhile, try the command below to convert your model and see whether the same issue arises. Make sure you are using the correct configuration file.
Command:
mo.py --saved_model_dir "Download\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\saved_model" --reverse_input_channels --input_shape=[1,640,640,3] --transformations_config "<INSTALL_DIR>\openvino_2021.4.582\deployment_tools\model_optimizer\extensions\front\tf\ssd_support_api_v2.0.json" --tensorflow_object_detection_api_pipeline_config "Download\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\pipeline.config" --output_dir "Download\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8"
Regards,
Aznie
Hello Aznie,
We have tried these parameters:
```
python3 mo.py \
    --input_model path/to/mobiledet/frozen_inference_graph.pb \
    --transformations_config transformations_config/tf/ssd_support_api_v1.15.json \
    --tensorflow_object_detection_api_pipeline_config path/to/mobiledet/pipeline.config
```
And the model converted successfully.
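(For the FP16 IR mentioned at the start of the thread, the same command can presumably be rerun with the --data_type flag; the paths below are the same placeholders as in the command above:)

```shell
# Same conversion, but emitting an FP16 IR (FP32 is the default --data_type).
python3 mo.py \
    --input_model path/to/mobiledet/frozen_inference_graph.pb \
    --transformations_config transformations_config/tf/ssd_support_api_v1.15.json \
    --tensorflow_object_detection_api_pipeline_config path/to/mobiledet/pipeline.config \
    --data_type FP16
```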
Thanks
Hi Gustavo,
Glad that you've been able to convert your model to IR successfully. However, I would like to highlight that you need to add the --reverse_input_channels parameter for accurate inference results if you want to use your model, which is a TensorFlow Object Detection API model, with Inference Engine sample applications.
For more information, please refer to the following pages:
When to Reverse Input Channels
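(As a rough illustration, with made-up pixel values: --reverse_input_channels makes the generated IR expect the opposite channel order, which is equivalent to reversing the last axis of each pixel at preprocessing time:)

```shell
# What channel reversal does to one pixel: RGB [10, 20, 30] becomes BGR order.
python3 -c 'pixel = [10, 20, 30]; print(pixel[::-1])'   # prints [30, 20, 10]
```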
Regards,
Munesh
Hi Gustavo,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie