I have converted some object detection models to IR format. In each case, I run the Model Optimizer as follows:
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --data_type FP16 \
  --reverse_input_channels \
  --batch 1 \
  --tensorflow_use_custom_operations_config /opt/intel/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
  --tensorflow_object_detection_api_pipeline_config pipeline.config
It works. However, I have also seen people who don't set these two parameters. What's more, I didn't modify the file paths in pipeline.config, such as:
train_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record"
  }
}
Does that matter?
Hi we,
Thanks for reaching out. The PATH_TO_BE_CONFIGURED paths are only used when training the model. When running the Model Optimizer, you only need to pass the pipeline.config file with the --tensorflow_object_detection_api_pipeline_config parameter, as you mentioned.
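If you want to double-check the result yourself, here is a minimal sketch that loads the converted IR with the Inference Engine Python API. It assumes the default output names frozen_inference_graph.xml/.bin in the current directory and the legacy openvino.inference_engine module (the exact calls vary between OpenVINO releases), so treat it as an illustration rather than an official verification step:

# Minimal sketch: load the converted IR to confirm the Model Optimizer
# output is usable. Assumes frozen_inference_graph.xml/.bin in the current
# directory and the legacy openvino.inference_engine API (pre-2022 releases).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_inference_graph.xml",
                      weights="frozen_inference_graph.bin")

# The PATH_TO_BE_CONFIGURED entries in pipeline.config never reach the IR;
# only the graph topology and weights are converted.
print("Inputs:", list(net.input_info.keys()))   # e.g. 'image_tensor'
print("Outputs:", list(net.outputs.keys()))     # e.g. 'DetectionOutput'

exec_net = ie.load_network(network=net, device_name="CPU")

If read_network and load_network succeed, the conversion is complete regardless of the placeholder paths left in pipeline.config.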
Please let me know if this answers your question.
Regards,
Jesus
Thanks, I see. I need to study the documentation more carefully.