Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Quantized SSD MobileNet v1 conversion and benchmark

lipkin__semen
Beginner

Hi,

I'm trying to benchmark two SSD MobileNet v1 models, float and quantized (ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03), from the TF Object Detection API model zoo (the quantized model was obtained by adding the graph_rewriter block to the pipeline config), and I've run into several problems:

1. The float model was converted fine, but the benchmark result is only 1.92 FPS with 1 thread on the CPU (an Intel® Core™ i7-6700 CPU @ 3.40GHz × 4, running in VirtualBox).
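
For reference, I run the benchmark roughly like this (a minimal sketch; the IR path and input are illustrative, and the flag names should be verified against ./benchmark_app -h for this release):

# benchmark_app is built from deployment_tools/inference_engine/samples;
# -nthreads 1 limits CPU inference to a single thread, -api sync gives a
# latency-style FPS number over -niter iterations
./benchmark_app \
    -m ~/trained_models/converted/float/frozen_inference_graph.xml \
    -i <path_to_sample_image> \
    -d CPU \
    -api sync \
    -nthreads 1 \
    -niter 100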

2. The quantized model fails to convert. I'm trying to convert it with the following command:

/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
    --input_model=/home/osboxes/trained_models/quantized_model/frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ~/trained_models/ssd_mobilenet_v1_fpn_input640_iv_voc+coco+ivcloud_3cl_2019_04_08_quantized/pipeline_from_SAMPLES_ssd_mobilenet_v1_fpn_640.config \
    --output_dir ~/trained_models/converted/quantized/ \
    --reverse_input_channels

 

However, it fails with the following error:

[ ERROR ]  -------------------------------------------------
[ ERROR ]  ----------------- INTERNAL ERROR ----------------
[ ERROR ]  Unexpected exception happened.
[ ERROR ]  Please contact Model Optimizer developers and forward the following information:
[ ERROR ]  Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)": FakeQuantWithMinMaxVars
[ ERROR ]  Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 167, in apply_replacements
    replacer.find_and_replace_pattern(graph)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/front/tf/replacement.py", line 89, in find_and_replace_pattern
    self.replace_sub_graph(graph, match)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/front/common/replacement.py", line 131, in replace_sub_graph
    new_sub_graph = self.generate_sub_graph(graph, match)  # pylint: disable=assignment-from-no-return
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 924, in generate_sub_graph
    _relax_reshape_nodes(graph, pipeline_config)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/extensions/front/tf/ObjectDetectionAPI.py", line 161, in _relax_reshape_nodes
    assert (old_reshape_node.op == 'Reshape'), old_reshape_node.op
AssertionError: FakeQuantWithMinMaxVars

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 312, in main
    return driver(argv)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/main.py", line 263, in driver
    is_binary=not argv.input_model_is_text)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/pipeline/tf.py", line 127, in tf2nx
    class_registration.apply_replacements(graph, class_registration.ClassType.FRONT_REPLACER)
  File "/opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/mo/utils/class_registration.py", line 190, in apply_replacements
    )) from err
Exception: Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>)": FakeQuantWithMinMaxVars

[ ERROR ]  ---------------- END OF BUG REPORT --------------
[ ERROR ]  -------------------------------------------------
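
The assertion trips on a FakeQuantWithMinMaxVars op where the SSD post-processor replacer expects a Reshape, i.e. the graph_rewriter has inserted quantization nodes into the subgraph the replacer matches. A quick check that the frozen graph really contains these nodes (a minimal sketch, TF 1.x assumed installed):

python3 - <<'EOF'
import tensorflow as tf

# Load the frozen graph and count the FakeQuant nodes added by graph_rewriter
graph_def = tf.GraphDef()
with open('/home/osboxes/trained_models/quantized_model/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

fq_nodes = [n.name for n in graph_def.node if n.op == 'FakeQuantWithMinMaxVars']
print('FakeQuantWithMinMaxVars nodes:', len(fq_nodes))
EOF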

Is there any way to work around this error and benchmark the quantized model? Also, what could be the reason for the float model running so slowly?

 

Shubha_R_Intel
Employee

Dear lipkin, semen,

It could be a Model Optimizer bug. Can you kindly attach your new pipeline.config for the quantized model here?

Thanks,

Shubha

lipkin__semen
Beginner

Hi, Shubha,

Thanks for the quick reply; here is the config for the quantized model.

Shubha_R_Intel
Employee

Thank you, lipkin, semen. I will take a look.

Shubha

Shubha_R_Intel
Employee

Dear lipkin, semen,

By "quantized model" do you mean TensorFlow Lite? I'm reading about quantized models in the TensorFlow documentation here. OpenVINO does not support TensorFlow Lite today.

We definitely support INT8 quantization, but not at the Model Optimizer stage. Instead, we have a suite of calibration tools that handle this. Please read my detailed reply to dldt GitHub issue 171.
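
For reference, the INT8 flow in the 2019 R1 release is: convert to an FP32 IR with the Model Optimizer first, then run the calibration tool on that IR. A rough sketch of the invocation (the tool location matches the 2019 R1 layout, but the config file name and the flags here are assumptions; check the calibration tool documentation):

cd /opt/intel/openvino/deployment_tools/tools/calibration_tool
# -c points at a YAML config in the accuracy_checker format describing the
# model, dataset, and metric ("ssd_mobilenet_v1.yml" is a hypothetical name);
# -M at the Model Optimizer directory; both flags are assumptions from the docs
python3 calibrate.py \
    -c ssd_mobilenet_v1.yml \
    -M /opt/intel/openvino/deployment_tools/model_optimizer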

Hope it helps,

Thanks,

Shubha

 
