Rahila_T_Intel
Employee

Unable to convert OpenVINO FP32 Faster R-CNN model to INT8

Hi,

I was trying to do INT8 optimization on a TensorFlow model.

Model Name- faster_rcnn_inception_v2_coco_2018_01_28

I initially created the OpenVINO IR files using the command below:

python mo_tf.py --input_model frozen_inference_graph.pb --output_dir IR-FP32 --data_type FP32 --tensorflow_use_custom_operations_config openvino_2021.1.110/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --input_shape [1,450,450,3]

This created the .xml and .bin files, along with the following two warnings:

[ WARNING ] Model Optimizer removes pre-processing block of the model which resizes image keeping aspect ratio. The Inference Engine does not support dynamic image size so the Intermediate Representation file is generated with the input image size of a fixed size.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

[ WARNING ] Network has 2 inputs overall, but only 1 of them are suitable for input channels reversing.
Suitable for input channel reversing inputs are 4-dimensional with 3 channels
All inputs: {'image_tensor': [1, 3, 450, 450], 'image_info': [1, 3]}
Suitable inputs {'image_tensor': [1, 3, 450, 450]}
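
For reference, here is a minimal inference sketch (assuming the OpenVINO 2021 Inference Engine Python API; the file and image names are placeholders) showing how the two inputs reported in the second warning would be fed:

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="frozen_inference_graph.xml", weights="frozen_inference_graph.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Resize to the fixed 450x450 input baked into the IR, then HWC -> NCHW
image = cv2.resize(cv2.imread("input.jpg"), (450, 450))
blob = image.transpose(2, 0, 1)[np.newaxis, :]

# image_info carries [height, width, scale] for the detection head
image_info = np.array([[450, 450, 1]], dtype=np.float32)

results = exec_net.infer({"image_tensor": blob, "image_info": image_info})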

 

Now I need to convert this FP32 model to INT8. I tried with faster_rcnn_resnet50_coco_int8.json and faster_rcnn_resnet50_coco_int8.yml.

But it fails with the error below:

return data.reshape(input_shape) if not self.disable_resize_to_input else data
ValueError: cannot reshape array of size 1843200 into shape (1,3,450,450)
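
(For context, the numbers point to a preprocessing mismatch: 1,843,200 = 600 × 1024 × 3, which suggests the resnet50 configuration resizes images to 600×1024, while this IR expects 1 × 3 × 450 × 450 = 607,500 values.)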

 

Could you please help me with this optimization?

Iffa_Intel
Moderator

Greetings,


You can refer to this video on calibrating your FP32 model to INT8: https://www.youtube.com/watch?v=XkD8ae8uWes


The scale values are also very important, and they need to be precise for your model. This video shows how to get the exact scale values: https://www.youtube.com/watch?v=-8_yRzN-fTY
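
For example, mean and scale values can be passed to Model Optimizer on the command line (the numbers below are purely illustrative, not the correct values for this model):

python mo_tf.py --input_model frozen_inference_graph.pb --mean_values [127.5,127.5,127.5] --scale_values [127.5,127.5,127.5]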



Sincerely,

Iffa


Rahila_T_Intel
Employee

Could you please share example .json and .yml files to use for OpenVINO Post-Training Optimization on my FP32 faster_rcnn_inception_v2_coco_2018_01_28 model?

 

 

Iffa_Intel
Moderator

This might help:

https://docs.openvinotoolkit.org/2021.2/workbench_docs_Workbench_DG_Import_TensorFlow.html


Check the other tabs too.



Sincerely,

Iffa


Rahila_T_Intel
Employee

You have shared a link for converting the .pb model to FP32/FP16, which I have already done.

 

I need help with the INT8 conversion. Could you please help me convert the FP32 faster_rcnn_inception_v2_coco_2018_01_28 model to INT8?

Iffa_Intel
Moderator

If possible, could you share your model here?



Sincerely,

Iffa


Rahila_T_Intel
Employee

Iffa_Intel
Moderator

We are investigating this and will get back to you as soon as possible.


Sincerely,

Iffa


Rahila_T_Intel
Employee

Thank you

 

Iffa_Intel
Moderator

Greetings,

We were able to quantize the FP32 IR model.

Note: if you are using Linux, use the python3 command instead of python.

First, use the Model Downloader to download the model (you can find these scripts in deployment_tools/tools/model_downloader):

python downloader.py --name faster_rcnn_inception_v2_coco

Then convert using converter.py:

python converter.py --name faster_rcnn_inception_v2_coco

If that doesn't work, use:

python converter.py --name faster_rcnn_inception_v2_coco --mo <location of model_optimizer> --precision FP32

 

 

Next, you need to download and cut the COCO dataset:

https://docs.openvinotoolkit.org/latest/workbench_docs_Workbench_DG_Download_and_Cut_Datasets.html#c...

 

I have attached the files that you need to use. You may refer here for how to perform the POT: https://docs.openvinotoolkit.org/latest/pot_configs_examples_README.html
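
As a rough sketch of what the POT configuration can look like (this assumes the JSON format used by POT in OpenVINO 2021; all paths, the model name, and the referenced Accuracy Checker .yml are placeholders):

{
    "model": {
        "model_name": "faster_rcnn_inception_v2_coco",
        "model": "IR-FP32/frozen_inference_graph.xml",
        "weights": "IR-FP32/frozen_inference_graph.bin"
    },
    "engine": {
        "config": "./faster_rcnn_inception_v2_coco.yml"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}

The engine section points to an Accuracy Checker .yml describing the dataset and preprocessing, and quantization is then launched with pot -c <config>.json.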

Then you should have the INT8 file.

Sincerely,

Iffa

 

 

 

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question. 


Sincerely,

Iffa