Tomasz_S_Intel
Employee

How to convert the ssd_mobilenet_v1_coco OpenVINO model from FP32 -> INT8

I am trying to quantize the OpenVINO SSD-MobileNet COCO model from FP32 to INT8 according to: https://docs.openvinotoolkit.org/2019_R1/_inference_engine_tools_calibration_tool_README.html

I downloaded the ssd_mobilenet_v1_coco TensorFlow model and converted it into OpenVINO IR format.

I also downloaded the COCO dataset.

According to the calibration tool documentation, annotations for object detection should be in VOC format.

I used this code to convert the annotations into VOC format: https://github.com/JingyuanHu/COCO2VOC

After successfully converting the annotations, I copied them into the directory where the COCO images live. Hopefully this is correct – I could not find any 'official' documentation about the VOC format. I downloaded VOCtest_06-Nov-2007.tar, but its directory structure looks different and contains more files.

My directory looks like this:

  …
  ├── 000000581317.jpg
  ├── 000000581317.xml
  ├── 000000581357.jpg
  └── 000000581357.xml

The XML files above are in VOC format.
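For comparison, this is roughly what a minimal VOC-style annotation looks like. The field set below follows the published Pascal VOC 2007 schema, so treat it as a reference point rather than a guarantee of what the calibration tool expects, and compare it against the files COCO2VOC produced:

```shell
# Write a minimal Pascal VOC-style annotation next to its image.
# The fields below follow the published VOC 2007 schema; whether the
# calibration tool needs all of them is an assumption to verify.
cat > 000000581317.xml <<'EOF'
<annotation>
  <folder>VOC</folder>
  <filename>000000581317.jpg</filename>
  <size>
    <width>640</width>
    <height>480</height>
    <depth>3</depth>
  </size>
  <object>
    <name>person</name>
    <difficult>0</difficult>
    <bndbox>
      <xmin>48</xmin>
      <ymin>240</ymin>
      <xmax>195</xmax>
      <ymax>371</ymax>
    </bndbox>
  </object>
</annotation>
EOF
```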

 

Then I executed calibration tool:

calibration_tool -m "ssd_mobilenet_v1_coco_2018_01_28/tomek_ssd_support/model.xml" -i COCO2VOC/VOC -subset 32

[ INFO ] InferenceEngine:
        API version ............ 1.6
        Build .................. custom_releases/2019/R1_c9b66a26e4d65bb986bb740e73f58c6e9e84c7c2
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin

        API version ............ 1.6
        Build .................. 22443
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Collecting accuracy metric in FP32 mode to get a baseline, collecting activation statistics
[ ERROR ] Inference problem:

The validation dataset in /home/tsadowsk/tomek-model-conversion/coco-dataset/COCO2VOC/VOC is empty. Check the dataset file or folder and the labels file

It looks like some file is missing, since the directory above does contain .jpg/.xml pairs – or perhaps they should be organized differently.
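One thing worth trying is to mirror the layout of the official VOCtest_06-Nov-2007.tar instead of keeping flat .jpg/.xml pairs. The sketch below rearranges the pairs into the standard VOCdevkit directories; whether the calibration tool requires exactly this layout is an assumption, and the sample files created at the top are only there to make the sketch self-contained:

```shell
# Sample pairs so the sketch runs as-is; in practice, start from the
# COCO2VOC/VOC directory instead.
touch 000000581317.jpg 000000581357.jpg
echo '<annotation/>' > 000000581317.xml
echo '<annotation/>' > 000000581357.xml

# Rearrange flat image/annotation pairs into the standard VOCdevkit
# layout: Annotations/, JPEGImages/, and an ImageSets/Main/test.txt
# listing the base names.
mkdir -p VOCdevkit/VOC2007/Annotations \
         VOCdevkit/VOC2007/JPEGImages \
         VOCdevkit/VOC2007/ImageSets/Main
for img in *.jpg; do
  base="${img%.jpg}"
  if [ -f "$base.xml" ]; then
    mv "$img" VOCdevkit/VOC2007/JPEGImages/
    mv "$base.xml" VOCdevkit/VOC2007/Annotations/
    echo "$base" >> VOCdevkit/VOC2007/ImageSets/Main/test.txt
  else
    echo "missing annotation for $img" >&2   # unpaired image – flag it
  fi
done
```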

I also looked at the config file in https://software.intel.com/en-us/forums/computer-vision/topic/807243 but that appears to cover conversion from TensorFlow FP32 into OpenVINO INT8, and it does not reference the COCO dataset.

 

Could you help me convert the model from OpenVINO FP32 into INT8 format?

Thanks in advance.

Shubha_R_Intel
Employee

Dear Tomasz S,

We definitely support INT8 precision, but in order to use it you must convert your model to INT8 using the calibration tool. Before using the calibration tool, however, you must generate a *.json and a *.pickle using convert_annotation.py, for which you must know in advance which dataset you want to use. Please read here for additional information on annotation converters:
https://github.com/opencv/dldt/blob/2019/tools/accuracy_checker/accuracy_checker/annotation_converte...

The steps below don't exactly pertain to ssd_mobilenet_v1_coco, but hopefully you can extrapolate the steps for ssd_mobilenet_v1_coco from the hints below.

So the steps to do that are:

1) https://github.com/opencv/dldt/blob/2019/inference-engine/tools/accuracy_checker_tool/convert_annota...

2) https://github.com/opencv/dldt/blob/2019/inference-engine/tools/calibration_tool/calibrate.py

3) https://github.com/opencv/dldt/blob/2019/inference-engine/tools/accuracy_checker_tool/accuracy_check...

So in summary, step 1) creates a *.json and a *.pickle file, which are consumed by step 2) in the form of a "definitions.yml" file; if everything works, out plops an INT8 IR. Then in step 3) you check the accuracy of the INT8 IR created by step 2). Many model flavors are definitely supported.

You can get some sample configuration (*.yml) files here:
https://software.intel.com/en-us/forums/computer-vision/topic/807243

I know it's a lot of information. But the basic idea is that until you use our tools to generate proper INT8 IR, INT8 precision cannot be supported in OpenVino.

Here are some sample commands which you can use as guidance:

python convert_annotation.py imagenet --annotation_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/val.txt --labels_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/synset_words.txt -ss 2000 -o ~/annotations -a imagenet_calibration.pickle -m imagenet_calibration.json
(to create *.json and *.pickle for step 2)

python calibrate.py --config ~/inception_v4.yml --definition ~/definitions.yml -M /home/bob/intel/openvino/deployment_tools/model_optimizer --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models --source /media/user/icv_externalN/omz-validation-datasets --annotations ~/annotations --cfc

python convert_annotation.py imagenet --annotation_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/val.txt --labels_file /media/user/icv_externalN/omz-validation-datasets/ImageNet/synset_words.txt -o ~/annotations -a imagenet.pickle -m imagenet.json
(do it again before you run the accuracy check in step 3) – this creates a new *.json and *.pickle).

python accuracy_check.py --config ~/inception_v4.yml -d ~/definitions.yml -M /home/bob/intel/openvino/deployment_tools/model_optimizer --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models --source /media/user/icv_externalN/omz-validation-datasets --annotations ~/annotations -tf dlsdk -td CPU

Hope it helps. Post here if you are confused or need more help.

Thanks,

Shubha

Tomasz_S_Intel
Employee

Hi Shubha,

Many thanks for your reply.

Would it be possible to send me some examples for object detection (the examples above concern classification on ImageNet)?

I mean sample command lines for convert_annotation.py, calibrate.py, and accuracy_check.py.

Thanks,

Tomek

Shubha_R_Intel
Employee

Dear Tomasz S,

Unfortunately I don't have that right now (nor do I have the bandwidth to generate such samples). But you can start with the annotation converters and find your dataset (for object detection). The command lines really shouldn't look much different between classification and object detection.
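To illustrate "shouldn't look much different": a detection-flavored convert_annotation call might look roughly like the sketch below. The voc_detection converter name comes from the annotation-converters page linked earlier, but every flag and path here is an assumption to verify against the converter README for your release:

```shell
# Hypothetical object-detection variant of the imagenet command above.
# The voc_detection converter name and its flags are assumptions –
# check the annotation converters README before running.
python convert_annotation.py voc_detection \
    --imageset_file   ~/coco-dataset/COCO2VOC/VOC/ImageSets/Main/test.txt \
    --annotations_dir ~/coco-dataset/COCO2VOC/VOC/Annotations \
    --images_dir      ~/coco-dataset/COCO2VOC/VOC/JPEGImages \
    -o ~/annotations -a voc_detection.pickle -m voc_detection.json
```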

Shubha

 
