Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Calibrate MobileNetV2 (FP32 → INT8)

ChunShen_S_Intel
Employee

Currently I am trying to calibrate an SSD COCO MobileNet V2 model in IR format from FP32 to INT8.

What I currently have:

1. mobilenetV2.xml

2. mobilenetV2.bin

3. mobilenetV2.mapping

4. COCO Dataset

- Directory containing (.xml) files with annotations for images

- Label Map Classes (.txt) in VOC format

- Images dataset (.jpg)

According to https://docs.openvinotoolkit.org/latest/_inference_engine_tools_calibration_tool_README.html , step 1 is to convert the dataset annotation file, which should create a .json and a .pickle file.

My command line: python3 convert_annotation.py mscoco_detection --annotation_file /home/cvalgo/Downloads/ChunShen/COCOdevkit/COCO2017/json/instances_val2017.json -o /opt/intel/openvino/deployment_tools/tools/accuracy_checker_tool/COCO_convertANNO/ -a coco

The output I am getting: (i) a file named coco that cannot be opened; I am not sure what kind of file this is

(ii) mscoco_detection.json

{"label_map": {"0": "person", "1": "bicycle", "2": "car", "3": "motorcycle", "4": "airplane", "5": "bus", "6": "train", "7": "truck", "8": "boat", "9": "traffic light", "10": "fire hydrant", "11": "stop sign", "12": "parking meter", "13": "bench", "14": "bird", "15": "cat", "16": "dog", "17": "horse", "18": "sheep", "19": "cow", "20": "elephant", "21": "bear", "22": "zebra", "23": "giraffe", "24": "backpack", "25": "umbrella", "26": "handbag", "27": "tie", "28": "suitcase", "29": "frisbee", "30": "skis", "31": "snowboard", "32": "sports ball", "33": "kite", "34": "baseball bat", "35": "baseball glove", "36": "skateboard", "37": "surfboard", "38": "tennis racket", "39": "bottle", "40": "wine glass", "41": "cup", "42": "fork", "43": "knife", "44": "spoon", "45": "bowl", "46": "banana", "47": "apple", "48": "sandwich", "49": "orange", "50": "broccoli", "51": "carrot", "52": "hot dog", "53": "pizza", "54": "donut", "55": "cake", "56": "chair", "57": "couch", "58": "potted plant", "59": "bed", "60": "dining table", "61": "toilet", "62": "tv", "63": "laptop", "64": "mouse", "65": "remote", "66": "keyboard", "67": "cell phone", "68": "microwave", "69": "oven", "70": "toaster", "71": "sink", "72": "refrigerator", "73": "book", "74": "clock", "75": "vase", "76": "scissors", "77": "teddy bear", "78": "hair drier", "79": "toothbrush"}}

Question 1: Is this output correct? We are expecting a .pickle file, aren't we?
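(As an aside on the label_map above: the ids are zero-based and contiguous, unlike the raw COCO category ids, which run from 1 to 90 with gaps. A small self-contained sketch of how one could sanity-check such output; the sample string here is a toy excerpt, not the real 80-class file:)

```python
import json

# Toy excerpt of the converter's output; the real mscoco_detection.json
# above has 80 classes ("person" through "toothbrush").
sample = '{"label_map": {"0": "person", "1": "bicycle", "2": "car"}}'

def validate_label_map(text):
    """Return the class count if ids are contiguous and zero-based."""
    label_map = json.loads(text)["label_map"]
    ids = sorted(int(k) for k in label_map)
    assert ids == list(range(len(ids))), "class ids must be 0..N-1"
    return len(ids)

print(validate_label_map(sample))  # prints 3 for this excerpt
```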

Step 2: Calibration

python calibrate.py --config ~/inception_v1.yml --definition ~/defenitions.yml -M /home/user/intel/openvino/deployment_tools/model_optimizer --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models --source /media/user/calibration/datasets --annotations ~/annotations

Question 2: How can I get the config file (.yml) for mobilenetv2? Can the defenitions.yml file inside OpenVINO be used for any model? What is tf_custom_op_config? Is the input parameter for --annotations the mscoco_detection.json I got in step 1?

Can someone help me out? Thanks in advance!

Shubha_R_Intel
Employee

Dear Sea, Chun Shen,

Our online documentation for the Python Calibration Tool is not great. I wrote a detailed answer in dldt GitHub issue 171. It doesn't deal with the COCO dataset; the commands there are for ImageNet instead, but the command lines should be similar.

Please report back should you have issues -

Thanks,

Shubha

ChunShen_S_Intel
Employee

Yes, I read through that in another forum post, but I am still stuck at calibrate.py for the few reasons below:

1. --config : How can I get a local configuration (.yml) for mobilenetV2 when one is not available inside OpenVINO, unlike for mobilenetV1?

2. --definition : I assume the definition.yml inside OpenVINO can be shared among all models. Is my assumption correct?

3. --models : Why is this input expecting a (.pb) file, and why is the output a (.xml) in FP32? calibrate.py seems to be doing the same thing as Model Optimizer. I thought calibrate.py should take a (.xml) in FP32 and output a (.xml) in INT8?

Shubha_R_Intel
Employee

Dear Sea, Chun Shen,

Here are answers to your questions:

1) We don't have a config file for mobilenetV2, unfortunately. You will have to create one.

2) Yes, you can use the same definition.yml for all models. Notice that definition.yml is nothing but dataset definitions; it contains no model-specific information.

3) This is a good question. calibrate.py does much the same thing as Model Optimizer, except that layers supported for INT8 are converted to INT8 rather than FP32. So if you are still seeing layers in FP32 after calibrate.py executes successfully, those could be unsupported layers. Not all layers are convertible to INT8.
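Regarding 1), a starting point can usually be adapted from the SSD-style configs bundled with the Accuracy Checker. The sketch below is only an assumption-based outline (the model name, adapter, dataset name, and paths are guesses to be adjusted for your setup), not a verified config:

```yaml
models:
  - name: mobilenetv2_ssd_coco
    launchers:
      - framework: dlsdk            # Inference Engine launcher
        model: /path/to/mobilenetV2.xml
        weights: /path/to/mobilenetV2.bin
        adapter: ssd                # SSD-style detection output
    datasets:
      - name: ms_coco_detection_80_classes  # must match an entry in definitions.yml
        data_source: /path/to/COCO/val2017
        annotation: mscoco_detection.json   # produced by convert_annotation.py
```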

Please read the IE INT8 document for details. Most likely, if you're still seeing FP32, it's because those layers are not supported for INT8 inference.
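To make 3) a bit more concrete: conceptually, calibration runs the model over sample data to collect activation statistics, then derives a scale that maps float values into the int8 range. A toy illustration of symmetric max-abs quantization in pure Python (an illustrative sketch, not OpenVINO's actual algorithm):

```python
def calibrate_scale(activations):
    """Pick a scale from observed activation statistics (max-abs rule)."""
    return max(abs(a) for a in activations) / 127.0

def quantize_int8(x, scale):
    """Map a float to the int8 range using the calibrated scale."""
    return max(-128, min(127, round(x / scale)))

acts = [-1.5, 0.2, 0.9, 1.27]        # pretend activation samples
scale = calibrate_scale(acts)        # 1.5 / 127
q = [quantize_int8(a, scale) for a in acts]
deq = [v * scale for v in q]         # dequantized approximation of acts
```

Each dequantized value differs from the original by at most half a quantization step, which is why a well-chosen per-layer scale keeps the accuracy drop small.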

Hope it helps,

Thanks,

Shubha
