Beginner

INT 8 Quantization (Fill input 'input.1' with random values (image is expected))

Hi,

I am trying to convert an FP32 model to an INT8 model, but I am getting the following error:

openvino.tools.calibration INFO: Accuracy checker meta data: '{'calculate_mean': False, 'scale': 1, 'names': ['mean', 'std'], 'postfix': ' '}'
[ INFO ] Network input 'input.1' precision FP32, dimensions (NCHW): 1 3 256 256
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'input.1' with random values (image is expected)
openvino.tools.calibration INFO: Was not achieved: original network accuracy: 0.2401  (latency: 20.8 ms) VS INT8 accuracy: 0.2674  (latency 87.9054 ms), accuracy drop 11.3636%
openvino.tools.calibration INFO: Required threshold of accuracy drop cannot be achieved with any INT8 quantization. Minimal accuracy drop: 4.545%

Could you please help me solve this error?

Thanks,

Suchithra

4 Replies
Moderator

Hi Suchithra,

First, please make sure that your command is complete and accurate, per the Calibration Tool README: https://docs.openvinotoolkit.org/latest/_inference_engine_tools_calibration_tool_README.html

Command line example:

python calibrate.py --config ~/inception_v1.yml --definition ~/defenitions.yml \
    -M /home/user/intel/openvino/deployment_tools/model_optimizer \
    --tf_custom_op_config_dir ~/tf_custom_op_configs --models ~/models \
    --source /media/user/calibration/datasets --annotations ~/annotations
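
For context, the file passed via --config is an Accuracy Checker configuration. A minimal sketch is below; every name, path, preprocessing step, and metric here is a placeholder I chose for illustration (not taken from this thread), so adapt it to your model and dataset:

```yaml
# Hypothetical minimal Accuracy Checker config -- all paths and names are placeholders.
models:
  - name: my_model
    launchers:
      - framework: dlsdk              # Inference Engine launcher
        model: ~/models/model.xml     # IR files produced by the Model Optimizer
        weights: ~/models/model.bin
        adapter: classification
    datasets:
      - name: my_dataset
        data_source: ~/datasets/images
        annotation: ~/annotations/my_annotation.pickle
        preprocessing:
          - type: resize
            size: 256                 # matches the 1x3x256x256 input in the log above
        metrics:
          - type: accuracy
            top_k: 1
```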

If you still get an error, please provide us with the full error listing along with the command you used.

Best regards, Max.

Beginner

Hi Max,

Thanks for your reply.

Please find attached the command line I used and the full error listing.

Best Regards,

Suchithra

Moderator

Hi Suchithra.

I'd like to know whether you have already generated the *.json and *.pickle annotation files using the convert_annotation.py tool, per https://docs.openvinotoolkit.org/2019_R3.1/_inference_engine_tools_calibration_tool_README.html
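
For reference, a sketch of such a conversion command; the converter name (imagenet) and all file paths here are hypothetical examples, and the converter-specific arguments depend on your dataset, so please double-check them against the Accuracy Checker documentation:

```
python convert_annotation.py imagenet --annotation_file ~/val.txt \
    -o ~/annotations -a my_annotation.pickle -m my_annotation.json
```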

Looking at the output, it seems like calibrate.py went into an infinite loop. Is that correct? Or did you see any progress after letting it run for a while?

In the output I see a reference to the efficientnet_b4_eighth_level.xml model, so I want to check whether you are using one of the supported TensorFlow models (.pb) listed here https://docs.openvinotoolkit.org/2019_R3.1/_docs_MO_DG_prepare_model_convert_model_Convert_Model_Fro... or a different one.
Please also see the list of validated models for INT8 quantization: https://docs.openvinotoolkit.org/2019_R3.1/_docs_IE_DG_Int8Inference.html

Would you please also specify the dataset you used? Is it one of the common datasets mentioned here: https://docs.openvinotoolkit.org/2019_R3.1/_docs_Workbench_DG_Download_and_Cut_Datasets.html

Also, could you please try running calibrate.py in simplified mode (-sm) on the IR .xml model, as described in the first link above?
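
As a sketch, the simplified-mode invocation should look roughly like this (the model path and device are placeholders based on your earlier output; please verify the exact flags against the README linked above):

```
python calibrate.py -sm -m ~/models/efficientnet_b4_eighth_level.xml \
    -s /media/user/calibration/datasets -td CPU
```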

 

Thank you.
Best regards, Max.

Moderator

Dear Suchithra,

Please also take the opportunity to try the completely new calibration tool in the Post-Training Optimization Toolkit, part of the latest OpenVINO toolkit 2020.1 build: http://docs.openvinotoolkit.org/latest/_README.html

You can download it here: https://software.intel.com/en-us/openvino-toolkit/choose-download

Best regards, Max.
