Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Quantization 8bit for yolov4

Kartikeya
Beginner
3,258 Views

Hi,

I am trying to convert an FP32 YOLO model (trained on custom classes) into an INT8 low-precision quantized model. However, after conversion I cannot see any bounding boxes during inference (unlike with FP32/FP16), even though the .xml and .bin files are generated. I have tried both the Default and AccuracyAware quantization algorithms. I am able to perform the conversion through the command-line tools, but DL Workbench does not convert to an INT8 IR either.
How can I tackle this problem?

0 Kudos
3 Replies
Sebastian_M_Intel
Moderator
3,236 Views

Hello Kartikeya, 

 

Thank you for posting on the Intel® communities.  

 

Based on what you are reporting, this seems to be related to the OpenVINO™ toolkit. We will move this thread to the proper sub-forum for better assistance; kindly wait for a response.

 

Regards, 

 

Sebastian M  

Intel Customer Support Technician  


0 Kudos
IntelSupport
Community Manager
3,205 Views

Hi,

Thanks for reaching out.

 

YOLOv4 is not validated with OpenVINO™ toolkit; only YOLOv1, YOLOv2, and YOLOv3 are validated. The information is available on the following page:

https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html#yolov3-to-ir

 

The following page contains the list of topologies that have been validated for the 8-bit inference feature:

https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_Int8Inference.html#low_precision_8_bit_integer_inference_workflow

 

There are three YOLO models in that list (TensorFlow YOLOv3, Caffe YOLOv1 Tiny, and Caffe YOLOv3).

I suggest you use one of these models for post-training quantization.
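For reference, command-line post-training quantization with the Post-training Optimization Tool (POT) is driven by a JSON configuration. A minimal sketch for a YOLOv3 IR is shown below; the model, weights, and Accuracy Checker paths are placeholders you must fill in, and the parameter values are illustrative assumptions, not validated settings:

```json
{
  "model": {
    "model_name": "yolo_v3",
    "model": "<MODEL_PATH>/yolo_v3.xml",
    "weights": "<MODEL_PATH>/yolo_v3.bin"
  },
  "engine": {
    "config": "<CONFIG_PATH>/yolo_v3_accuracy_checker.yml"
  },
  "compression": {
    "target_device": "CPU",
    "algorithms": [
      {
        "name": "DefaultQuantization",
        "params": {
          "preset": "performance",
          "stat_subset_size": 300
        }
      }
    ]
  }
}
```

The tool is then invoked as `pot -c <config>.json`, and the quantized IR is written to the results directory. Switching `"name"` to `"AccuracyAwareQuantization"` enables the accuracy-aware algorithm instead.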

  

Here is an additional reference regarding the workflow of converting a model from FP32 to INT8:

https://docs.openvinotoolkit.org/latest/workbench_docs_Workbench_DG_Int_8_Quantization.html

 

Regards,

Aznie


0 Kudos
IntelSupport
Community Manager
3,166 Views

Hi Kartikeya,


This thread will no longer be monitored since we have provided a solution. If you need any additional information from Intel, please submit a new question.


Regards,

Aznie


0 Kudos
Reply