Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

The difference between openvino results and quantized openvino results in Real-ESRGAN

jchin
Employee

I converted the Real-ESRGAN model to OpenVINO (2021.4) IR and ran it. The result is not perfect, but it runs, as shown below:

[attachment: jchin_0-1658218645508.png]

The left picture is the original image upscaled with bicubic interpolation; the right is the result of running Real-ESRGAN with OpenVINO. The IR file is IRv10, converted from ONNX.

But after I used POT (Post-Training Optimization Tool) to quantize the IR file from FP32 to INT8, the result is totally wrong; there are several negative float numbers in the output.

 

[attachment: jchin_1-1658219274592.png]

Could anyone tell me why the POT-quantized model runs so differently from what is expected? Thanks a lot.

 

Here is my inference processing code
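The original code attachment is not reproduced here; a minimal sketch of the pre- and post-processing typically used around Real-ESRGAN inference with OpenVINO is below. All function names are illustrative. Note that the post-processing step clips the raw output to [0, 1] before converting to uint8, since the network (especially a quantized one) can emit values slightly outside that range:

```python
import numpy as np

def preprocess(bgr: np.ndarray) -> np.ndarray:
    """HWC uint8 image -> NCHW float32 in [0, 1], the layout Real-ESRGAN expects."""
    x = bgr.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension -> NCHW

def postprocess(y: np.ndarray) -> np.ndarray:
    """NCHW float32 model output -> HWC uint8 image.

    Clipping is essential: raw network outputs can contain negative values
    or values above 1, and casting those directly to uint8 wraps around and
    corrupts the image.
    """
    img = np.squeeze(y, axis=0)          # drop batch dimension -> CHW
    img = np.clip(img, 0.0, 1.0)         # remove negative / out-of-range values
    img = np.transpose(img, (1, 2, 0))   # CHW -> HWC
    return (img * 255.0).round().astype(np.uint8)
```

In OpenVINO 2021.4 the inference call between these two steps would go through the Inference Engine Python API (`IECore.read_network`, `load_network`, then `exec_net.infer`), with `preprocess` feeding the input blob and `postprocess` applied to the output blob.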

The IR files are here

https://drive.google.com/file/d/1CL8cHWEC78W6-Vbu3bAfmE1wpxhPJMBY/view?usp=sharing

https://drive.google.com/file/d/1J3OQyEJdMlRHi2z4bBhNi4WfI1PD3UII/view?usp=sharing

 

Iffa_Intel
Moderator

Greetings,

 

Compressing a full-precision model into a smaller format has an inherent drawback: inference becomes faster, but the trade-off is accuracy.

 

If your use case requires high accuracy, such as clinical results, shrinking the model is not recommended, since you would have to accept less accurate predictions. On the other hand, if your use case requires fast results and the predictions do not need to be highly accurate, then a smaller format is suitable.
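As a middle ground between full INT8 quantization and keeping the model in FP32, POT also provides an AccuracyAwareQuantization algorithm, which reverts individual layers to FP32 when quantizing them exceeds a specified accuracy drop. A sketch of a POT JSON configuration is below; the model file names and the accuracy-checker config path are illustrative and assume an accuracy metric has been defined for the model:

```json
{
    "model": {
        "model_name": "real-esrgan",
        "model": "real-esrgan.xml",
        "weights": "real-esrgan.bin"
    },
    "engine": {
        "config": "./accuracy_checker_config.yaml"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "AccuracyAwareQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300,
                    "maximal_drop": 0.01
                }
            }
        ]
    }
}
```

Here `maximal_drop` bounds the acceptable accuracy degradation relative to the FP32 baseline.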

 

This might help you to understand better.

 

Sincerely,

Iffa

 

Iffa_Intel
Moderator

Greetings,


Intel will no longer monitor this thread since we have provided a solution. If you need any additional information from Intel, please submit a new question.



Sincerely,

Iffa

