Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

openvino lower precision

rongrong__wang
Beginner
339 Views

https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Int8Inference.html

In this document, I see the sentence:

This means that 8-bit inference can only be performed with the CPU plugin on the layers listed above. All other layers are executed in the format supported by the CPU plugin: 32-bit floating point format (fp32).

So, should I use the fp32 data format if I use the GPU, or for layers that are not in the list?
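The per-layer fallback the quoted documentation describes can be sketched as a simple rule: layer types the CPU plugin supports in 8-bit run as int8, and everything else falls back to fp32. This is a minimal illustration only; the layer-type names and the supported set below are hypothetical placeholders, not OpenVINO's actual list or API.

```python
# Hypothetical sketch of the fallback rule from the INT8 inference docs.
# The supported set and layer names are illustrative placeholders.
INT8_SUPPORTED_TYPES = {"Convolution", "FullyConnected", "ReLU", "Pooling"}

def choose_precision(layer_type):
    """Return the precision a layer would execute in under this fallback rule."""
    return "int8" if layer_type in INT8_SUPPORTED_TYPES else "fp32"

# A toy network: the unsupported layer type falls back to fp32.
network = ["Convolution", "ReLU", "Softmax", "FullyConnected"]
for layer in network:
    print(layer, "->", choose_precision(layer))
```

In the real toolkit this decision is made internally by the plugin; the sketch only mirrors the documented behavior that unsupported layers execute in 32-bit floating point.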

Thank you!

2 Replies
Shubha_R_Intel
Employee

Dear rongrong__wang,

INT8 inference is supported by the GPU plugin, but the performance is not good on our current GPUs, so I would advise against using it.

Thanks,

Shubha

 

rongrong__wang
Beginner

Shubha R. (Intel) wrote:

INT8 inference is supported by GPU but the performance is not good on our current GPUs so I would advise against using it.
Thank you very much! I tried it, and it really is as you described.
