Intel® Distribution of OpenVINO™ Toolkit

OpenVINO CPU FP32 & CPU FP16 Performance

Anjaneya_Srujit_Ram

Hi,

I have downloaded the latest release of OpenVINO (2019 R2). I am running the "classification_sample" Python sample from the 2019.1.1 release in the 2019 R2 environment, with both FP32 and FP16 precisions on the CPU.

I am observing the same throughput with the FP16 and FP32 formats on the CPU. Are any modifications required to improve performance when running the model with FP16 precision?
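For reference, the comparison boils down to something like the sketch below, written against the 2019 R2 Inference Engine Python API. The IR paths, the random input, and the iteration count are placeholders only; the actual classification_sample also parses arguments, loads an image, and prints the top classes.

# Rough timing sketch, OpenVINO 2019 R2 Python API. IR paths and input are placeholders.
import time
import numpy as np
from openvino.inference_engine import IECore, IENetwork

def measure_fps(xml_path, bin_path, n_iter=100):
    ie = IECore()
    net = IENetwork(model=xml_path, weights=bin_path)         # read the IR
    input_blob = next(iter(net.inputs))
    exec_net = ie.load_network(network=net, device_name="CPU")
    data = np.random.rand(*net.inputs[input_blob].shape).astype(np.float32)
    start = time.time()
    for _ in range(n_iter):
        exec_net.infer(inputs={input_blob: data})             # synchronous inference
    return n_iter / (time.time() - start)                     # frames per second

print("FP32 FPS:", measure_fps("model_fp32.xml", "model_fp32.bin"))   # placeholder paths
print("FP16 FPS:", measure_fps("model_fp16.xml", "model_fp16.bin"))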

 

Thank you.

Shubha_R_Intel
Employee

Dear Ramachandruni, Anjaneya Srujit,

It's not advisable to draw conclusions based on one sample and one model. Whether FP16 on CPU makes a big difference depends on several factors, one of which is the model itself. For instance, heavily pipelined models (say, classification followed by object detection) would see a greater performance gain than a simple model would. I encourage you to run some experiments with the benchmark_app and learn about the different performance knobs available to you. The OpenVINO Performance Topics document should be valuable to you as well.
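For example, two of those knobs on the CPU are the throughput streams setting and keeping several asynchronous infer requests in flight. The sketch below (2019 R2 Python API, with placeholder IR path and request count) is an illustration only, not a tuned configuration:

# Sketch of throughput-oriented CPU inference, OpenVINO 2019 R2 Python API.
# IR path, request count, and input are placeholders.
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
# Let the CPU plugin choose a stream count suited to this machine.
ie.set_config({"CPU_THROUGHPUT_STREAMS": "CPU_THROUGHPUT_AUTO"}, "CPU")

net = IENetwork(model="model_fp16.xml", weights="model_fp16.bin")     # placeholder IR
input_blob = next(iter(net.inputs))
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=4)

data = np.random.rand(*net.inputs[input_blob].shape).astype(np.float32)

# Keep several asynchronous requests in flight instead of one blocking call.
for request in exec_net.requests:
    request.async_infer({input_blob: data})
for request in exec_net.requests:
    request.wait(-1)    # -1 blocks until that request completes

benchmark_app exposes the same kind of knobs from the command line, so it is usually the quicker way to sweep them before changing any code.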

Thanks!

Shubha

 
