Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Devices (CPU, GPU etc) characteristics versus Inference accuracy on ML models

RobsonEsteves
Novice

Hi all,

A question I would like to share with you all: do you think that hardware characteristics such as CPU, GPU, clock speed, or amount of RAM can affect (increase or decrease) the ACCURACY of an inference? Or would they only influence the PERFORMANCE of the inference process itself (FPS, latency, etc.), while keeping the same level of accuracy defined by the original AI model?

Thanks.

Robson

 

3 Replies
Iffa_Intel
Moderator

Hi,


Your understanding is correct: accuracy depends on the neural network model you use, especially its numerical precision.

FP32 generally yields better accuracy than FP16, whose representation size is halved.

This is why choosing the right precision for your use case is important.


For example, in a scenario where fast inference is required and the result does not need to be highly accurate (say, an IoT parking use case), FP16 would be a good choice. Meanwhile, in use cases that require precise results, such as medical applications, accuracy is critical, so FP32 would be the better choice. However, you will need to accept longer inference times.
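To illustrate the trade-off described above, here is a minimal NumPy sketch (not using the OpenVINO API; the specific values are made up for the demo) of why FP16 loses information relative to FP32: it carries fewer significand bits and a much smaller representable range.

```python
import numpy as np

# A "weight" value that FP32 can represent more finely than FP16.
# FP16 keeps roughly 3 decimal digits of precision vs. ~7 for FP32.
w32 = np.float32(0.1234567)
w16 = w32.astype(np.float16)

err = abs(float(w32) - float(w16))
print(f"FP32 value: {float(w32):.7f}")
print(f"FP16 value: {float(w16):.7f}")
print(f"rounding error from the FP32->FP16 cast: {err:.2e}")

# FP16 also has a far smaller range: its maximum finite value is ~65504,
# so a value like 70000 overflows to infinity when cast down.
big = np.float32(70000.0)
print("overflows in FP16:", np.isinf(big.astype(np.float16)))
```

Accumulated over millions of weights and activations, these small rounding errors are what can shift a model's accuracy, independently of which CPU or GPU runs the inference.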


Cordially,

Iffa


RobsonEsteves
Novice
Iffa_Intel
Moderator

Glad that helped!


Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.



Cordially,

Iffa

