Hi all,
A question I would like to share with you all: can hardware characteristics such as CPU, GPU, clock speed, or amount of RAM affect (increase or decrease) the ACCURACY of inference? Or do they only influence the PERFORMANCE of the inference process itself (FPS, latency, etc.), while keeping the same level of accuracy defined by the original AI model?
Thanks.
Robson
Hi,
Your understanding is correct: the hardware only affects performance metrics such as FPS and latency. Accuracy depends on the neural network model that you use, especially its numerical precision.
FP32 generally gives better accuracy than FP16, whose values are stored in half the size.
This is why choosing the right precision for your use case is important.
For example, in a scenario where fast inference is required (perhaps an IoT parking use case) and the results do not need to be highly accurate, FP16 would be a good choice. Meanwhile, in use cases that require precise and reliable results, such as medical applications, accuracy is critical, and FP32 would therefore be the better choice. However, you then need to accept a longer inference time.
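As a quick illustration (my own sketch, not part of the original reply), the NumPy snippet below measures the rounding error introduced when FP32 values are stored as FP16. This loss of precision in the weights and activations, rather than the hardware itself, is what can lower a converted model's accuracy:

```python
import numpy as np

# Hypothetical illustration (not from the original thread): casting FP32
# values to FP16 introduces rounding error. This per-weight error, not the
# hardware running the model, is what can reduce a converted model's accuracy.
rng = np.random.default_rng(0)
weights_fp32 = rng.standard_normal(1_000_000).astype(np.float32)

# Convert to half precision, then back, to measure what FP16 storage loses.
weights_fp16 = weights_fp32.astype(np.float16)
roundtrip = weights_fp16.astype(np.float32)

abs_err = np.abs(weights_fp32 - roundtrip)
print(f"max abs error:  {abs_err.max():.3e}")
print(f"mean abs error: {abs_err.mean():.3e}")
```

Half precision carries roughly three decimal digits of significand, so each stored value can be off by a few parts in ten thousand. Whether that is acceptable depends on the use case, as described above.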
Cordially,
Iffa
Glad that helps!
Intel will no longer monitor this thread since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Cordially,
Iffa