I prepared an INT8 model and tested it on a new CPU. Inference got faster, which is fine. But when I ran the same model on a somewhat older CPU, inference did not speed up. How can I check whether a CPU supports INT8 models?
Hi Timosy,
Thanks for reaching out.
Slower inference on an older CPU is expected; performance depends on the hardware configuration and the layers in the model. Refer to the Intel® Distribution of OpenVINO™ toolkit Benchmark Results for inference performance on specific hardware configurations.
Apart from that, INT8 is supported by the CPU plugin. Check out the Supported Devices, Supported Model Formats, and Supported Layers documentation for more details.
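One practical way to check on Linux is to look for the instruction-set flags that accelerate INT8 inference (VNNI and AMX) in /proc/cpuinfo. This is a minimal sketch, not an official OpenVINO check; the flag names are the Linux /proc/cpuinfo spellings, and on a CPU without these flags INT8 models still run, just without the speedup:

```python
# Check whether the CPU advertises instruction-set flags that
# accelerate INT8 inference (VNNI / AMX). Linux-only sketch.

INT8_FLAGS = {
    "avx512_vnni",  # AVX-512 Vector Neural Network Instructions
    "avx_vnni",     # AVX-VNNI on newer client CPUs
    "amx_int8",     # Advanced Matrix Extensions, INT8 tiles
}

def int8_capable(cpu_flags: str) -> bool:
    """Return True if any INT8-accelerating flag is present.

    cpu_flags: the space-separated 'flags' line from /proc/cpuinfo.
    """
    return bool(set(cpu_flags.split()) & INT8_FLAGS)

def read_linux_cpu_flags(path: str = "/proc/cpuinfo") -> str:
    """Read the first 'flags' line from /proc/cpuinfo (Linux only)."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return line.split(":", 1)[1]
    return ""

if __name__ == "__main__":
    print("INT8-accelerated:", int8_capable(read_linux_cpu_flags()))
```

If the OpenVINO Python API is installed, you can also query the plugin directly with `ov.Core().get_property("CPU", "OPTIMIZATION_CAPABILITIES")`, which lists "INT8" when the device can run quantized models.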
Regards,
Aznie
Hi Timosy,
This thread will no longer be monitored since we have provided the requested information. If you need any additional information from Intel, please submit a new question.
Regards,
Aznie