I installed OpenVINO 2020.3 for Windows using the precompiled binaries from https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html#Install-Core-Components
Then I ran the hello_classification sample with resnet-50-tf (FP32) and measured the inference time on an Intel Atom x5-Z8350 using the MKLDNN plugin (CPU); it was roughly 800 ms.
Is it possible to improve inference performance by compiling some modules (such as the MKLDNN plugin) specifically for a certain device?
I ask because I had to recompile the MKLDNN plugin for the Intel Atom, following this issue on GitHub: https://github.com/openvinotoolkit/openvino/issues/387
If I use the MKLDNN plugin as provided by OpenVINO 2021.1, I no longer hit the GitHub issue above, but the inference time does not improve either.
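For context, a minimal sketch of how a single-inference latency like the 800 ms above can be measured with the Inference Engine C++ API of the 2020.3/2021.1 releases; the IR file names are assumptions, and the input blob is left at its defaults because only the timing pattern is being illustrated (hello_classification itself additionally loads and preprocesses an image):

```cpp
// Sketch only, not the actual hello_classification source: time one synchronous
// inference of resnet-50-tf on the CPU (MKLDNN) plugin.
#include <chrono>
#include <iostream>
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    // Read the IR produced by the Model Optimizer (file names are assumptions).
    auto network = ie.ReadNetwork("resnet-50-tf.xml", "resnet-50-tf.bin");
    // Compile the network for the CPU (MKLDNN) plugin.
    auto executable = ie.LoadNetwork(network, "CPU");
    auto request = executable.CreateInferRequest();

    // Time a single synchronous inference; input data is left at its default
    // contents here, since only the latency measurement is being shown.
    auto start = std::chrono::steady_clock::now();
    request.Infer();
    auto end = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "Inference took " << ms << " ms" << std::endl;
    return 0;
}
```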
Hi Davi,
Thank you for reaching out to us. OpenVINO provides tools that can be used to measure and improve inference performance.
a. Benchmark C++ Tool:
The Benchmark C++ Tool is designed to estimate deep learning inference performance on supported devices; an example invocation is given after this list. Please refer to the following link for more information:
https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
b. Post-training Optimization Toolkit:
The Post-training Optimization Toolkit (POT) is designed to accelerate the inference of deep learning models by applying special methods that do not require model retraining or fine-tuning, such as post-training quantization. Please refer to the following link for more information:
https://docs.openvinotoolkit.org/latest/pot_README.html
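For illustration, assuming the same IR files as in the question, the Benchmark C++ Tool could be invoked as `benchmark_app -m resnet-50-tf.xml -d CPU`, where `-m` points to the model IR and `-d` selects the target device; the tool averages latency and throughput over many iterations, which gives a more reliable figure than timing a single run.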
For more information, you can also refer to the Performance Optimization Guide in the OpenVINO documentation.
Also, please use the latest OpenVINO toolkit release, version 2021.1.
Regards,
Adli
Hi Davi,
This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.
Regards,
Adli