Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision related to Intel® platforms.

Optimize inference performance on the Intel Atom x5-Z8350

fabda01
Beginner
2,268 Views

I installed OpenVINO 2020.3 for Windows using the precompiled binaries from https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.html#Install-Core-Components

Then, I ran the hello_classification sample with resnet-50-tf (FP32) and measured the inference time on the Intel Atom x5-Z8350 using the MKLDNN (CPU) plugin; the time was roughly 800 ms.
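For reference, the measurement corresponds roughly to timing one synchronous request, as in the Python sketch below (the actual run used the C++ hello_classification sample; the model and image paths are illustrative, and the snippet assumes the 2021.1 Python API):

```python
# Rough Python equivalent of the measurement; the actual run used the C++
# hello_classification sample. Model and image paths are illustrative, and the
# snippet assumes the OpenVINO 2021.1 Inference Engine Python API.
import time
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet-50-tf.xml", weights="resnet-50-tf.bin")
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape

# Read and preprocess one image to the network's input layout (NCHW).
image = cv2.imread("cat.jpg")
image = cv2.resize(image, (w, h)).transpose((2, 0, 1))
image = np.expand_dims(image, axis=0)

exec_net = ie.load_network(network=net, device_name="CPU")

start = time.perf_counter()
result = exec_net.infer(inputs={input_blob: image})
print("Single synchronous inference: %.1f ms" % ((time.perf_counter() - start) * 1000))
```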

Is it possible to improve inference performance by compiling some modules (such as the MKLDNN plugin) specifically for a certain device?

I ask because I had to recompile the MKLDNN plugin for the Intel Atom according to this issue on GitHub: https://github.com/openvinotoolkit/openvino/issues/387

If I use the MKLDNN plugin as provided by OpenVINO 2021.1, the GitHub issue above does not occur, but the inference time does not improve either.

 


2 Replies
IntelSupport
Community Manager
2,231 Views

Hi Davi,

 

Thank you for reaching out to us. OpenVINO provides tools that can be used to measure and improve inference performance.

 

a. Benchmark C++ Tool:

The Benchmark C++ Tool is designed to estimate deep learning inference performance on supported devices. Please refer to the following link for more information:

https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
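The tool is run from the command line (the model is passed with -m and the target device with -d CPU). As a rough illustration of the asynchronous, throughput-oriented measurement it performs, here is a Python sketch (assuming the 2021.1 Inference Engine Python API; the model path, request count, and iteration count are illustrative):

```python
# Rough Python sketch of the asynchronous, throughput-oriented measurement that
# benchmark_app performs (assuming the OpenVINO 2021.1 Python API); the model
# path, number of requests, and iteration count are illustrative.
import time
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="resnet-50-tf.xml", weights="resnet-50-tf.bin")
input_blob = next(iter(net.input_info))
shape = net.input_info[input_blob].input_data.shape
dummy = np.zeros(shape, dtype=np.float32)

# Several parallel infer requests help keep the CPU busy between requests.
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=4)

iterations = 100
started = [False] * len(exec_net.requests)

start = time.perf_counter()
for i in range(iterations):
    idx = i % len(exec_net.requests)
    request = exec_net.requests[idx]
    if started[idx]:
        request.wait()  # wait until this request slot is free again
    request.async_infer(inputs={input_blob: dummy})
    started[idx] = True
for idx, request in enumerate(exec_net.requests):
    if started[idx]:
        request.wait()
elapsed = time.perf_counter() - start
print("Throughput: %.1f FPS" % (iterations / elapsed))
```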

 

b. Post-training Optimization Toolkit:

The Post-training Optimization Toolkit (POT) is designed to accelerate inference of deep learning models by applying special methods that do not require model retraining or fine-tuning, such as post-training quantization. Please refer to the following link for more information:

https://docs.openvinotoolkit.org/latest/pot_README.html
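For illustration only, the following is a hypothetical sketch of applying DefaultQuantization through the POT Python API; the module paths, the calibration loader, and the parameter values are assumptions based on the 2021.1 release rather than something from this thread, and in practice POT is usually driven by a JSON configuration file:

```python
# Hypothetical sketch of INT8 post-training quantization with the POT Python
# API shipped with OpenVINO 2021.1. All paths, the calibration loader, and the
# parameter values are illustrative assumptions, not taken from this thread.
import numpy as np
from addict import Dict
from compression.api import DataLoader
from compression.engines.ie_engine import IEEngine
from compression.graph import load_model, save_model
from compression.pipeline.initializer import create_pipeline


class CalibrationLoader(DataLoader):
    """Minimal loader that feeds preprocessed numpy arrays for calibration."""

    def __init__(self, config, images):
        super().__init__(config)
        self.images = images  # list of NCHW numpy arrays

    def __len__(self):
        return len(self.images)

    def __getitem__(self, index):
        # (annotation, image) pairs; the exact annotation format is an assumption,
        # and labels are not needed for DefaultQuantization.
        return (index, None), self.images[index]


# Replace with real preprocessed calibration images.
calibration_images = [np.zeros((1, 3, 224, 224), dtype=np.float32)]

model_config = Dict({
    "model_name": "resnet-50-tf",
    "model": "resnet-50-tf.xml",
    "weights": "resnet-50-tf.bin",
})
engine_config = Dict({"device": "CPU"})
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "CPU", "preset": "performance",
               "stat_subset_size": 300},
}]

model = load_model(model_config)
engine = IEEngine(config=engine_config,
                  data_loader=CalibrationLoader(Dict(), calibration_images),
                  metric=None)
pipeline = create_pipeline(algorithms, engine)
quantized_model = pipeline.run(model)
save_model(quantized_model, save_path="./quantized")
```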

 

For more information regarding performance optimization, you can refer to the following guide:

https://docs.openvinotoolkit.org/latest/openvino_docs_optimization_guide_dldt_optimization_guide.html

 

In addition, please use the latest OpenVINO toolkit version, 2021.1.

 

Regards,

Adli


IntelSupport
Community Manager
2,213 Views

Hi Davi,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Adli

