Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Optimize inference performance on the Intel Atom x5 z8350

fabda01
Beginner

I installed OpenVINO 2020.3 for Windows using the pre-compiled binaries from https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_windows.htm...

Then I ran the hello_classification sample with resnet-50-tf (FP32) and measured the inference time on the Intel Atom x5-Z8350 using the MKLDNN (CPU) plugin; it was roughly 800 ms.

Is it possible to improve inference performance by compiling some modules (such as the MKLDNN plugin) specifically for a certain device?

I ask because I had to recompile the MKLDNN plugin for the Intel Atom, as described in this issue on GitHub: https://github.com/openvinotoolkit/openvino/issues/387

If I use the MKLDNN plugin as provided by OpenVINO 2021.1, I no longer hit the GitHub issue above, but the inference time does not improve either.
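For reference, the source build I experimented with looks roughly like the sketch below. The flags and paths are simplified placeholders, not my exact setup; consult the OpenVINO build documentation for the authoritative steps. The script prints each command instead of running it, so it can be reviewed first.

```shell
# Hedged sketch: rebuilding OpenVINO (including the CPU/MKL-DNN plugin) from
# source so the compiler targets this exact machine. DRY_RUN=1 prints each
# command instead of executing it.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run git clone --recursive https://github.com/openvinotoolkit/openvino.git
run cmake -S openvino -B openvino/build -DCMAKE_BUILD_TYPE=Release
run cmake --build openvino/build --parallel 4  # the x5-Z8350 has 4 cores
```

Set DRY_RUN=0 to actually execute the commands once the flags match your environment.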

 

1 Solution
IntelSupport
Community Manager

Hi Davi,

 

Thank you for reaching out to us. OpenVINO provides tools that can help you measure and improve inference performance.

 

a. Benchmark C++ Tool:

Benchmark C++ Tool is designed to estimate deep learning inference performance on supported devices. Please refer to the following link for more information:

https://docs.openvinotoolkit.org/2021.1/openvino_inference_engine_samples_benchmark_app_README.html
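As a rough sketch (the model path, iteration count, and stream setting below are illustrative placeholders, not values from this thread), a benchmark_app run on the CPU device might look like this. The command is printed rather than executed so it can be reviewed first:

```shell
# Hedged sketch of a benchmark_app invocation for the Atom x5-Z8350.
# Substitute the real path to your model IR; flags are illustrative.
MODEL="resnet-50-tf.xml"
CMD="benchmark_app -m $MODEL -d CPU -api async -niter 100 -nstreams 1"
echo "$CMD"
```

benchmark_app reports latency and throughput, which gives a more reliable baseline than timing a single hello_classification run.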

 

b. Post-training Optimization Toolkit:

Post-training Optimization Toolkit (POT) is designed to accelerate the inference of deep learning models by applying special methods, such as post-training quantization, without model retraining or fine-tuning. Please refer to the following link for more information:

https://docs.openvinotoolkit.org/latest/pot_README.html
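As a hedged illustration (the file names, dataset path, and parameter values are hypothetical; check the POT documentation for the exact schema of your version), a minimal POT configuration for default INT8 quantization might look like:

```json
{
  "model": {
    "model_name": "resnet-50-tf",
    "model": "resnet-50-tf.xml",
    "weights": "resnet-50-tf.bin"
  },
  "engine": {
    "type": "simplified",
    "data_source": "./calibration_images"
  },
  "compression": {
    "algorithms": [
      {
        "name": "DefaultQuantization",
        "params": {
          "preset": "performance",
          "stat_subset_size": 300
        }
      }
    ]
  }
}
```

INT8 quantization can noticeably reduce latency on CPU targets, though the gain on an Atom-class core should be verified with benchmark_app.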

 

For more information regarding the performance optimization guide, you can refer to the following link:

https://docs.openvinotoolkit.org/latest/openvino_docs_optimization_guide_dldt_optimization_guide.htm...

 

Also, please use the latest OpenVINO toolkit version, 2021.1.

 

Regards,

Adli



2 Replies

IntelSupport
Community Manager

Hi Davi,


This thread will no longer be monitored since this issue has been resolved. If you need any additional information from Intel, please submit a new question.


Regards,

Adli

