Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

How to decrease inference time?

souravmondal13
Beginner
543 Views

Initially, I used a Keras HDF5 (.h5) file for inference, and it took around 54 s to produce results on a 5 s, 30 fps input video. Then I converted the model with the OpenVINO toolkit for optimization, but inference still takes around 50 s, so I haven't gained much. Since I am new to OpenVINO, I don't know much about optimization. Please help.
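Before optimizing the model itself, it helps to check where the 50 s actually goes, since video decoding and pre/post-processing can dominate the total. A minimal sketch (the frame count matches the 5 s / 30 fps clip; `decode_frame` and `infer` are hypothetical stand-ins for your own pipeline):

```python
# Sketch: time decoding and inference separately to find the real bottleneck.
# decode_frame() and infer() are placeholders for your actual video reader
# (e.g. OpenCV) and your model's forward pass.
import time

def decode_frame(i):
    """Stand-in for reading/decoding one video frame."""
    time.sleep(0.001)  # placeholder work
    return i

def infer(frame):
    """Stand-in for one forward pass of the network."""
    time.sleep(0.002)  # placeholder work
    return frame

decode_time = infer_time = 0.0
num_frames = 150  # 5 s of video at 30 fps

for i in range(num_frames):
    t0 = time.perf_counter()
    frame = decode_frame(i)
    t1 = time.perf_counter()
    infer(frame)
    t2 = time.perf_counter()
    decode_time += t1 - t0
    infer_time += t2 - t1

print(f"decode: {decode_time:.2f} s, inference: {infer_time:.2f} s")
```

If decoding (or any pre-processing) takes a large share of the total, converting the model will not help much on its own.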

Thanks,

Sourav

0 Kudos
3 Replies
samontab
Valued Contributor II
539 Views

It really depends on the network you're using. Maybe it's already as optimised as it can be. Here are some explanations of how OpenVINO optimises a model.

Other alternatives are: use another model, or modify your current network to make it lighter at the cost of some accuracy.

0 Kudos
souravmondal13
Beginner
533 Views

One more thing: I saw in a couple of examples that we can use CPU extensions like SSE4 and AVX2. According to the latest release, these extensions have been moved into the plugin. I don't know how to use the CPU plugin either; can you help me with this?

Thanks again,

Sourav

0 Kudos
Iffa_Intel
Moderator
520 Views

Greetings,


CPU extensions have been moved into the CPU plugin in the latest version of OpenVINO, and they are loaded automatically when the plugin itself is loaded, so you no longer need to specify an extension library yourself.
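A minimal sketch of what that looks like with the Inference Engine Python API, assuming an already-converted IR model (the `model_xml`/`weights_bin` paths are placeholders); note that there is no explicit `add_cpu_extension()` call anywhere:

```python
# Sketch: loading an IR model on the CPU plugin. Selecting device_name="CPU"
# loads the plugin, which brings its extensions with it automatically.
def load_on_cpu(model_xml, weights_bin):
    # Import kept inside the function so the sketch parses without OpenVINO installed.
    from openvino.inference_engine import IECore
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=weights_bin)
    return ie.load_network(network=net, device_name="CPU")

# Usage (placeholder paths):
# exec_net = load_on_cpu("model.xml", "model.bin")
```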


You can refer to our latest documentation here:

https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_CPU.html


Sincerely,

Iffa



0 Kudos
Reply