Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO Model Optimizer R2 version slowing down the network

K__Mike
Beginner

Hello,

I have a neural network consisting of a few basic layers - Conv2Ds, MatMuls, Normalizations, etc.

I created the .xml and .bin files with the Model Optimizer (version 2.300), and when I process the images in my test folder and get an output for each one, inference with the OpenVINO-generated model actually takes longer than running the original model without OpenVINO.
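For reference, the conversion step looks roughly like the following (a sketch only; frozen_model.pb, the input shape, and the output directory are placeholders for my actual files):

python3 mo_tf.py --input_model frozen_model.pb --input_shape [1,224,224,3] --data_type FP32 --output_dir ./ir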

Can anyone explain why? My network does not contain Dropout layers, but isn't OpenVINO still supposed to fuse the remaining layers, or at least give similar inference times, when running the .xml/.bin model generated from a regular TensorFlow file?

Best

Mike

Shubha_R_Intel
Employee

Dear Mike, 

Can you kindly try your experiments on the newly released OpenVINO 2019 R1? Several fixes and performance improvements have been made in that release.

Also, I am not sure what you mean by this:

"inference with the OpenVINO-generated model actually takes longer than running the original model without OpenVINO."

How are you getting the outputs? Are you running one of the OpenVINO samples on the generated IR?
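If it helps, here is a minimal sketch of loading an IR and timing synchronous inference with the Python Inference Engine API as it looks in the 2019 R1 timeframe (model.xml, model.bin, the device, and the iteration count are placeholders, not your actual setup):

import time
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Load the IR produced by the Model Optimizer (placeholder paths).
net = IENetwork(model="model.xml", weights="model.bin")
plugin = IEPlugin(device="CPU")
exec_net = plugin.load(network=net)

# Assume a single input; build a dummy blob matching its shape (NCHW).
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
image = np.random.rand(n, c, h, w).astype(np.float32)

# Time repeated synchronous inference requests.
iterations = 100
start = time.time()
for _ in range(iterations):
    exec_net.infer(inputs={input_blob: image})
print("Average latency: %.2f ms" % ((time.time() - start) * 1000 / iterations))

Comparing this number against your TensorFlow timing measured the same way (inference only, excluding image loading and preprocessing) would help show where the slowdown comes from.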

Please read more about our new release here:

https://software.intel.com/en-us/blogs/2019/04/02/improved-parallelization-extended-deep-learning-capabilities-in-intel-distribution

Thanks,

Shubha
