Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Difference between inference using cv2 and inference using IENetwork

Lee__Hanbeen
Beginner

Hi. I am a student who is learning how to use NCS2.

I want to use a Raspberry Pi to run the model.

I have a question about the source implementation.

All the source code I write on the Raspberry Pi is based on what I learned from this site. Here

I usually write the source in the following way.

MySourceCode
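
(For reference, since the linked code itself is not visible in this thread, the sketch below is only an illustration of the typical cv2.dnn pattern that tutorial uses; the model paths, image path, and input size are placeholder assumptions.)

```python
# Illustrative sketch of the OpenCV dnn approach (all paths/sizes are placeholders).
import cv2

# Load a Model Optimizer IR (.xml + .bin). This requires an OpenCV build that
# includes the InferenceEngine backend, e.g. the one shipped with OpenVINO.
net = cv2.dnn.readNet('model.xml', 'model.bin')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the NCS2

frame = cv2.imread('input.jpg')
blob = cv2.dnn.blobFromImage(frame, size=(64, 128))  # size is (width, height)
net.setInput(blob)
out = net.forward()
```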

However, some others have written their code in a different way, without using OpenCV.

`from openvino.inference_engine import IENetwork, IEPlugin`

They import and use the module above, and I wonder how this differs from the way OpenCV runs inference.

Is there any difference in speed?

3 Replies
Maksim_S_Intel
Employee

You are using the InferenceEngine backend in the OpenCV dnn module, but it is also possible to use the Inference Engine directly (see the sketch after the list below): http://docs.openvinotoolkit.org/latest/_ie_bridges_python_docs_api_overview.html

There are pros and cons to both methods:

  • The OpenCV dnn module lets you load networks in several popular formats (Caffe, TensorFlow, Torch, etc.), while the Inference Engine can only load networks converted by the Model Optimizer (xml+bin format). It is a convenience vs. performance trade-off.
  • On the ARM platform, the OpenCV dnn module can also run some networks that the Inference Engine cannot, because the Inference Engine has no CPU backend on this platform and therefore cannot fall back for layers that the Myriad plugin does not support.
  • OpenCV installed via pip or a system package does not include the InferenceEngine backend in the dnn module, so you will need to either use the OpenCV build provided with the OpenVINO distribution or build it yourself.
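
For illustration, a minimal sketch of the direct route on the NCS2, using the 2019-era IENetwork/IEPlugin API that matches the import in the question (model and image paths are placeholders):

```python
# Minimal sketch of using the Inference Engine Python API directly on the NCS2
# (2019-era IENetwork/IEPlugin API; model and image paths are placeholders).
import cv2
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model='model.xml', weights='model.bin')  # Model Optimizer IR
plugin = IEPlugin(device='MYRIAD')                       # NCS2 device
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape                # IR input layout is NCHW

image = cv2.imread('input.jpg')
image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1)).reshape(n, c, h, w)   # HWC -> NCHW

res = exec_net.infer(inputs={input_blob: image})
print(res[out_blob])
```

(IEPlugin was later deprecated in favor of IECore, but it is the API used in this thread.)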
Lee__Hanbeen
Beginner

Thank you for your reply.
I am using IE as a backend in OpenCV.
However, even though it is a simple CNN, it takes 4.7 seconds to infer a (128, 64) color image, and inference on a (40, 40) grayscale image only finishes after an extremely long time.
I think this is a serious performance degradation. Where can I find the cause?

Here is my question associated with the phenomenon above.

Lee__Hanbeen
Beginner

Resolved.
Instead of using IE as a backend in OpenCV, I used IE directly, and the inference time dropped from 4.7 seconds to 0.01 seconds.
But there is still a problem. Inference on (128, 64) color images is normal, while inference on the grayscale image still never finishes.

I have posted the relevant source code on my GitHub. here
It is in Korean, but you can find the source code at the bottom.
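
(For what it's worth, one common cause of this kind of behavior is an input blob whose shape does not match the IR's declared input; this is only a guess, since the linked source is not readable here. A grayscale image loaded as a 2-D (40, 40) array would need to be reshaped to the 4-D NCHW blob the IR declares, e.g. (1, 1, 40, 40), before calling infer(). A hypothetical check, with placeholder file names:)

```python
# Hypothetical sanity check for the grayscale model: reshape the 2-D image to
# the 4-D NCHW blob declared by the IR before calling infer().
# All names below (model and image paths) are placeholders.
import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model='gray_model.xml', weights='gray_model.bin')
exec_net = IEPlugin(device='MYRIAD').load(network=net)
input_blob = next(iter(net.inputs))

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # 2-D array, shape (H, W)
n, c, h, w = net.inputs[input_blob].shape             # e.g. (1, 1, 40, 40)
blob = cv2.resize(gray, (w, h)).reshape(n, c, h, w).astype(np.float32)
res = exec_net.infer(inputs={input_blob: blob})
```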

Personally, I was surprised that the performance difference was so great.

Thanks.
