Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Difference between OpenVINO inference engine and OpenCV backend/target method

KSehairi
Novice

Hello,

In script 1, I used OpenVINO to load my model and run it on the NCS2:

 

from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder paths to the IR files from the Model Optimizer
mydetector = ie.load_network(network=net, device_name="MYRIAD")  # MYRIAD targets the NCS2
input_blob = next(iter(mydetector.input_info))
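
A minimal sketch of how a single inference could then be run on the loaded network (the image path, preprocessing, and NCHW layout are illustrative assumptions, not taken from the original script):

import cv2
import numpy as np

n, c, h, w = net.input_info[input_blob].input_data.shape  # network input shape (NCHW)
frame = cv2.imread("test.jpg")  # placeholder image path
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1).reshape(n, c, h, w).astype(np.float32)
result = mydetector.infer({input_blob: blob})  # dict mapping output names to numpy arrays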

 

In script 2, I used the OpenCV DNN backend/target instructions:

 

mydetector = cv2.dnn.readNet(ArchPath, modelPath)
mydetector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # use the OpenVINO Inference Engine backend
mydetector.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)  # run on the NCS2
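
For comparison, a minimal sketch of the corresponding OpenCV DNN forward pass (the image path and input size are illustrative assumptions):

import cv2

frame = cv2.imread("test.jpg")  # placeholder image path
blob = cv2.dnn.blobFromImage(frame, size=(300, 300))  # size must match the model's expected input
mydetector.setInput(blob)
detections = mydetector.forward()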

 

My model was optimized with the OpenVINO Model Optimizer (mo) script to both FP32 and FP16.

I noticed that there is no difference in processing time between the FP32 and the FP16 model.

I know that the NCS2 supports FP16 only. So, when running the FP32 model, does the NCS2 convert it to FP16 automatically?

Does it make sense that the first method has the same processing time as the second method? If yes, why?
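
One way to compare the two timings fairly is to discard a few warm-up inferences and average many runs; a minimal sketch (the run_inference wrappers below are hypothetical, not from the original scripts):

import time

def average_latency(run_inference, warmup=5, runs=100):
    # The first inferences include one-off costs (graph compilation,
    # upload to the NCS2), so they are excluded from the measurement.
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(runs):
        run_inference()
    return (time.perf_counter() - start) / runs

# Hypothetical wrappers around the two scripts:
# average_latency(lambda: mydetector.infer({input_blob: blob}))
# average_latency(lambda: (mydetector.setInput(blob), mydetector.forward()))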

Any help is much appreciated.

 

5 Replies
Peh_Intel
Moderator

Hi KSehairi,


According to the Supported Model Formats, the VPU plugin only supports FP16 models. Some FP32 models might still be inferenced by the Intel® Neural Compute Stick 2 (NCS2), but we cannot guarantee their performance, nor that every FP32 model can be inferenced by the NCS2. Hence, it is always recommended to choose FP16 models for inference on the NCS2.

 

The OpenCV DNN module can use the OpenVINO™ Inference Engine as an accelerator backend. However, we cannot say with confidence that the processing time when using the OpenVINO™ API would be the same as when using the OpenCV DNN API.


Please note that only a subset of OpenVINO™ features is supported in OpenCV DNN. For some advanced tasks, like model quantization, you still need to use the OpenVINO™ API and tools directly.



Regards,

Peh


KSehairi
Novice

Thank you, Peh_Intel, for your reply.

Is it possible that the OpenCV DNN API (setPreferableBackend/setPreferableTarget) uses the same OpenVINO™ API to run inference on Intel platforms?

I will check further with the OpenCV DNN API.
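
One quick check is to inspect the OpenCV build information; builds compiled with OpenVINO™ support report the Inference Engine there. A minimal sketch (the exact wording depends on the OpenCV build):

import cv2

# Builds compiled with OpenVINO support list the Inference Engine in their
# build information; without it, the requested backend may fall back to
# another backend at runtime.
print(cv2.__version__)
print("Inference Engine" in cv2.getBuildInformation())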

Hairul_Intel
Moderator

Hi KSehairi,

 

As mentioned earlier, only some features of the OpenVINO™ API are supported in OpenCV DNN, which uses OpenVINO's Inference Engine as an accelerator backend.

 

The OpenVINO™ Runtime API, meanwhile, provides a common API that delivers the best possible performance on the Intel® platform of your choice.
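
As an illustration of that common API, the same IR can be loaded onto different devices just by changing the device name; a minimal sketch (model paths are placeholders, and only devices actually present on the machine can be loaded):

from openvino.inference_engine import IECore

ie = IECore()
print(ie.available_devices)  # e.g. ['CPU', 'MYRIAD'], depending on the machine

net = ie.read_network(model="model.xml", weights="model.bin")  # placeholder paths
for device in ("CPU", "MYRIAD"):  # same code, different target device
    exec_net = ie.load_network(network=net, device_name=device)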

 

 

Regards,

Hairul


KSehairi
Novice

Hi Hairul,

Thank you a lot for this clarification.

Best regards

Hairul_Intel
Moderator

Hi KSehairi,

Glad to be of help.

 

This thread will no longer be monitored since we have provided the information. If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Hairul

