Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

NCS2 using a Caffe model directly

Hnnu_liulei
Beginner
842 Views

When I use OpenCV with the NCS2, why can the NCS2 use a Caffe model directly?

Why is there no need to convert the Caffe model to the OpenVINO format?

 

# Load the Caffe model directly through OpenCV's DNN module
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
# Run inference on the Myriad VPU (NCS2)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

4 Replies
Wan_Intel
Moderator
810 Views

Hi Hnnu_liulei,

Thank you for reaching out to us!

 

For your information, I have verified your code, and I am able to run inference on the Intel® Neural Compute Stick 2 with a Caffe model using OpenCV with Intel’s Deep Learning Inference Engine (DL IE) backend.

 

Based on the development team’s response, the Inference Engine API can read only a limited set of Caffe models directly.

 

Hence, we suggest that users use the Inference Engine API with the Intermediate Representation (IR) or ONNX format to execute the model on devices.
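
For example, here is a minimal sketch of reading an IR model with the Inference Engine Python API and running it on the NCS2 (file names, input shape, and input data are placeholders):

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Read an IR model; an .onnx file can also be passed via the model argument.
net = ie.read_network(model="squeezenet1.1.xml", weights="squeezenet1.1.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # MYRIAD = NCS2

input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape
dummy_input = np.zeros(input_shape, dtype=np.float32)  # replace with a real preprocessed image
result = exec_net.infer({input_name: dummy_input})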

 

Details of Intel’s Deep Learning Inference Engine backend are available at the following page:

https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
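
For example, when OpenCV is built with the Inference Engine, the DL IE backend can be selected explicitly before choosing the Myriad target (a minimal sketch; file names are placeholders):

import cv2

net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "squeezenet1.1.caffemodel")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)  # route inference through DL IE
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)              # execute on the NCS2 (Myriad VPU)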

 

The Inference Engine Developer Guide is available at the following page:

https://docs.openvino.ai/2021.4/openvino_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html

 

 

Regards,

Wan

 

Hnnu_liulei
Beginner
800 Views

Thank you for your reply.

I also want to know whether all Caffe models are supported, and whether there is a difference in inference speed between running the Caffe model directly and converting it to an IR or ONNX model. Another question: besides Caffe, can models from other frameworks be used directly as well?

Wan_Intel
Moderator
790 Views

Hi Hnnu_liulei,

 

To answer your first question, we regret to inform you that not all Caffe models can be used directly with the Inference Engine API. As of now, the formats supported by the Inference Engine API are Intermediate Representation (IR) and ONNX.

 

To answer your second question, when we convert a model into IR format, the Model Optimizer performs several optimizations. For example, certain primitives, such as linear operations (BatchNorm and ScaleShift), are automatically fused into convolutions.

 

For your information, I have compared the inference time when running classification.py using the squeezenet1.1 model.

 

The average inference time using the IR format (47.625) is lower than the average inference time using the Caffe framework (49.2968). You may refer to the attachment below for the comparison result.
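
As a rough illustration, a timing comparison along these lines can be reproduced with OpenCV's DNN module (a minimal sketch; the file names and the 227x227 input size are assumptions for squeezenet1.1, and both model files must already be on disk):

import time

import cv2
import numpy as np

caffe_net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "squeezenet1.1.caffemodel")
ir_net = cv2.dnn.readNet("squeezenet1.1.xml", "squeezenet1.1.bin")

blob = cv2.dnn.blobFromImage(np.random.rand(227, 227, 3).astype(np.float32))

for name, net in (("Caffe", caffe_net), ("IR", ir_net)):
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
    net.setInput(blob)
    net.forward()  # warm-up run
    start = time.perf_counter()
    for _ in range(100):
        net.forward()
    print(name, (time.perf_counter() - start) / 100 * 1000, "ms per inference")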

 

Hence, we recommend converting models from supported deep learning frameworks such as Caffe, TensorFlow, Kaldi, MXNet, or ONNX into Intermediate Representation for inference with the Inference Engine.
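
For reference, a Caffe model can be converted with the Model Optimizer along these lines (a rough example; the file names and output directory are placeholders, and FP16 is the data type typically used for the MYRIAD plugin):

python mo.py --input_model squeezenet1.1.caffemodel --input_proto deploy.prototxt --data_type FP16 --output_dir ir_model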

 

Default Model Optimizer Optimizations are available at the following page:

https://docs.openvino.ai/2021.4/openvino_docs_MO_DG_Default_Model_Optimizer_Optimizations.html

 

 

Regards,

Wan

 

Wan_Intel
Moderator
746 Views

Hi Hnnu_liulei,

Thank you for your question.

 

This thread will no longer be monitored since we have provided a solution. 

If you need any additional information from Intel, please submit a new question.

 

 

Regards,

Wan

