When I use OpenCV with the NCS, why can the NCS use a Caffe model directly?
Why is it not necessary to convert the Caffe model to the OpenVINO format?
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
Thank you for reaching out to us!
For your information, I have verified your code and I’m able to run inference on the Intel® Neural Compute Stick 2 with a Caffe model using OpenCV with Intel’s Deep Learning Inference Engine (DL IE) backend.
Based on the development team’s response, the Inference Engine API can read only a limited set of Caffe models directly.
Hence, we suggest that users use the Inference Engine API to read models in Intermediate Representation (IR) or ONNX format when executing on these devices.
Details of Intel’s Deep Learning Inference Engine backend are available at the following page:
The Inference Engine Developer Guide is available at the following page:
Thank you for your reply.
I also want to know whether all Caffe models are supported, and whether there is a difference in running speed between running the Caffe model directly and converting it to an IR or ONNX model. Another question: besides Caffe, can it directly support models from other frameworks?
To answer your first question, we regret to inform you that not all Caffe models can be used directly with the Inference Engine API. At the moment, the formats supported by the Inference Engine API are Intermediate Representation (IR) and ONNX.
To answer your second question, when we convert a model into IR format, Model Optimizer performs several optimizations. For example, certain primitives, such as linear operations (BatchNorm and ScaleShift), are automatically fused into the preceding convolutions.
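A minimal NumPy sketch of that fusion, using a 1×1 convolution (which, for a single pixel, is just a matrix-vector product) with illustrative shapes rather than OpenVINO internals: the BatchNorm scale and shift are folded into the convolution’s weights and bias once, so inference needs one operation instead of two.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1x1 convolution mapping 3 input channels to 4 output channels.
W = rng.standard_normal((4, 3))   # conv weights
b = rng.standard_normal(4)        # conv bias

# BatchNorm parameters for the 4 output channels.
gamma = rng.standard_normal(4)
beta = rng.standard_normal(4)
mean = rng.standard_normal(4)
var = rng.random(4) + 0.5
eps = 1e-5

x = rng.standard_normal(3)        # one input pixel

# Unfused: convolution followed by BatchNorm.
y = W @ x + b
z_unfused = gamma * (y - mean) / np.sqrt(var + eps) + beta

# Fused: fold the per-channel BN scale/shift into W and b ahead of time.
scale = gamma / np.sqrt(var + eps)
W_fused = W * scale[:, None]
b_fused = (b - mean) * scale + beta
z_fused = W_fused @ x + b_fused

# Both paths produce the same output.
assert np.allclose(z_unfused, z_fused)
```

Because the fused network executes fewer operations per inference, the IR model is typically somewhat faster, which matches the timing comparison below.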
The average inference time using the IR format (47.625) is lower than the average inference time using the Caffe framework (49.2968). You may refer to the attachment below for the comparison result.
Hence, we recommend converting models from supported deep learning frameworks, such as Caffe, TensorFlow, Kaldi, MXNet, or ONNX, into Intermediate Representation for inference with the Inference Engine.
The default Model Optimizer optimizations are described at the following page:
Thank you for your question.
This thread will no longer be monitored since we have provided a solution.
If you need any additional information from Intel, please submit a new question.