Hello,
In script 1, I used the OpenVINO API to load my model and run it on the NCS2:
ie = IECore()
net = ie.read_network(model=ArchPath, weights=modelPath)
mydetector = ie.load_network(network=net, device_name="MYRIAD")
input_name = next(iter(mydetector.input_info))
In script 2, I used OpenCV's DNN module with the Inference Engine backend and the MYRIAD target:
mydetector = cv2.dnn.readNet(ArchPath, modelPath)
mydetector.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
mydetector.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
My model was optimized with the OpenVINO Model Optimizer (mo) script in both FP32 and FP16.
I noticed that there is no difference in processing time between the FP32 and FP16 models.
I know that the NCS2 supports FP16 only. So, when running the FP32 model, does the NCS2 convert it to FP16 automatically?
Does it make sense that the first method has the same processing time as the second method? If yes, why?
Any help is much appreciated.
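When comparing the two pipelines, measurement noise can easily hide a real difference: the first inference usually includes one-time graph compilation and transfer to the device. A minimal timing sketch, assuming `infer` is a placeholder callable standing in for whichever inference call you measure (a hypothetical name, not from either script above):

```python
import time

def time_inference(infer, n_runs=100):
    """Average the latency of infer() over n_runs, after one warm-up call."""
    infer()  # warm-up: the first call often includes compilation / device transfer
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return (time.perf_counter() - start) / n_runs

# Hypothetical usage with the OpenVINO pipeline from script 1:
#   avg_s = time_inference(lambda: mydetector.infer({input_name: frame}))
```

Timing both scripts this way, with the same input and the same number of runs, makes the FP32-vs-FP16 and OpenVINO-vs-OpenCV comparisons much more trustworthy.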
Hi KSehairi,
According to the Supported Model Formats documentation, the VPU plugin supports FP16 models only. Some FP32 models might still be inferenced by the Intel® Neural Compute Stick 2 (NCS2), but we cannot guarantee their performance, nor that every FP32 model can be inferenced by the NCS2. Hence, it is always recommended to use FP16 models with the NCS2.
The OpenCV DNN module can use the OpenVINO™ Inference Engine as an accelerator backend. However, we cannot say with confidence that the processing time when using the OpenVINO™ API will be the same as when using the OpenCV DNN API.
Please note that only a subset of OpenVINO™ features is supported in OpenCV DNN. For some advanced tasks, such as model quantization, you still need to use the OpenVINO™ API and tools directly.
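The practical difference between FP32 and FP16 is precision and range: FP16 has a 10-bit mantissa (roughly 3 significant decimal digits) and overflows near 65504. A small NumPy sketch illustrating the conversion (illustrative only, not NCS2-specific):

```python
import numpy as np

w32 = np.float32(0.123456789)  # a typical FP32 weight value
w16 = np.float16(w32)          # the precision a FP16 device works with

# FP16's 10-bit mantissa keeps roughly 3 decimal digits of the value
print(float(w32), float(w16))

# FP16 also overflows far earlier than FP32 (max finite value ~65504)
print(np.float16(1e5))  # overflows to inf
```

For well-conditioned detection models this precision loss is usually negligible, which is why converting to FP16 rarely changes accuracy noticeably.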
Regards,
Peh
Thank you, Peh_Intel, for your reply.
Is it possible that the OpenCV DNN API (setPreferableBackend/setPreferableTarget) uses the same OpenVINO™ API to run inferences on Intel platforms?
I will check further with the OpenCV DNN API.
Hi KSehairi,
As mentioned earlier, only some OpenVINO™ features are supported in OpenCV DNN, which uses the OpenVINO™ Inference Engine as an accelerator backend.
The OpenVINO™ Runtime API, meanwhile, provides a common API to deliver the best possible performance on the Intel platform of your choice.
Regards,
Hairul
Hi Hairul,
Thank you very much for this clarification.
Best regards
Hi KSehairi,
Glad to be of help.
This thread will no longer be monitored since we have provided the requested information. If you need any additional information from Intel, please submit a new question.
Regards,
Hairul