Hello,
I ran some inference tests with the same model in OpenCV and with the native SDK, and surprisingly I get much better results with OpenCV than with the SDK, and I'm talking about a big difference.
I'm on a Raspberry Pi 3B with Raspbian Stretch, the latest OpenVINO release, and a 2019 R3 model.
Is that the expected behavior?
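For reference, the two paths I'm comparing look roughly like this (a minimal sketch, not my exact code; the model paths, input image, and input size are placeholders you'd replace with your own):

```python
import cv2
from openvino.inference_engine import IECore, IENetwork

MODEL_XML = "model.xml"   # placeholder IR files downloaded from the model zoo
MODEL_BIN = "model.bin"

image = cv2.imread("input.jpg")                        # placeholder input image
blob = cv2.dnn.blobFromImage(image, size=(300, 300))   # size depends on the model

# Path 1: OpenCV DNN module with the Inference Engine backend targeting the NCS2
cv_net = cv2.dnn.readNet(MODEL_XML, MODEL_BIN)
cv_net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
cv_net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
cv_net.setInput(blob)
cv_out = cv_net.forward()

# Path 2: OpenVINO Inference Engine Python API directly (2019 R3 style)
ie = IECore()
ie_net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)
input_blob = next(iter(ie_net.inputs))
exec_net = ie.load_network(network=ie_net, device_name="MYRIAD")
ie_out = exec_net.infer(inputs={input_blob: blob})
```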
Best Regards
Hi Ignacio,
Which model did you use to compare results? Are you able to include your outputs? I would like to verify this on my end if possible.
Are you using the Neural Compute Stick 2 on the RPI?
Just to clarify: the NCSDK (mentioned in the subject line) is now unsupported.
Best Regards,
Sahira
Hello
First, I have to apologize for the double post; that was not my intention. I first sent the question, and since I did not see it published, I sent it again.
Here is the original thread; you can find the code and model linked in the last post:
https://software.intel.com/en-us/forums/intel-distribution-of-openvino-toolkit/topic/856367
The model is the one I downloaded from here:
https://download.01.org/opencv/2019/open_model_zoo/R3/20190905_163000_models_bin/
Yes, I'm using the Neural Compute Stick 2 on the RPi, and yes, sorry, I meant the OpenVINO Inference Engine, not the NCSDK.
I did not do anything strange in my code, I only followed the code samples, and I was surprised to find that OpenCV is faster than the OpenVINO Inference Engine.
I even tried the async version of both approaches, and OpenCV is still faster.
Maybe I'm doing something wrong; that's why I'm asking if this is the expected behavior. The async variants I tried look roughly like the sketch below.
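This is a sketch with placeholder names, assuming the networks cv_net / exec_net, the input_blob name, and the blob from the earlier snippet are already set up as above:

```python
# Async with the OpenCV DNN module (requires the Inference Engine backend)
cv_net.setInput(blob)
cv_async = cv_net.forwardAsync()   # returns an AsyncArray
cv_out = cv_async.get()            # blocks until the result is ready

# Async with the Inference Engine API: start a request, then wait on it
exec_net = ie.load_network(network=ie_net, device_name="MYRIAD", num_requests=2)
exec_net.start_async(request_id=0, inputs={input_blob: blob})
if exec_net.requests[0].wait(-1) == 0:   # 0 means the request completed OK
    ie_out = exec_net.requests[0].outputs
```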
Best Regards