Hi there,
We tried the official object_detection_sample_ssd (shipped with OpenVINO R4) and compared performance between the NCS (Myriad 2) and the NCS2 (Myriad X). The gain is very limited: about 5.8 fps on the NCS (Myriad 2) versus about 6.3 fps on the NCS2 (Myriad X). Does anyone know of any settings that can be tuned? The NCS2 is supposed to give at least about 3x the performance on this network.
Any hints will be highly appreciated!
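For reference, this is roughly the kind of timing we did. It is only a minimal sketch, not the sample's actual code: the IR paths are placeholders, and it assumes the 2019-era IECore/IENetwork Python API rather than the R4 tooling we actually ran.

```python
# Minimal synchronous FPS measurement on MYRIAD (a sketch, not the
# object_detection_sample_ssd code; paths are placeholders and the
# IECore/IENetwork Python API shown here is from the 2019 releases).
import time
import numpy as np
from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

exec_net = ie.load_network(network=net, device_name="MYRIAD")
frame = np.random.rand(n, c, h, w).astype(np.float32)  # stand-in for a real image

iterations = 100
start = time.time()
for _ in range(iterations):
    exec_net.infer(inputs={input_blob: frame})
print("FPS: %.1f" % (iterations / (time.time() - start)))
```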
Hello, could anyone help answer this question? I did see Intel show an ssd-mobilenet demo running on the NCS2 at the Intel AIDC event, and it ran at around 30 fps.
Just bumping this thread to see if anyone can help with this. Thanks.
Where does your ssd-mobilenet model come from? I tried the mobilenet-ssd downloaded with model_downloader. That network is trained with Caffe and its input resolution is 300x300. Its performance on the NCS2 is as expected in my environment: about 30 fps when running it with the benchmark_app included in the package.
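In case it helps: benchmark_app reaches that number mainly by keeping several asynchronous infer requests in flight at once. The sketch below approximates that mode; the IR paths, the request count, and the use of the 2019-era Inference Engine Python API are my assumptions, not the actual benchmark_app source.

```python
# Rough approximation of benchmark_app's asynchronous throughput mode on
# MYRIAD (a sketch under assumptions: 2019-era Python API, placeholder IR
# paths, 4 parallel infer requests; this is not the real benchmark_app source).
import time
import numpy as np
from openvino.inference_engine import IENetwork, IECore

ie = IECore()
net = IENetwork(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape

num_requests = 4
exec_net = ie.load_network(network=net, device_name="MYRIAD",
                           num_requests=num_requests)
frame = np.random.rand(n, c, h, w).astype(np.float32)

total = 200
start = time.time()
# keep every request slot busy; wait for a slot only when we reuse it
for request in exec_net.requests:
    request.async_infer(inputs={input_blob: frame})
for i in range(total):
    request = exec_net.requests[i % num_requests]
    request.wait(-1)                                # previous job on this slot is done
    request.async_infer(inputs={input_blob: frame})
for request in exec_net.requests:                   # drain the pipeline
    request.wait(-1)
print("Throughput: %.1f FPS" % ((total + num_requests) / (time.time() - start)))
```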
Same here. I am going to try it with the new OpenVINO R5...
With a self-trained model whose compute cost is similar to mobilenet_ssd but whose input is 512x512: is the NCS2 particularly sensitive to the size of the model input?
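My rough guess is that input size matters quite a bit, since the convolution work grows roughly with the input area. A back-of-the-envelope estimate (assuming compute scales about linearly with pixel count, which ignores per-layer details):

```python
# A 512x512 input has roughly 2.9x the pixels of a 300x300 input, so
# roughly ~3x the convolution work, all else being equal (an assumption
# that ignores per-layer details and memory effects).
print((512 * 512) / (300 * 300))  # ~2.91
```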
Dear folks,
Please upgrade to 2019 R1 if you haven't already.
Thanks for using OpenVINO!
Shubha
Hi, I have a problem. Using SSD MobileNet v1 with 90 classes in the config I get 0.05 seconds inference time; however, when I use my own trained model with only 1 class I get 4.7 seconds inference time. Have I missed something? I use ssd_v2_support.json for Model Optimizer conversion.
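To rule out an input-shape issue on my side, something like the sketch below can print what the converted IR actually expects; if the custom model came out with a much larger input resolution than the stock 300x300, that alone would cost a lot on MYRIAD. The file names are placeholders and it assumes the 2019-era Inference Engine Python API.

```python
# Print the input and output shapes of a converted IR to verify the custom
# model matches the stock SSD MobileNet (300x300 input); paths are placeholders.
from openvino.inference_engine import IENetwork

net = IENetwork(model="my_ssd.xml", weights="my_ssd.bin")
for name, data in net.inputs.items():
    print("input ", name, data.shape)
for name, data in net.outputs.items():
    print("output", name, data.shape)
```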
