Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Object detection model performance is slower on the Movidius stick

idata
Employee

Hi,

 

I converted tiny-ssd to a Movidius graph, and the inference speed is slower than what I achieve on my PC. On my PC it takes about 0.16 seconds per frame, whereas on the Movidius it takes 0.34 seconds per frame.

 

The tiny-ssd model is written in Caffe and uses a combination of SqueezeNet and SSD-MobileNet layers.

 

What could be the reasons for the reduced performance? If anybody knows what is wrong, kindly let me know.

3 Replies
idata
Employee

Is your stick connected via USB3?

 

Also, what speed is your CPU / system?
idata
Employee

My stick is connected via USB 2.0.

 

Sorry, I should have mentioned that my PC is fast. When I ran SSD-MobileNet on the Movidius, it took 0.103–0.106 seconds per frame. I am trying to improve the FPS, so I decided to run tiny-ssd, thinking it might run faster than SSD-MobileNet.

 

But tiny-ssd is slower than SSD-MobileNet. Is there any way to see what is happening in the Movidius while running inference, such as memory usage, intermediate layer outputs, etc.?

idata
Employee

@gopinath You can use the device.get_option() API call to query device options such as RO_CURRENT_MEMORY_USED.
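A minimal sketch of querying that option with the NCSDK 2.x Python API (assumes an NCS stick is attached and the mvnc package from the NCSDK is installed; it will not run without the hardware):

```python
from mvnc import mvncapi as mvnc  # NCSDK 2.x Python API

# Find and open the first attached NCS device
device_list = mvnc.enumerate_devices()
if not device_list:
    raise RuntimeError('No NCS device found')
device = mvnc.Device(device_list[0])
device.open()

# Query the memory currently in use on the stick (bytes)
mem_used = device.get_option(mvnc.DeviceOption.RO_CURRENT_MEMORY_USED)
print('Memory in use on device: %d bytes' % mem_used)

device.close()
device.destroy()
```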

 

You can use the mvNCProfile tool to check per-layer time and memory bandwidth usage. To check intermediate layer outputs, you would have to use the mvNCCompile tool with the -on option to specify a specific layer as the output node. This will generate a graph file that produces the intermediate result from that layer.
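For example, invocations along these lines (the .prototxt/.caffemodel filenames and the "fire9/concat" layer name are hypothetical placeholders; substitute the ones from your model):

```shell
# Profile per-layer execution time and memory bandwidth on the stick
mvNCProfile deploy.prototxt -w tiny_ssd.caffemodel -s 12

# Compile a graph whose output node is an intermediate layer,
# so inference returns that layer's output instead of the final one
mvNCCompile deploy.prototxt -w tiny_ssd.caffemodel -s 12 \
    -on fire9/concat -o tiny_ssd_fire9.graph
```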

 

You may be able to get more performance by using another NCS device or by using threading. You can use https://github.com/movidius/ncappzoo/tree/ncsdk2/apps/video_objects_scalable for reference.
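The scalable-video example linked above spreads frames across worker threads so a slow per-frame call doesn't stall the whole pipeline. The same producer/consumer pattern can be sketched generically; the infer() stub below is a hypothetical stand-in for the per-frame NCS inference call (e.g. queueing into and reading from the device FIFOs):

```python
import queue
import threading
import time


def infer(frame):
    """Stub for per-frame inference; on real hardware this would hand
    the frame to an NCS device and read back the result."""
    time.sleep(0.01)  # simulate inference latency
    return frame * 2


def worker(frames_in, results_out):
    # Pull frames until a None "poison pill" signals shutdown
    while True:
        item = frames_in.get()
        if item is None:
            break
        idx, frame = item
        results_out.put((idx, infer(frame)))


def run_pipeline(frames, n_workers=2):
    frames_in = queue.Queue()
    results_out = queue.Queue()
    threads = [threading.Thread(target=worker, args=(frames_in, results_out))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    # Tag each frame with its index so results can be re-ordered
    for i, f in enumerate(frames):
        frames_in.put((i, f))
    for _ in threads:
        frames_in.put(None)  # one poison pill per worker
    for t in threads:
        t.join()
    results = {}
    while not results_out.empty():
        idx, r = results_out.get()
        results[idx] = r
    return [results[i] for i in range(len(frames))]
```

With two workers the stubbed inference calls overlap, which is the same idea as driving two sticks (or two FIFO pairs) in parallel.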
