Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Exception: mvncStatus.ERROR while doing make run for GoogLenet

idata
Employee

Hello,

 

I'm getting the error below when running "make run" for the GoogLeNet example.

 

making prereqs
(cd ../../data/ilsvrc12; make)
make[1]: Entering directory '/home/parth/workspace/ncsdk/ncsdk/examples/data/ilsvrc12'
make[1]: Leaving directory '/home/parth/workspace/ncsdk/ncsdk/examples/data/ilsvrc12'
making prototxt
Prototxt file already exists
making caffemodel
caffemodel file already exists
making compile
mvNCCompile -w bvlc_googlenet.caffemodel -s 12 deploy.prototxt
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016
Layer inception_3b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_3b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4b/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4c/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4c/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4d/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4d/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_4e/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_4e/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5a/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5a/pool_proj forced to im2col_v2, because its output is used in concat
Layer inception_5b/1x1 forced to im2col_v2, because its output is used in concat
Layer inception_5b/pool_proj forced to im2col_v2, because its output is used in concat
/usr/local/bin/ncsdk/Controllers/FileIO.py:52: UserWarning: You are using a large type. Consider reducing your data sizes for best performance
"Consider reducing your data sizes for best performance\033[0m")
making run
./run.py
Device 0 Address: 3 - VID/PID 03e7:2150
Starting wait for connect with 2000ms timeout
Found Address: 3 - VID/PID 03e7:2150
Found EP 0x81 : max packet size is 512 bytes
Found EP 0x01 : max packet size is 512 bytes
Found and opened device
Performing bulk write of 865724 bytes…
Successfully sent 865724 bytes of data in 456.977870 ms (1.806693 MB/s)
Boot successful, device address 3
Device 0 Address: 2 - VID/PID 03e7:f63b
Found Address: 2 - VID/PID 03e7:f63b
done
Booted 2 -> VSC
Traceback (most recent call last):
  File "./run.py", line 66, in <module>
    graph = device.AllocateGraph(blob)
  File "/usr/local/lib/python3.5/dist-packages/mvnc/mvncapi.py", line 203, in AllocateGraph
    raise Exception(Status(status))
Exception: mvncStatus.ERROR
Makefile:91: recipe for target 'run' failed
make: *** [run] Error 1

 

I have read https://ncsforum.movidius.com/discussion/comment/1052/#Comment_1052, but even changing the USB filter's Remote setting to "No" didn't help.
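For anyone comparing setups: the same USB filter can also be created from the host command line with VBoxManage. This is only a sketch; "ubuntu-ncsdk" is a hypothetical VM name you'd replace with your own. The two VID/PID pairs come from the log above: the stick enumerates as 03e7:2150 before boot and re-enumerates as 03e7:f63b after boot, so a filter is needed for each.

```shell
# Capture the NCS both before and after it re-enumerates.
# "ubuntu-ncsdk" is a placeholder VM name -- substitute yours.
VBoxManage usbfilter add 0 --target "ubuntu-ncsdk" \
    --name "Movidius NCS (loader)" --vendorid 03e7 --productid 2150
VBoxManage usbfilter add 1 --target "ubuntu-ncsdk" \
    --name "Movidius NCS (booted)" --vendorid 03e7 --productid f63b
```

Leaving the product ID off a filter (matching on vendor ID 03e7 alone) also covers both enumerations with a single rule.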

 

My environment:

Ubuntu: 16.04 LTS
VirtualBox: 5.2.12

 

Thanks,

 

Pshah618
idata
Employee

Hi @pshah618, are you able to run the hello_ncs_py or hello_ncs_cpp examples in the ncsdk/examples/apps directory? These examples verify that the API is able to communicate with the USB devices correctly.

 

Assuming device communication is working, try running "make clean" in the GoogLeNet example directory and then "make run" again.

 

If it still does not work, please provide your host OS version and NCSDK version.

idata
Employee

Hi @Heather_at_Intel ,

 

Yes, I'm able to run hello_ncs_cpp, but even after making the changes you suggested the issue wasn't resolved.

 

It was resolved when I switched the VM's USB controller from 3.0 to 2.0: I stopped the VM, changed the setting to USB 2.0, and rebooted the VM.
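For reference, the same controller change can be made from the host command line while the VM is powered off. This is a hedged sketch: "ubuntu-ncsdk" is a hypothetical VM name, and the `--usbxhci`/`--usbehci` switches assume VirtualBox 5.x (5.2.12 is the version in this thread).

```shell
# Disable the xHCI (USB 3.0) controller and enable EHCI (USB 2.0).
# Run with the VM powered off; "ubuntu-ncsdk" is a placeholder name.
VBoxManage modifyvm "ubuntu-ncsdk" --usbxhci off --usbehci on
```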

 

It's been working fine since then. Do you think this change will affect performance?

 

Thanks,

 

Parth