Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

AttributeError: module 'mvnc.mvncapi' has no attribute 'global_set_option'


While running real-time object recognition with a Caffe graph, I got the following error:

 

python3 ./detectionExample/Main.py --video 0

 

Traceback (most recent call last):
  File "./detectionExample/Main.py", line 6, in <module>
    from ObjectWrapper import *
  File "/home/pi/Downloads/YoloV2NCS-master/detectionExample/ObjectWrapper.py", line 17, in <module>
    class ObjectWrapper():
  File "/home/pi/Downloads/YoloV2NCS-master/detectionExample/ObjectWrapper.py", line 18, in ObjectWrapper
    mvnc.global_set_option(mvnc.GlobalOption.RW_LOG_LEVEL, 2)
AttributeError: module 'mvnc.mvncapi' has no attribute 'global_set_option'

 

Please help me out.

Thank you.

@Hashir

 

You seem to be using NCSDK v1.xx.xx.

 

I think YoloV2NCS has been upgraded to NCSDK v2.xx.xx.

 

https://github.com/duangenquan/YoloV2NCS

 

https://movidius.github.io/ncsdk/ncapi/python_api_migration.html#global-functions

 

NCSDK v1: SetGlobalOption()

NCSDK v2: global_set_option()
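
For illustration, a minimal sketch of the renamed call in both API versions (the v1 option name LOG_LEVEL is an assumption from memory of the v1 API and may differ slightly in your NCSDK build):

# NCSDK v1 Python API
from mvnc import mvncapi as mvnc
mvnc.SetGlobalOption(mvnc.GlobalOption.LOG_LEVEL, 2)  # enable verbose logging

# NCSDK v2 Python API (what YoloV2NCS master now expects)
from mvnc import mvncapi as mvnc
mvnc.global_set_option(mvnc.GlobalOption.RW_LOG_LEVEL, 2)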

@PINTO

 

Of course you're right. I installed NCSDK v1 instead of v2 because Python programs written for v1 are not supported in v2. So can I use YOLO real-time applications with v1 without upgrading to v2?

 

Thank you.


@PINTO

 

Thanks @PINTO, it seems to be working well after using the YoloV2NCS version for NCSDK v1. Thanks a lot.

 

Can you tell me which of v1 or v2 is better and more efficient for real-time object recognition on a Raspberry Pi 3? I'm new to this field, so I hope you can help me.

Thank you.


@Hashir

 

From actual implementation experience, I think there is no big difference in performance between NCSDK v1 and NCSDK v2.

 

For example, the following are my MobileNet-SSD implementations; there was no clear performance difference between the NCSDK v1 and NCSDK v2 versions.

 

https://github.com/PINTO0309/MobileNet-SSD/tree/v1.0

 

https://github.com/PINTO0309/MobileNet-SSD/tree/v2.0

 

An NCSDK v1 example of TinyYolo is below.

By default the detection threshold is set rather loosely, so adjust it to your liking (default threshold = 0.3); see the sketch after the link below.

 

https://github.com/PINTO0309/TinyYolo
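
As an illustration only, thresholding detections usually amounts to something like the sketch below (variable names are hypothetical, not taken from the TinyYolo repository):

# Hypothetical post-processing step: keep only detections whose confidence
# exceeds the threshold. 'detections' is assumed to be a list of
# (class_id, confidence, bounding_box) tuples produced by the network.
THRESHOLD = 0.3  # the default mentioned above; raise it to drop weak detections

def filter_detections(detections, threshold=THRESHOLD):
    return [det for det in detections if det[1] >= threshold]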

 

The fastest program in my implementation is as follows.

 

Unlike TinyYolo, it sacrifices accuracy in pursuit of high speed.

 

https://github.com/PINTO0309/MobileNet-SSD-RealSense

 

In my experience, it is more important to build a lightweight model and write an efficient program than to worry about the NCSDK version.

I hope this is helpful.

@PINTO

 

Thanks @PINTO, but after running real-time object recognition with Caffe it seems very slow (low FPS) and couldn't detect objects clearly. So according to your valuable comments, I think lightweight models may be fast enough, isn't that so? But will a higher number of classes give higher accuracy, am I right?


@PINTO

 

Hey @PINTO, I would like to know whether I can use YOLOv3 or YOLOv2 weights instead of TinyYOLO to get better accuracy and efficiency.

 

I appreciate any help!


@Hashir

 

 

so according to your valuable comments I think lightweight models may be fast enough, isn't that so?

 

Yes, I think so.

 

YoloV2 and YoloV3 are very heavyweight.

If you are using a Raspberry Pi as your device, they will not be practical.

TinyYolo is light enough, but its accuracy is very low, so the balance between accuracy and speed is poor.

 

but will a higher number of classes give higher accuracy, am I right?

 

There is no correlation between the number of classes and the accuracy.

 

I would like to know whether I can use YOLOv3 or YOLOv2 weights instead of TinyYOLO to get better accuracy and efficiency.

 

Although accuracy improves, it probably will not be usable in practice.

If you use a PC with a Core i5 or i7 plus a GPU, it will be far less frustrating.

In terms of efficiency, I think the thing that really matters is making the program structure smart.

 


@PINTO

 

Thank you very much for answering each of my doubts.

After looking at your GitHub repo for MobileNetSSD, some doubts arose even before trying the examples in your repo:

 

1) How can I create my own customised datasets for MobileNet, like in YOLO or TensorFlow?

 

2) Are there any other models more accurate than MobileNet for the Pi?

 

3) Can I convert my own YOLO weight and cfg files into MobileNetSSD Caffe model and prototxt files?

 

4) For the time being, can I run the MobileNet examples in your GitHub repo on an Ubuntu desktop for training purposes?

Thank you.


@PINTO

 

After creating all the TFRecord files for my own dataset in TensorFlow, I can't train my own model; it shows errors like "some tensor can't convert", etc.

If you know something about that, please help me out.


@Hashir

 

 

1) How can I create my own customised datasets for MobileNet, like in YOLO or TensorFlow?

 

4) Can I run the MobileNet examples in your GitHub repo on an Ubuntu desktop for training purposes?

 

I have never done it before, but the following is likely to be helpful.

 

https://github.com/movidius/ncappzoo/tree/master/caffe/SSD_MobileNet

 

https://github.com/FreeApe/VGG-or-MobileNet-SSD

 

https://github.com/chuanqi305/MobileNet-SSD

 

https://github.com/avBuffer/MobilenetSSD_caffe

 

2) Are there any other models more accurate than MobileNet for the Pi?

 

In the past, Intel's Ashwin Vijayakumar introduced me to "MobileNetSSD", and I compared the performance of TinyYolo and MobileNetSSD.

As a result, I judged that MobileNetSSD has a better balance of accuracy and speed than TinyYolo.

 

I am not interested in TinyYolo anymore.

 

Also, since my aim is object detection on a low-spec Raspberry Pi, I am no longer interested in models other than MobileNetSSD and MobileNetSSD Lite.

 

Sorry.

 

3) Can I convert my own YOLO weight and cfg files into MobileNetSSD Caffe model and prototxt files?

 

Since YOLO and SSD are completely different models, I don't think it is a good idea to think in terms of converting between them.

 

After creating all the TFRecord files for my own dataset in TensorFlow, I can't train my own model; it shows errors like "some tensor can't convert", etc.

It cannot be done easily, and there seem to be quite a lot of restrictions.

 

Many engineers around the world are working on the same thing, but it does not work perfectly.

First of all, please read the following thread carefully.

 

You also have the option of "OpenVINO", but unfortunately it does not work on the ARM architecture.

 

x86_64 or amd64 architecture only…

 

https://ncsforum.movidius.com/discussion/746/is-it-possible-to-use-tensorflow-ssd-mobilenet-on-ncs/p1

 

 

There are many sample programs in ncappzoo, so please refer to those.


@PINTO

 

Thank you very much for your kind support and help. I will let you know if there are any issues after following the steps in your GitHub repo.

Thank you.


@PINTO

 

Before I run your GitHub MobileNetSSD example for real-time object recognition, I would like to know how you capture or identify the distance to an object in SingleStickSSDwithRealSense.py.

 

Thanks in advance


@Hashir

 

◆Precondition

 

You are using a RealSense D435 instead of a USB Camera.

 

◆Mechanism of distance measurement

 

The D435 captures images with a camera and also emits infrared light.

It measures distance from the infrared light's flight time.

This is generally called "ToF (Time of Flight)".

 

◆Originality of my program

 

1. Calculate the center point of the object detected by MobileNetSSD:

box_left (X coordinate), box_right (X coordinate), box_top (Y coordinate), box_bottom (Y coordinate)

2. Pass the center-point coordinates to the D435 API (get_distance) and get the distance:

meters = depth_frame.as_depth_frame().get_distance(box_left+int((box_right-box_left)/2), box_top+int((box_bottom-box_top)/2))

box_left + int((box_right - box_left) / 2) = X coordinate
box_top + int((box_bottom - box_top) / 2) = Y coordinate
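
Putting it together, a minimal pyrealsense2 sketch of the same measurement might look like this (the bounding-box values are placeholders; this is an illustration, not the exact code from SingleStickSSDwithRealSense.py):

import pyrealsense2 as rs

# Start a default RealSense pipeline (includes the depth stream on a D435).
pipeline = rs.pipeline()
pipeline.start()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Placeholder bounding box from MobileNetSSD post-processing (pixel coordinates).
box_left, box_right, box_top, box_bottom = 100, 220, 80, 200

# Center point of the detected object.
center_x = box_left + int((box_right - box_left) / 2)
center_y = box_top + int((box_bottom - box_top) / 2)

# Distance in meters at the center point.
meters = depth_frame.as_depth_frame().get_distance(center_x, center_y)
print("distance: %.2f m" % meters)

pipeline.stop()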

@PINTO

 

Thanks @PINTO. You said that without using an Intel RealSense camera we can't measure distance, didn't you? But I am using an RPi NoIR camera. So can I get the distance using this camera, and do I need an extra ultrasonic or IR sensor to do this?

 

Thanks in advance


@Hashir

 

 

You said that without using an Intel RealSense camera we can't measure distance, didn't you?

 

Yes.

 

So can I get the distance using this camera, and do I need an extra ultrasonic or IR sensor to do this?

 

Yes. That's right.

 

 

However, there is also a method to measure distance easily with OpenCV and a USB camera, as described in the article below; a rough sketch of the idea follows at the end of this post.

I think the accuracy drops considerably compared to ToF.

Sorry that the article is in Japanese.

http://opencv.blog.jp/python/%E7%B0%A1%E6%98%93%E8%B7%9D%E9%9B%A2%E8%A8%88%E6%B8%AC

I will think about incorporating it into my repository in the future, with reference to the above article.
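
For reference, a monocular estimate of that kind is usually based on similar triangles: distance ≈ (known object width × focal length in pixels) / perceived width in pixels. A minimal sketch of the idea, with made-up calibration numbers (not taken from the linked article):

# Rough monocular distance estimate via similar triangles.
# Assumed calibration: an object KNOWN_WIDTH meters wide appears
# PIXEL_WIDTH_AT_CALIB pixels wide at CALIB_DISTANCE meters from the camera.
KNOWN_WIDTH = 0.5           # real object width in meters (assumed)
CALIB_DISTANCE = 1.0        # calibration distance in meters (assumed)
PIXEL_WIDTH_AT_CALIB = 300  # box width in pixels at the calibration distance (assumed)

FOCAL_LENGTH_PX = (PIXEL_WIDTH_AT_CALIB * CALIB_DISTANCE) / KNOWN_WIDTH

def estimate_distance(perceived_width_px):
    """Estimate distance in meters from the detected bounding-box width in pixels."""
    return (KNOWN_WIDTH * FOCAL_LENGTH_PX) / perceived_width_px

# Example: a 150-pixel-wide detection is estimated to be about 2.0 m away.
print(estimate_distance(150))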


@Hashir

 

Alternatively, there is my own article (in Japanese).

However, it is implemented in C++.

 

https://qiita.com/PINTO/items/c5d69c0c0c58d4ded3f8

@PINTO

 

Thanks for your valuable information, but unfortunately I don't know any Japanese. Please do me a favour: can you provide those articles in English instead of Japanese?

 

Thanks in advance


@PINTO

 

While installing the OpenCV wrapper I got the following error. Can you please help me out?

 

/home/pi/librealsense/wrappers/opencv/dnn/rs-dnn.cpp:6:32: fatal error: librealsense2/rs.hpp: No such file or directory
 #include <librealsense2/rs.hpp>
                                ^
compilation terminated.
dnn/CMakeFiles/rs-dnn.dir/build.make:62: recipe for target 'dnn/CMakeFiles/rs-dnn.dir/rs-dnn.cpp.o' failed
make[2]: *** [dnn/CMakeFiles/rs-dnn.dir/rs-dnn.cpp.o] Error 1
CMakeFiles/Makefile2:250: recipe for target 'dnn/CMakeFiles/rs-dnn.dir/all' failed
make[1]: *** [dnn/CMakeFiles/rs-dnn.dir/all] Error 2
Makefile:127: recipe for target 'all' failed
make: *** [all] Error 2

 

Thanks in advance


@Hashir

 

How about trying the commands below?

 

$ cd ~/librealsense/build
$ make uninstall
$ cd ..
$ rm -r -f build;mkdir build;cd build
$ cmake .. -DBUILD_EXAMPLES=true -DCMAKE_BUILD_TYPE=Release -DBUILD_CV_EXAMPLES=true
$ make -j1
$ sudo make install

@Hashir

 

◆"簡易距離計測" convert to English by Google Chrome

 

https://drive.google.com/open?id=1ajM1ReLMim8W-_KJ1P_Qe42y4ql2IDoQ