Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Running a MobileNet v2 SSD object detector on a Raspberry Pi with OpenVINO

FPART1
Beginner

Dear colleagues,

I have installed OpenVINO on my Raspberry Pi in order to run a MobileNet v2 SSD object detector, but I'm struggling to get it working.

I've understood from the documentation that the SSD object detector API doesn't work on Movidius VPU sticks, so the alternative I see is to run it via Python code through the OpenVINO build of OpenCV, which runs the inference on the VPU stick. But this is not using the Inference Engine, am I correct?
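To illustrate what I mean, this is roughly how I'm driving the stick through OpenCV (a minimal sketch; the file names are placeholders):

import cv2

# Placeholder model files for illustration.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb", "graph.pbtxt")

# Route the model through the Inference Engine backend onto the Myriad VPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)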

In order to try to use the Inference Engine, I searched for a pre-trained IR of a MobileNet SSD trained on ImageNet, but I only found one trained on COCO, which is available in the model zoo. Is there any other place to look?

I have a pre-trained model from the TensorFlow zoo trained on ImageNet; how can I ensure that this network will be compatible with the VPU stick? May I assume that any TensorFlow model can be read via the OpenVINO OpenCV with cv2.dnn.readNet? I have the OpenVINO framework successfully installed on my Raspberry Pi.

Thanks in advance for your support!

Regards,

Felipe

 


10 Replies
Luis_at_Intel
Moderator

Hi @FPART1​,

 

Thanks for contacting us. I am not sure where in the documentation it says that the SSD Object Detection API isn't supported by the Intel(R) Neural Compute Stick 2 (or NCS 1); if you can, please share that document so I can take a look.

 

As far as looking for pre-trained models, yes, you are looking in the right place. You can take a look at the open_model_zoo and also use the model_downloader script to download these models to your device (more information here, since you are using the Raspberry* Pi).
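For example, from the open_model_zoo downloader directory, the command would look something like this (the model name and output path are assumptions for illustration):

python3 downloader.py --name ssd_mobilenet_v1_coco -o ~/models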

 

For your last question, you can convert your pre-trained models to IR (Intermediate Representation) using the Model Optimizer. One thing you can do to ensure your model works with the Myriad* VPUs is to take a look at the list of supported models and layers for the Myriad* plugin and try to remove any unsupported layers found in your model. I hope you have found this helpful; please let me know if you have any other questions.
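As a quick sanity check, loading the converted IR onto the stick should fail early if an unsupported layer remains; a minimal sketch using the 2019-era Python API (IR paths are placeholders):

from openvino.inference_engine import IENetwork, IECore

ie = IECore()
# Placeholder IR paths produced by the Model Optimizer.
net = IENetwork(model="model.xml", weights="model.bin")
# Loading on the Myriad plugin raises an error if any layer is unsupported.
exec_net = ie.load_network(network=net, device_name="MYRIAD")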

 

 

Regards,

@Luis_at_Intel​ 

FPART1
Beginner

Hi @Luis_at_Intel​, thanks for your answer and the useful links!

About the API compatibility, I understood that not all features are available for VPUs, like the Model Optimizer. I remember seeing something about an async object detector limitation, but I could not find it again to send to you. Anyway, thanks for clarifying it.

As you recommended, I downloaded the MobileNet v1 SSD COCO model via the model downloader. I generated the config file (.pbtxt) using the tf_text_graph_ssd.py script. Having both files, I tried to load the model in OpenCV:

 

net = cv2.dnn.readNetFromTensorflow(pbfile, pbtxtfile)

but I got the following error:

 

Preprocessor/sub:Sub(Preprocessor/mul)(Preprocessor/sub/y)

OpenCV(3.4.1) Error: Unspecified error (Unknown layer type Sub in op Preprocessor/sub) in populateNet, file /home/piwheels/packaging/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp, line 1582

Traceback (most recent call last):

 File "test.py", line 36, in <module>

  net = cv2.dnn.readNetFromTensorflow(args["prototxt"], args["model"])

cv2.error: OpenCV(3.4.1) /home/piwheels/packaging/opencv-python/opencv/modules/dnn/src/tensorflow/tf_importer.cpp:1582: error: (-2) Unknown layer type Sub in op Preprocessor/sub in function populateNet
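For reference, I generated the .pbtxt roughly like this, using the script from OpenCV's dnn tools (the paths are illustrative):

python3 tf_text_graph_ssd.py --input frozen_inference_graph.pb \
    --config pipeline.config --output graph.pbtxt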

 

My approach is to get this model working first; then I need to fine-tune it for a particular object and run inference on my Raspberry Pi with the NCS. Since my program will do other things with the inference result, I am using OpenCV with Python. Maybe you know an easier way to get this done, even following a different approach; unfortunately, I've researched a lot but I'm still struggling to get this working.

So please help me find the right way to get my custom object detector working :)

Many thanks,

Felipe

 

 

Luis_at_Intel
Moderator

Hi @FPART1​ ,

 

Hmm, not sure what the problem could be, as I have not encountered this issue before. Doing some research I found this thread; it looks like you need to use the script corresponding to your OpenCV version. From my understanding, you would need to update your OpenCV version for this to work.
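Something along these lines should show the installed version and pull a newer build (assuming OpenCV was installed through pip/piwheels, which the traceback paths suggest):

python3 -c "import cv2; print(cv2.__version__)"
pip3 install --upgrade opencv-python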

 

Please give that a try and let us know your results. I hope this resolves your issue.

 

 

Regards,

@Luis_at_Intel​ 

 

 

FPART1
Beginner


Thanks for your support @Luis_at_Intel​! My OpenCV is up to date; after some research I realised there are some training layers that need to be removed from the model. Unfortunately, my knowledge is not enough for that architectural modification, so I followed a different approach: I decided to use the Model Optimizer, which does the network rearrangements automatically, and it worked after some trial and error.

 

Now I'm trying to fine-tune SSD MobileNet for my customized object detector. Based on some research, it seems that going via the TensorFlow Object Detection API would be the easiest way.
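In case it helps, the retraining entry point I'm looking at is roughly this (the paths and step count are placeholders; model_main.py comes from the TF Object Detection API):

python3 object_detection/model_main.py \
    --pipeline_config_path=ssd_mobilenet_v1_custom.config \
    --model_dir=training/ \
    --num_train_steps=20000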

Please let me know if you have any recommendation or a different procedure to follow.

 

Thanks and regards,

Felipe

 

Luis_at_Intel
Moderator

That is great news; I am glad you were able to upgrade your OpenCV to a more recent version.

 

Regarding fine-tuning the SSD MobileNet for your custom object detector, I don't have any recommendations for that. Please let us know if you have any additional questions.

 

 

Regards,

@Luis_at_Intel​

FPART1
Beginner

Hi @Luis_at_Intel​ 

I hope this message finds you well.

After a lot of research, I was able to train my custom object detector (SSD MobileNet v1). I downloaded the pre-trained ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03 and used the Object Detection API for the retraining.

After freezing the model, I'm trying to generate the IR:

python3 mo_tf.py --input_model ~/ssd_mobilenet_v1/TPU/frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config ~/ssd_mobilenet_v1/TPU/tpu.config --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json --output="detection_boxes,detection_scores,num_detections" --reverse_input_channels

But I'm getting the following error:

Model Optimizer arguments:

Common parameters:

   - Path to the Input Model:   /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer~/ssd_mobilenet_v1/TPU/frozen_inference_graph.pb

   - Path for generated IR:   /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/.

   - IR output name:   frozen_inference_graph

   - Log level:   ERROR

   - Batch:   Not specified, inherited from the model

   - Input layers:   Not specified, inherited from the model

   - Output layers:   detection_boxes,detection_scores,num_detections

   - Input shapes:   Not specified, inherited from the model

   - Mean values:   Not specified

   - Scale values:   Not specified

   - Scale factor:   Not specified

   - Precision of IR:   FP32

   - Enable fusing:   True

   - Enable grouped convolutions fusing:   True

   - Move mean values to preprocess section:   False

   - Reverse input channels:   True

TensorFlow specific parameters:

   - Input model in text protobuf format:   False

   - Path to model dump for TensorBoard:   None

   - List of shared libraries with TensorFlow custom layers implementation:   None

   - Update the configuration file with input/output node names:   None

   - Use configuration file used to generate the model with Object Detection API:   /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer~/ssd_mobilenet_v1/TPU/tpu.config

   - Operations to offload:   None

   - Patterns to offload:   None

   - Use the config file:   /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json

Model Optimizer version:   2019.2.0-436-gf5827d4

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint8 = np.dtype([("qint8", np.int8, 1)])

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_quint8 = np.dtype([("quint8", np.uint8, 1)])

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint16 = np.dtype([("qint16", np.int16, 1)])

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_quint16 = np.dtype([("quint16", np.uint16, 1)])

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint32 = np.dtype([("qint32", np.int32, 1)])

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 np_resource = np.dtype([("resource", np.ubyte, 1)])

/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.

 from ._conv import register_converters as _register_converters

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint8 = np.dtype([("qint8", np.int8, 1)])

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_quint8 = np.dtype([("quint8", np.uint8, 1)])

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint16 = np.dtype([("qint16", np.int16, 1)])

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_quint16 = np.dtype([("quint16", np.uint16, 1)])

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 _np_qint32 = np.dtype([("qint32", np.int32, 1)])

/usr/local/lib/python3.5/dist-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.

 np_resource = np.dtype([("resource", np.ubyte, 1)])

The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

WARNING: Logging before flag parsing goes to stderr.

E0925 22:41:00.178525 140163210213120 main.py:307] Exception occurred during running replacer "ObjectDetectionAPISSDPostprocessorReplacement" (<class 'extensions.front.tf.ObjectDetectionAPI.ObjectDetectionAPISSDPostprocessorReplacement'>): The matched sub-graph contains network input node "image_tensor".

 For more information please refer to Model Optimizer FAQ (https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html), question #75.

Unfortunately this FAQ is not helping me to go any further; it looks like the node "image_tensor" is not defined in the custom replacement JSON file. Could you please advise?

 

Thanks and regards,

Felipe

Luis_at_Intel
Moderator

Hi Felipe,

 

I apologize for the delay in my response. It sounds like the conversion is having trouble interpreting the image_tensor layer in your model. This may be happening because the custom layer hasn't been implemented in the JSON file, so the Model Optimizer can't process it.

 

My suggestion would be to take a look at the Model Optimizer for R3 and register that custom layer as an extension to the Model Optimizer. In this case you may need to install the latest version and try it out; let me know if you have any questions or if you were able to resolve the issue.
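If the newer Model Optimizer still matches the input node, one thing worth trying (a guess on my part, not a verified fix) is pinning the input explicitly and making sure the JSON variant matches the Object Detection API version the model was exported with, for example:

python3 mo_tf.py --input_model frozen_inference_graph.pb \
    --tensorflow_object_detection_api_pipeline_config pipeline.config \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support_api_v1.14.json \
    --input image_tensor --input_shape [1,300,300,3] \
    --reverse_input_channels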

 

Regards,

@Luis_at_Intel​ 

FPART1
Beginner

Hi @Luis_at_Intel​ 

I've upgraded the MO to R3, but I still get the same error when running mo_tf. Is there any additional JSON I could use?

Regards,

Felipe

Luis_at_Intel
Moderator

Hi Felipe,

 

Would it be possible to share your model so I can test it as well? This way I can see if there are any other suggestions I can think of; you can share your model via Private Message if you don't want to share it publicly.

 

 

Regards,

@Luis_at_Intel​ 

FPART1
Beginner

Hi @Luis_at_Intel,

Thanks, I appreciate it. I've just sent you a message with my storage link.

Regards,

Felipe
