Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Is it possible to use TensorFlow SSD-MobileNet on NCS?

idata
Employee

I'm working on an object detection model and I would like to use the TensorFlow version of SSD-MobileNet. I saw the Caffe version and tried to retrain it, but the results were very poor: after training for 100 hours the mAP was still below 0.03. I tried tweaking the learning rate and the aspect ratios to better suit my dataset (my objects are mostly square), but that didn't help. Then I switched to the TensorFlow Object Detection API to see whether there was a problem in my dataset. However, after training for just 6 hours I already got a mAP of 0.5. The TensorFlow version is also much faster on my machine: 0.6 sec/iteration vs. 2 sec/iteration on Caffe. So the TensorFlow version works much better, and I'd like to use it instead if possible.

 

Is there any way to convert the model so it runs on the NCS? If direct conversion from TensorFlow to the NCS is not possible, would it be possible to convert the model to Caffe format and then to the NCS? Or could I just copy the TensorFlow model weights into the equivalent Caffe model?

idata
Employee

@alex_z Awesome! Thanks!

idata
Employee

Is there any way to get the OpenVINO SDK on a Raspberry Pi?

idata
Employee

@brunden77 please take a look at this thread: https://software.intel.com/en-us/forums/computer-vision/topic/781076

idata
Employee

@alex_z Hi, I tried to use your command to convert an SSD net I trained for detecting heads. Unfortunately, I'm getting a different error.

 

sudo ./mo_tf.py --input_model=/work/22_movidus/ncappzoo/tensorflow/custom_tf/ssd_frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP16

 

It returned

 

[ ERROR ] Failed to determine the pre-processed image size from the original TensorFlow graph. Please, specify "preprocessed_image_width" and "preprocessed_image_height" in the topology replacement configuration file in the "custom_attributes" section of the "PreprocessorReplacement" replacer. This value is defined in the configuration file samples/configs/*.config of the model in the Object Detection model zoo as "min_dimension".

 

So I opened ssd_support.json and added this to the top of the file

 

{ "custom_attributes": { "preprocessed_image_width": 300, "preprocessed_image_height": 300 }, "id" : "PreprocessorReplacement", . . .

 

But now, I'm getting a different error

 

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.). [[Node: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"] (_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1)]]

 

Any clues? Thanks a lot!

idata
Employee

@azmath Hi! Are you using SSD or SSD-MobileNet?

idata
Employee

@alex_z, @WuXinyang

 

Hi all! I am using SSD-MobileNet (TensorFlow) on a Raspberry Pi, but it is so slow that it cannot be used for real-time apps. I hear there is a magic called NCS for fast processing. How can I use that? Just point me in the right direction and give me a GitHub link. Thank you!

idata
Employee

@alex_z ssd_mobilenet_v1_coco. I used the one from the TensorFlow Object Detection model zoo. TF version is 1.8.

idata
Employee

@azmath Which archive file did you use for model training? I use ssd_mobilenet_v1_coco_2017_11_17.tar.gz and TF 1.4.
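
By the way, the "index_type" error in your log usually means the frozen graph was written by a newer TF than the one reading it; the message itself hints at this ("Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary"), and the Fill op only gained the index_type attribute in later TF releases. A quick hedged check of which TF the Model Optimizer will import (adjust the interpreter if you run mo_tf.py inside a virtualenv):

python3 -c "import tensorflow as tf; print(tf.__version__)"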

idata
Employee

@alex_z

 

Thank you

 

OK, so I managed to get it converted. But I am not able to run it using the Inference Engine.

 

/opt/intel/computer_vision_sdk/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/classification_sample -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
[ INFO ] InferenceEngine:
    API version ............ 1.1
    Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
    API version ............ 1.1
    Build .................. lnx_20180510
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
    ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
    ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Incorrect output dimensions for classification model

 

What can be done?

 

Thanks a lot for your time.

idata
Employee

@azmath Try object_detection_demo_ssd_async sample.

idata
Employee

Oh! I had to use object_detection_sample_ssd

 

But I still get this error:

 

../inference_engine/samples/build/intel64/Release/object_detection_sample_ssd -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
[ INFO ] InferenceEngine:
    API version ............ 1.1
    Build .................. 11653
Parsing input parameters
[ INFO ] Loading plugin
    API version ............ 1.1
    Build .................. lnx_20180510
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
    ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
    ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ ERROR ] Supported primitive descriptors list is empty for node: Postprocessor/convert_scores

 

Can anyone help?

idata
Employee

@alex_z

 

OK, will try.

idata
Employee

@alex_z

 

/opt/intel/computer_vision_sdk/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d CPU -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
    API version ............ 1.1
    Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
    API version ............ 1.1
    Build .................. lnx_20180510
    Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the sample expects
[ INFO ] Checking that the outputs are as the sample expects
[ INFO ] Loading model to the plugin
[ ERROR ] Supported primitive descriptors list is empty for node: Postprocessor/convert_scores

 

Same issue

idata
Employee

@azmath If you use "--data_type FP16" for converting your model, the resulting IR does not work on CPU. Try to ask for help here: https://software.intel.com/en-us/forums/computer-vision.
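
If you want to check the model on CPU first, you could regenerate the IR in FP32, which is the Model Optimizer default. A sketch reusing the mo_tf.py command from earlier in this thread, with only the data type changed:

sudo ./mo_tf.py --input_model=/work/22_movidus/ncappzoo/tensorflow/custom_tf/ssd_frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP32

The FP16 IR is still the right one for the NCS; only the CPU plugin needs FP32.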

idata
Employee

@alex_z Thanks,

 

I am trying to run this on the NCS as well. When I use -d Myriad I get

 

../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d Myriad -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
    API version ............ 1.1
    Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
[ ERROR ] Cannot find plugin for device: Default

idata
Employee

@azmath try -d MYRIAD
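
The device name appears to be case-sensitive, so the same command with the device upper-cased should load the Myriad plugin:

../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d MYRIAD -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0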

idata
Employee

@alex_z Thanks, man. I got it working. But the newer SSD-MobileNet v1, published in 2018, gives

 

/opt/intel/computer_vision_sdk_2018.2.300/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d "MYRIAD" -m ./ir/ssdmobilenet16/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
    API version ............ 1.1
    Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
    API version ............ 1.1
    Build .................. 11653
    Description ....... myriadPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the sample expects
[ INFO ] Checking that the outputs are as the sample expects
[ INFO ] Loading model to the plugin
[ ERROR ] [VPU] Unsupported activation type

 

The version that you suggested, ssdmobilenet_2017, works. It seems Intel is just playing catch-up right now. Nothing works reliably yet.

 

I am looking for benchmarks of all the TF object detection models listed here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md


 

If anyone could share their observations on the NCS it would be nice.

idata
Employee

@azmath I have the same error with the IR converted from the frozen_inference_graph contained in ssd_mobilenet_v1_coco_2018_03_29. I think Google made minor updates to this model.

idata
Employee

@WuXinyang

 

Could you please share your solution for the issue you had? I ran

 

python3 mo_tf.py --input_model /home/wuxy/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --output_dir ~/models_VINO

 

and it returned this error: [ ERROR ] Graph contains a cycle.

 

Thank you.

idata
Employee

I tried

 

python mo_tf.py --input_model "C:\Users\Tolotra Samuel\PycharmProjects\tensorflow_object_detection\object_detection\inference_graph\frozen_inference_graph.pb" --output_dir ./output_dir --output "detection_boxes,detection_scores,num_detections"

 

But it gives me the error:

 

 

[ ERROR ] Graph contains a cycle. Can not proceed.
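
A hedged guess, based on the working invocation earlier in this thread: both "Graph contains a cycle" commands above run mo_tf.py on a TensorFlow Object Detection API model without the SSD replacement config. A sketch of the same command with the config added (the ssd_support.json path is relative to the Model Optimizer directory):

python mo_tf.py --input_model "C:\Users\Tolotra Samuel\PycharmProjects\tensorflow_object_detection\object_detection\inference_graph\frozen_inference_graph.pb" --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output "detection_boxes,detection_scores,num_detections" --output_dir ./output_dir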

 

idata
Employee

@alex_z Okay. I managed to convert it into frozen_inference_graph.bin. Now how do I run it on the Movidius with a Raspberry Pi? Do I need to compile it with mvNCCompile?
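
For reference: mvNCCompile belongs to the older NCSDK toolchain and is not used with OpenVINO; the .xml/.bin IR pair is loaded directly by the Inference Engine MYRIAD plugin, assuming an OpenVINO build is available on the Pi (see the thread linked earlier). A sketch reusing the demo invocation from earlier in this thread (paths on the Pi will differ):

./object_detection_demo_ssd_async -d MYRIAD -m frozen_inference_graph.xml -i /dev/video0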
