I'm working with an object detection model and I would like to use the TensorFlow version of SSD-MobileNet. I saw the Caffe version and tried to retrain it, but the results were very poor: after training for 100 hours the mAP was still less than 0.03. I tried to tweak the learning rate and aspect ratios to better suit my dataset (my objects are mostly squares), but that didn't help. Then I switched to the TensorFlow Object Detection API to see if there was a problem in my dataset. However, after training for just 6 hours I already got a mAP of 0.5. The TensorFlow version is also much faster on my machine: 0.6 sec/iteration vs. 2 sec/iteration on Caffe. So the TensorFlow version works much better, and I'd like to use that instead if possible.
Is there any way to convert the model to NCS? And if direct conversion from TensorFlow to NCS is not possible, would it be possible to convert the model to Caffe format and then to NCS? Or could I just copy the TensorFlow model weights to the equivalent Caffe model?
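For context, the aspect-ratio tweak I mentioned lives in the anchor generator section of the TF Object Detection API pipeline .config. A rough sketch of what I changed, based on samples/configs/ssd_mobilenet_v1_coco.config (keeping only the square 1.0 ratio; the scale values are just whatever the base config uses):

anchor_generator {
  ssd_anchor_generator {
    num_layers: 6
    min_scale: 0.2
    max_scale: 0.95
    aspect_ratios: 1.0    # squares only; the stock config also lists 2.0, 0.5, 3.0, 0.3333
  }
}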
@alex_z Hi, I tried to use your command to convert an SSD net I trained for detecting heads. Unfortunately, I'm getting a different error.
sudo ./mo_tf.py --input_model=/work/22_movidus/ncappzoo/tensorflow/custom_tf/ssd_frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP16
It returned
[ ERROR ] Failed to determine the pre-processed image size from the original TensorFlow graph. Please, specify "preprocessed_image_width" and "preprocessed_image_height" in the topology replacement configuration file in the "custom_attributes" section of the "PreprocessorReplacement" replacer. This value is defined in the configuration file samples/configs/*.config of the model in the Object Detection model zoo as "min_dimension".
So I opened ssd_support.json and added this to the top of the file
{
    "custom_attributes": {
        "preprocessed_image_width": 300,
        "preprocessed_image_height": 300
    },
    "id": "PreprocessorReplacement",
    ...
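(For reference, the 300x300 size comes from the image_resizer block of the model's pipeline config; for SSD-MobileNet in samples/configs/ssd_mobilenet_v1_coco.config it should look like this, if I remember right:)

image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}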
But now I'm getting a different error:
InvalidArgumentError (see above for traceback): NodeDef mentions attr 'index_type' not in Op<name=Fill; signature=dims:int32, value:T -> output:T; attr=T:type>; NodeDef: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
[[Node: MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones = Fill[T=DT_INT32, index_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"]
(_arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/Reshape_port_0_ie_placeholder_0_0, _arg_MultipleGridAnchorGenerator/Meshgrid_4/ExpandedShape_1/ones/Const_port_0_ie_placeholder_0_1)]]
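I suspect this is a TensorFlow version mismatch: the frozen graph was exported with a newer TF than the one the Model Optimizer loads it with, and the 'index_type' attribute of the Fill op only exists in later TF releases, so the older runtime rejects the NodeDef. A quick check I'm going to try (assuming both environments are reachable from the shell):

python3 -c "import tensorflow as tf; print(tf.__version__)"   # run this in the training env and in the Model Optimizer env

If the versions differ, re-exporting the frozen graph with the same TF version the Model Optimizer uses should make the NodeDef attributes match.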
Any clues? Thanks a lot!
@alex_z, @WuXinyang
Hi all! I am using SSD-MobileNet (TensorFlow) on a Raspberry Pi, but it is so slow that it cannot be used for real-time apps. I hear there is something called the NCS for fast processing. How can I use it? Just point me in the right direction and give a GitHub link. Thank you!
@alex_z
Thank you
OK, so I managed to get it converted, but I am not able to run it using the inference engine.
/opt/intel/computer_vision_sdk/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/classification_sample -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
[ INFO ] InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (787, 259) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ ERROR ] Incorrect output dimensions for classification model
What could be done?
Thanks a lot for your time.
Oh! I had to use object_detection_sample_ssd (the classification sample expects a classification output, not detection boxes).
But I still get this error:
../inference_engine/samples/build/intel64/Release/object_detection_sample_ssd -d CPU -i car.png -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
[ INFO ] InferenceEngine:
API version ............ 1.1
Build .................. 11653
Parsing input parameters
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml
./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ ERROR ] Supported primitive descriptors list is empty for node: Postprocessor/convert_scores
Can anyone help?
@alex_z
/opt/intel/computer_vision_sdk/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d CPU -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. lnx_20180510
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the sample expects
[ INFO ] Checking that the outputs are as the sample expects
[ INFO ] Loading model to the plugin
[ ERROR ] Supported primitive descriptors list is empty for node: Postprocessor/convert_scores
Same issue
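Could it be that the IR was generated with --data_type FP16? As far as I know, the CPU (MKLDNN) plugin only runs FP32 IRs, so an FP16 Convert node like Postprocessor/convert_scores would have no CPU implementation. A sketch of the reconversion I'd try, reusing the conversion command from earlier in this thread but leaving the data type at its FP32 default:

sudo ./mo_tf.py --input_model=/work/22_movidus/ncappzoo/tensorflow/custom_tf/ssd_frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections" --data_type FP32

(FP16 should still be the right choice for the MYRIAD plugin; the two plugins would need separate IRs.)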
@alex_z Thanks,
I am trying to run this on the NCS as well. When I use -d Myriad
I get
../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d Myriad -m ./ir/ssdmobilenet/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
[ ERROR ] Cannot find plugin for device: Default
@alex_z Thanks, man. I got it working (the device name has to be the uppercase "MYRIAD"). But the newer ssd_mobilenet_v1 published in 2018 gives:
/opt/intel/computer_vision_sdk_2018.2.300/deployment_tools/demo$ ../inference_engine/samples/build/intel64/Release/object_detection_demo_ssd_async -d "MYRIAD" -m ./ir/ssdmobilenet16/ssdmobilenet_frozen_inference_graph.xml -i /dev/video0
InferenceEngine:
API version ............ 1.1
Build .................. 11653
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading plugin
API version ............ 1.1
Build .................. 11653
Description ....... myriadPlugin
[ INFO ] Loading network files
[ INFO ] Batch size is forced to 1.
[ INFO ] Checking that the inputs are as the sample expects
[ INFO ] Checking that the outputs are as the sample expects
[ INFO ] Loading model to the plugin
[ ERROR ] [VPU] Unsupported activation type
The version that you suggested, ssdmobilenet_2017, works. It means Intel is just playing catch-up right now; nothing seems to work reliably.
I am looking for benchmarks of all the TF object detection models listed here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
If anyone could share their observations on the NCS it would be nice.
@WuXinyang
Could you please share your solution for the issue you had? I ran
python3 mo_tf.py --input_model /home/wuxy/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --output_dir ~/models_VINO
and it returns some errors: [ ERROR ] Graph contains a cycle.
Thank you.
I tried
python mo_tf.py --input_model "C:\Users\Tolotra Samuel\PycharmProjects\tensorflow_object_detection\object_detection\inference_graph\frozen_inference_graph.pb" --output_dir ./output_dir --output "detection_boxes,detection_scores,num_detections"
But it gives me the error:
[ ERROR ] Graph contains a cycle. Can not proceed.
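I notice that both commands above omit the --tensorflow_use_custom_operations_config flag that the working conversion earlier in this thread used. If I understand correctly, without the ssd_support.json replacement the SSD postprocessing loop appears to the Model Optimizer as a cycle in the graph, which it cannot handle. I will retry with something like this (paths are placeholders; ssd_support.json ships with the Model Optimizer under extensions/front/tf):

python mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output "detection_boxes,detection_scores,num_detections" --output_dir ./output_dir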
