I'm working with an object detection model and I would like to use the TensorFlow version of SSD-MobileNet. I saw the Caffe version and tried to retrain it, but the results were very poor: after training for 100 hours the mAP was still below 0.03. I tried tweaking the learning rate and the aspect ratios to better suit my dataset (my objects are mostly square), but that didn't help. I then switched to the TensorFlow Object Detection API to check whether there was a problem in my dataset, and after training for just 6 hours I already got a mAP of 0.5. The TensorFlow version is also much faster on my machine: 0.6 sec/iteration vs. 2 sec/iteration on Caffe. So the TensorFlow version works much better, and I'd like to use it instead if possible.
Is there any way to convert the model to NCS? And if direct conversion from TensorFlow to NCS is not possible, would it be possible to convert the model to Caffe format and then to NCS? Or could I just copy the TensorFlow model weights to the equivalent Caffe model?
@manto @djaenicke @owlie We apologize, but the current NCSDK (2.04.00.06) doesn't support SSD MobileNet on TensorFlow yet. As you mentioned, we do support SSD MobileNet on Caffe as an alternative.
As for re-training SSD MobileNet on Caffe, you can try using https://github.com/listenlink/caffe/tree/ssd for more efficient depthwise separable (DWS) convolution with CUDNN 9.
You can try the Intel OpenVINO™ toolkit. It supports inference of SSD MobileNet models from the TensorFlow Object Detection model zoo on the NCS using the Myriad plugin.
@WuXinyang Hi! Sorry for the delayed response. Download and install the latest version of the OpenVINO toolkit (https://software.intel.com/en-us/openvino-toolkit/choose-download). Inside the installation folder you will find C++/Python samples and several pre-trained models. Both Windows and Linux are supported. The main idea is the same as with the Movidius SDK: you convert a trained model into the Intermediate Representation (IR) format using the Model Optimizer, and then the Inference Engine reads, loads, and infers the IR on different devices, such as the CPU, Intel GPU, or the Myriad 2 VPU.
@alex_z Hi, thanks for your reply! In fact I already successfully set up the OpenVINO SDK and used it to convert one trained SSD object detection model, but I ran into some problems. Have you ever tried any models on the NCS with OpenVINO?
@WuXinyang Yes, I have converted the ssd_mobilenet_v1_coco model from the TensorFlow detection model zoo, as well as a custom-trained model based on SSD-MobileNet v1 that I previously used with the OpenCV DNN module. Both models then ran on the NCS successfully.
@alex_z OMG, amazing! Would you mind giving some instructions on how to do it? Maybe you could post them on your blog. I'm sure many people want to make TensorFlow SSD models work on the NCS!
@alex_z I just set up the SDK and tried some sample applications, but I don't know how to compile the TensorFlow model into their IR format. And after the conversion, I guess I need to use some API in my code like:
auto netBuilder = new InferenceEngine::CNNNetReader();
Is my understanding right?
Hi, the command I use to convert the TF model is the following:
python3 mo_tf.py --input_model /home/wuxy/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --output_dir ~/models_VINO
and it returns an error: [ ERROR ] Graph contains a cycle. Can not proceed.
Can you please tell me how to make it work?
@WuXinyang Try the following:
./mo_tf.py --input_model=<path_to_frozen_inference_graph.pb> --tensorflow_use_custom_operations_config extensions/front/tf/ssd_support.json --output="detection_boxes,detection_scores,num_detections"
@alex_z For use on the NCS, I needed to add one flag to your command: --data_type FP16,
since the MYRIAD plugin does not support FP32 and the converted models are FP32 by default.
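Putting the pieces from this thread together, here is a small Python sketch that assembles the full Model Optimizer invocation, including the FP16 flag. The paths (mo_tf.py location, frozen graph, output directory) are placeholders you would replace with your own; the script only prints the command rather than running it:

```python
import shlex

# Placeholder paths -- substitute your own OpenVINO install and frozen graph.
mo_script = "mo_tf.py"
frozen_graph = "frozen_inference_graph.pb"

cmd = [
    "python3", mo_script,
    "--input_model", frozen_graph,
    # JSON config shipped with the Model Optimizer for SSD topologies
    "--tensorflow_use_custom_operations_config",
    "extensions/front/tf/ssd_support.json",
    # Cut the graph at the detection outputs to avoid the cycle error
    "--output", "detection_boxes,detection_scores,num_detections",
    # The MYRIAD plugin only supports FP16, so convert the weights
    "--data_type", "FP16",
    "--output_dir", "models_VINO",
]

# Print the command instead of executing it; once the paths are real,
# the same list can be handed to subprocess.run(cmd).
print(" ".join(shlex.quote(c) for c in cmd))
```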
Thanks again for your pointers! I had been searching for a long time for a way to make TensorFlow object detection models run on the NCS!
@alex_z Great to hear that it's possible to run TF object detection models on the NCS. What kind of FPS do you get, or what is the inference time? For example with the ssd_mobilenet_v1_coco model.
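Once a model is running, a rough way to answer the FPS question yourself is to time the inference loop. Here is a minimal sketch; `infer_stub` is a hypothetical stand-in for the real Inference Engine call, which you would replace with your own:

```python
import time

def infer_stub(frame):
    """Stand-in for the real inference call on the NCS."""
    time.sleep(0.01)  # pretend inference takes ~10 ms
    return []

# Average over several frames so one-off overheads don't dominate.
n_frames = 20
start = time.perf_counter()
for _ in range(n_frames):
    infer_stub(frame=None)
elapsed = time.perf_counter() - start

per_frame = elapsed / n_frames
print(f"avg inference time: {per_frame * 1000:.1f} ms -> {1.0 / per_frame:.1f} FPS")
```

In practice you would also skip the first iteration or two, since the initial inference on the NCS includes one-time setup cost.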