Good morning,
I have a follow-up to the question I posted here:
https://ncsforum.movidius.com/discussion/1768/ncs2-with-custom-tensorflow-model
Aroop_at_Intel helped me with the model I provided. The solution he suggested was very helpful and I was able to run inference on the NCS.
However, I provided the wrong model. I meant to provide a TensorFlow model from the model zoo, but instead provided a model I had downloaded with the OpenVINO model_downloader.
Aroop_at_Intel's solution works with the model from the model_downloader, but not with the TensorFlow model from the model zoo.
I might have to change something else in "ssd_v2_support.json", but I am not quite sure what.
If it helps, my model from the model zoo is available here:
https://drive.google.com/open?id=12YHB0Bes6egGSR0ml9QjLilioCpgrbyl
Thank you for your help.
Hi RRein6,
Could you confirm that this model is SSD_mobilenet_v2 from the model zoo? I took a look at your pipeline.config file and it shows "faster_rcnn_inception_v2". It would be great if you could provide a link to the model zoo page where you downloaded the model.
Also, did you retrain the model or are you using a pre-trained model?
Regards,
Aroop
Hi Aroop_at_Intel,
I am very sorry, I uploaded the wrong model. The faster_rcnn_inception_v2 is another model I would like to convert.
I updated the files in my Google Drive. There are now two directories: one contains a retrained ssdlite_mobilenet_v2_coco model and the other contains a retrained faster_rcnn_inception_v2_coco model.
I downloaded both models from the TensorFlow model zoo.
Regards,
RRein6
Hello RRein6,
Attached is the tf_obj_det_jsons.zip file. After you download and extract it, move the faster_rcnn_support_api_v1.13.json file into your "C:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\" directory.
Then try running your original command with the following modifications:
replace
C:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json
with
C:\Intel\computer_vision_sdk\deployment_tools\model_optimizer\extensions\front\tf\faster_rcnn_support_api_v1.13.json
Regards,
Aroop
Hello Aroop_at_Intel,
Thank you for the file. The conversion worked like a charm, but now I can't load the IR files into the Neural Compute Stick.
I uploaded the Python script I use to load the IR files, as well as the .xml and .bin files. It stops every time after log.info("Loading model to the plugin").
Can you please help me here?
On another note: how do I know which parameters I have to change in the .json file so I can convert my model?
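For context, the loading step that hangs corresponds roughly to this sketch using the 2019-era Inference Engine Python API; the helper function and file names here are illustrative, not the exact script I uploaded:

```python
import time

def bin_path_for(xml_path):
    # The Model Optimizer writes the weights (.bin) next to the
    # topology file (.xml), so one path can be derived from the other.
    if xml_path.endswith(".xml"):
        return xml_path[:-4] + ".bin"
    return xml_path + ".bin"

def load_to_ncs(xml_path, device="MYRIAD"):
    # Deferred import so the helper above is usable even without
    # OpenVINO installed.
    from openvino.inference_engine import IECore, IENetwork

    ie = IECore()
    net = IENetwork(model=xml_path, weights=bin_path_for(xml_path))
    start = time.time()
    # This is the step that appears to hang; timing it helps tell a
    # slow load apart from a real failure.
    exec_net = ie.load_network(network=net, device_name=device)
    print("Loaded to %s in %.1f s" % (device, time.time() - start))
    return exec_net

# Example: exec_net = load_to_ncs("frozen_inference_graph.xml")
```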
Regards,
RRein6
Hello RRein6,
Can you try to convert your model again with the following Model Optimizer command:
python3 mo_tf.py --input_model <Path to frozen_inference_graph.pb> --tensorflow_use_custom_operations_config <Path to faster_rcnn_support_api_v1.13.json> --tensorflow_object_detection_api_pipeline_config <Path to pipeline.config> --reverse_input_channels --data_type FP16
Let me know if this makes any difference.
Regards,
Aroop
Hello Aroop_at_Intel,
I tried the command you suggested. The new files were created, but the result is the same: I still can't load the model to the NCS.
Regards,
RRein6
Hello RRein6,
Do you get any errors, or does the code freeze after "log.info("Loading model to the plugin")"? I noticed the model took about 8.5 minutes to load to the NCS on my system. If you see the same behavior, it may be a bug.
Could you confirm that you took faster_rcnn_inception_v2_coco from the model zoo and retrained it with your own dataset?
Regards,
Aroop
Hello Aroop_at_Intel,
I don't get any errors after the log.info line. I let the model load for 6 hours and it still did not finish.
I downloaded the exact same model you linked to in your reply.
Regards,
RRein6
Hi RRein6,
We couldn't get your Object_Detection_FasterRCNN.py file to work, but we were successful using one of our sample files.
First, make sure you upgrade to the latest release of OpenVINO (and be sure to rebuild the samples). We just released version 2019 R2 yesterday.
This is the command that we used to convert your model:
sudo python3 mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config faster_rcnn_support_api_v1.13.json --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --data_type FP16 -b 1
This is the command that we used to run inference with the resulting IR files (the object_detection_sample_ssd binary is located in the directory where you built the samples):
./object_detection_sample_ssd -m frozen_inference_graph.xml -i ABild_\(1671\).JPG -d MYRIAD
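If you want to script both steps, something like this sketch works; the paths are placeholders, so point them at your own OpenVINO install and model files:

```python
import subprocess

# Placeholder paths; adjust to your OpenVINO install and model directory.
MO_TF = "mo_tf.py"
SAMPLE = "./object_detection_sample_ssd"

def mo_command(frozen_pb, json_cfg, pipeline_cfg):
    # Mirrors the conversion command above: FP16 is what the Myriad VPU
    # expects, --reverse_input_channels swaps RGB to BGR, and -b 1 fixes
    # the batch size.
    return ["python3", MO_TF,
            "--input_model", frozen_pb,
            "--tensorflow_use_custom_operations_config", json_cfg,
            "--tensorflow_object_detection_api_pipeline_config", pipeline_cfg,
            "--reverse_input_channels", "--data_type", "FP16", "-b", "1"]

def sample_command(xml_path, image, device="MYRIAD"):
    # Mirrors the inference command above, run against the generated IR.
    return [SAMPLE, "-m", xml_path, "-i", image, "-d", device]

# Example (uncomment to actually run both steps):
# subprocess.run(mo_command("frozen_inference_graph.pb",
#                           "faster_rcnn_support_api_v1.13.json",
#                           "pipeline.config"), check=True)
# subprocess.run(sample_command("frozen_inference_graph.xml",
#                               "ABild_(1671).JPG"), check=True)
```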
Regards,
Aroop
