Hi,
I am trying to run inference on MobileNet v1 SSD COCO downloaded from the TensorFlow website. I converted the model to IR using the following command.
sudo /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_meta_graph /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/model.ckpt.meta --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/pipeline.config --data_type half --output_dir /home/user1/Desktop/Anakha/Mobilenetv1_SSD --model_name ssd_mobilenet_v1
This command generated the .xml and .bin files successfully.
However, while running inference I get the following error.
[ INFO ] InferenceEngine:
API version ............ 2.0
Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/user1/Desktop/IntelNCS/car_1.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
MYRIAD
myriadPlugin version ......... 2.0
Build ........... 27579
[ INFO ] Loading network files:
/home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.xml
/home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ ERROR ] AssertionFailed: !ieDims.empty()
======================================================
The command used for inference:
./object_detection_sample_ssd -i /home/user1/Desktop/IntelNCS/car_1.bmp -m /home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.xml -d MYRIAD
The object_detection_sample_ssd is located in the /home/user1/inference_engine_samples_build/intel64/Release folder.
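For context, the flow the sample performs (read the IR, load it onto the MYRIAD device, run inference on the image) would look roughly like this in the Python API; this is just my own sketch with the same paths, not the actual sample code:

    import cv2
    from openvino.inference_engine import IECore, IENetwork

    model_xml = "/home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.xml"
    model_bin = "/home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.bin"

    ie = IECore()
    net = IENetwork(model=model_xml, weights=model_bin)      # parse the IR
    input_blob = next(iter(net.inputs))
    out_blob = next(iter(net.outputs))
    n, c, h, w = net.inputs[input_blob].shape                # SSD MobileNet v1 expects 1x3x300x300

    # "Loading model to the device" step -- this is where the error appears for me
    exec_net = ie.load_network(network=net, device_name="MYRIAD")

    image = cv2.imread("/home/user1/Desktop/IntelNCS/car_1.bmp")
    blob = cv2.resize(image, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))  # HWC -> NCHW

    res = exec_net.infer(inputs={input_blob: blob})
    detections = res[out_blob]   # shape [1, 1, N, 7]: image_id, label, conf, xmin, ymin, xmax, ymax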
Kindly let me know if I am doing something wrong with the inference command. Looking forward to your help.
Dear V B, Anakha
Please select your MobileNet V1 SSD from the TensorFlow supported list. For instance, I selected ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03. From there, I ran the command:
python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config pipeline.config
This successfully generates the IR.
Thanks for upgrading to OpenVINO R2, by the way!
Shubha
Hi Shubha,
Thanks for the suggestion. In fact, I had downloaded one of the supported frozen topologies from this link:
ssd_mobilenet_v1_coco_2018_01_28.tar.gz
However, I had used a different set of options, passing --input_meta_graph /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/model.ckpt.meta instead of --input_model frozen_inference_graph.pb to generate the .xml and .bin files.
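For anyone else who hits this, the conversion that works is essentially my original command, but pointing --input_model at the frozen graph (and using ssd_v2_support.json as in Shubha's command for this 2018 model); roughly:

    sudo /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/pipeline.config --data_type half --output_dir /home/user1/Desktop/Anakha/Mobilenetv1_SSD --model_name ssd_mobilenet_v1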
Now I am able to get it working!
Thanks a lot!
Dear V B, Anakha,
Glad it works for you now!
Thanks,
Shubha