Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Inference on Mobilenet v1 SSD using the converted model fails with error

V_B__Anakha
Beginner

Hi,

I am trying to run inference on MobileNet v1 SSD (COCO) downloaded from the TensorFlow website. I converted the frozen model to IR using the following command.

sudo /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_meta_graph /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/model.ckpt.meta --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/pipeline.config  --data_type half --output_dir /home/user1/Desktop/Anakha/Mobilenetv1_SSD --model_name ssd_mobilenet_v1

 

This command generated the .xml and .bin files successfully.

However, while running inference, I get the following error.

[ INFO ] InferenceEngine:
    API version ............ 2.0
    Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
    Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /home/user1/Desktop/IntelNCS/car_1.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
    MYRIAD
    myriadPlugin version ......... 2.0
    Build ........... 27579
[ INFO ] Loading network files:
    /home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.xml
    /home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.bin
[ INFO ] Preparing input blobs

[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ ERROR ] AssertionFailed: !ieDims.empty()

 

======================================================

The command used for inference:

./object_detection_sample_ssd -i /home/user1/Desktop/IntelNCS/car_1.bmp -m /home/user1/Desktop/Anakha/Mobilenetv1_SSD/ssd_mobilenet_v1.xml -d MYRIAD

The object_detection_sample_ssd is located in the /home/user1/inference_engine_samples_build/intel64/Release folder.

Kindly let me know if I am doing something wrong with the inference command. Looking forward to your help.

Shubha_R_Intel
Employee

Dear V B, Anakha

Please select your MobileNet V1 SSD from the TensorFlow supported list. For instance, I selected ssd_mobilenet_v1_ppn_shared_box_predictor_300x300_coco14_sync_2018_07_03. From there, I ran the command:

python "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\mo_tf.py" --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.2.242\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json" --tensorflow_object_detection_api_pipeline_config pipeline.config

This command successfully generates the IR.
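On Linux, the equivalent should look roughly like the line below (paths assume the default 2019 R2 install under /opt/intel and that the command is run from the extracted model directory; adjust to your setup):

python3 /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config pipeline.config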

Thanks for upgrading to OpenVINO R2, by the way!

Shubha

 

V_B__Anakha
Beginner

Hi Shubha, 

 

Thanks for the suggestion. In fact, I had downloaded one of the frozen supported topologies from this link:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_TensorFlow.html

ssd_mobilenet_v1_coco_2018_01_28.tar.gz

However, I had used a different conversion option, --input_meta_graph /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/model.ckpt.meta, instead of --input_model frozen_inference_graph.pb, to generate the .xml and .bin files.
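For reference, the conversion command that ended up working was along these lines (same local paths as before, just pointing --input_model at the frozen graph instead of the meta graph; whether ssd_support.json or ssd_v2_support.json is needed depends on the model version, as in Shubha's command above):

sudo /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/frozen_inference_graph.pb --tensorflow_use_custom_operations_config /opt/intel/openvino_2019.2.242/deployment_tools/model_optimizer/extensions/front/tf/ssd_support.json --tensorflow_object_detection_api_pipeline_config /home/user1/Desktop/IntelNCS/Mobilenetv1_SSD/pipeline.config --data_type half --output_dir /home/user1/Desktop/Anakha/Mobilenetv1_SSD --model_name ssd_mobilenet_v1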

Now I am able to get it working!

Thanks a lot!


Shubha_R_Intel
Employee

Dear V B, Anakha,

Glad it works for you now!

Thanks,

Shubha
