
Using Faster RCNN Sample

#Informative snippet for developers

The following instructions demonstrate how to run the Faster RCNN sample on Ubuntu with the Intel Distribution of OpenVINO 2019 R3.1:

wget http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar....

tar -xvf faster_rcnn_inception_v2_coco_2018_01_28.tar.gz

cd faster_rcnn_inception_v2_coco_2018_01_28/

source /opt/intel/openvino/bin/setupvars.sh

python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
--input_model frozen_inference_graph.pb \
--reverse_input_channels \
--tensorflow_object_detection_api_pipeline_config pipeline.config \
--output detection_boxes,detection_scores,num_detections \
--tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/faster_rcnn_support.json

/home/administrator/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd \
-i /opt/intel/openvino/deployment_tools/demo/car_1.bmp \
-m frozen_inference_graph.xml

Output:

[ INFO ] InferenceEngine:
        API version ............ 2.1
        Build .................. custom_releases/2019/R3_89514d4cb7820ea78101bb4892dd8f16f5082916
        Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     /opt/intel/openvino/deployment_tools/demo/car_1.bmp
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        CPU
        MKLDNNPlugin version ......... 2.1
        Build ........... 30886
[ INFO ] Loading network files:
        frozen_inference_graph.xml
        frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
[ INFO ] Create infer request
[ WARNING ] Image is resized from (749, 637) to (600, 600)
[ INFO ] Batch size is 1
[ INFO ] Start inference
[ INFO ] Processing output blobs
[0,3] element, prob = 0.838253    (222,117)-(514,477) batch id : 0 WILL BE PRINTED!
[1,6] element, prob = 0.515723    (642,223)-(749,600) batch id : 0 WILL BE PRINTED!
[2,8] element, prob = 0.724386    (641,168)-(746,625) batch id : 0 WILL BE PRINTED!
[3,8] element, prob = 0.360326    (239,116)-(512,454) batch id : 0
[ INFO ] Image out_0.bmp created!
[ INFO ] Execution successful
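As context for the detections printed above: the sample reads a detection output blob in which each detection is seven floats, [image_id, class_label, confidence, x_min, y_min, x_max, y_max], with box coordinates normalized to [0, 1] and scaled to the input image size before printing. A minimal Python sketch of that post-processing (the 0.5 confidence threshold mirrors the sample's print behavior, and the example blob below is made-up illustration data, not taken from the run above):

```python
# Decode an SSD/Faster-RCNN-style detection output blob.
# Each detection is 7 floats:
#   [image_id, class_label, confidence, x_min, y_min, x_max, y_max]
# with box coordinates normalized to [0, 1].

def decode_detections(blob, image_w, image_h, threshold=0.5):
    """Return (label, confidence, pixel box) tuples above the threshold."""
    results = []
    for i in range(0, len(blob), 7):
        image_id, label, conf, x0, y0, x1, y1 = blob[i:i + 7]
        if image_id < 0:        # a negative image_id marks the end of valid detections
            break
        if conf >= threshold:
            box = (int(x0 * image_w), int(y0 * image_h),
                   int(x1 * image_w), int(y1 * image_h))
            results.append((int(label), conf, box))
    return results

# Hypothetical blob with two detections (the second falls below the threshold).
blob = [0, 3, 0.84, 0.30, 0.20, 0.70, 0.75,
        0, 8, 0.36, 0.31, 0.19, 0.68, 0.71,
        -1, 0, 0, 0, 0, 0, 0]
print(decode_detections(blob, 749, 637))
# → [(3, 0.84, (224, 127, 524, 477))]
```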

JesusE_Intel
Moderator

Hi Kumar,

Thank you for sharing the information with the developer community! I would like to add that we also have the Object Detection Faster RCNN C++ Demo available.

Regards,

Jesus

Ali1
Beginner

Hello Kumar, 

 

Thank you for sharing this. I have been looking everywhere for a way to get object_detection_sample_ssd.py to work with Faster RCNN models, or for another Python sample script that supports them.

I followed your steps and hit an error when trying to use the script for object detection. The model optimisation went fine, but inference behaved strangely: the first time, it completed without an error yet produced no out.bmp; the second time, I got the errors shown in the file attached below.

 

I would appreciate it if you could let me know where I went wrong, or whether there is another Python script I can use with Faster RCNN model types. I found the C++ implementation, but I need a Python script for my application: I am trying to use a Raspberry Pi for real-time object detection, and a Python script would make it easier to use the camera module for the video stream.

P.S. My current OpenVINO version is 2020.1.023, installed on an Ubuntu 16.04 LTS virtual machine.
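One common stumbling block when moving this model to the Python API: Faster RCNN IRs converted with faster_rcnn_support.json typically expose a second input (often named image_info) holding [resized_height, resized_width, scale], which the C++ SSD sample fills but a plain Python SSD script may not. The snippet below is only a sketch of the preprocessing such a script would need; the input layout (HWC to CHW) and the image_info contents are assumptions about a typical converted IR, not details confirmed in this thread:

```python
# Preprocessing sketch for a two-input Faster RCNN IR:
# the main input expects a CHW image, while the auxiliary
# "image_info" input expects [resized_height, resized_width, scale].

def hwc_to_chw(image):
    """Reorder a nested-list HWC image into CHW layout."""
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] for x in range(w)] for y in range(h)]
            for ch in range(c)]

def make_image_info(height, width, scale=1.0):
    """Build the auxiliary input vector the IR's second blob expects."""
    return [height, width, scale]

# Tiny 2x2 RGB "image" to show the reorder.
img = [[[1, 2, 3], [4, 5, 6]],
       [[7, 8, 9], [10, 11, 12]]]
chw = hwc_to_chw(img)
print(chw[0])                      # → [[1, 4], [7, 10]] (red channel plane)
print(make_image_info(600, 600))   # → [600, 600, 1.0]
```

In an actual script, both blobs would be passed together in the infer call, keyed by the network's input names.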

Ali1
Beginner

Hello @HemanthKum_G_Intel @JesusE_Intel , 

 

I sent an enquiry about two weeks ago but have not received a response. Is there anything wrong with my implementation/request, or is this forum no longer monitored?

I would appreciate any feedback you could provide.

Thank you,

Kind regards,

Mahgoub
