Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

object_detection_demo_faster_rcnn fails with TF model "faster_rcnn_inception_v2_coco_2018_01_28"

BSung8
New Contributor I

The demo "object_detection_demo_faster_rcnn" fails with the TF model "faster_rcnn_inception_v2_coco_2018_01_28", but another demo, "object_detection_demo_ssd_async", works fine with the same model. Any comments?

Output from both runs is below: first the working object_detection_demo_ssd_async run, then the failing object_detection_demo_faster_rcnn run with its error.

InferenceEngine: 00007FF82808B740
[ INFO ] Parsing input parameters
[ INFO ] Reading input
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        GPU
        clDNNPlugin version ......... 2.1
        Build ........... 37988
[ INFO ] Loading network files
[ INFO ] Batch size is forced to  1.
[ INFO ] Checking that the inputs are as the demo expects
[ INFO ] Checking that the outputs are as the demo expects
[ INFO ] Loading model to the device
[ INFO ] Start inference
To close the application, press 'CTRL+C' here or switch to the output window and press ESC key
To switch between sync/async modes, press TAB key in the output window
Total Inference time: 577152

[ INFO ] Execution successful

C:\Users\Intel NUC\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release>object_detection_demo_faster_rcnn.exe -i C:\test_clip\wt.mp4 -m C:\openvino_models\public\faster_rcnn_inception_v2_coco_2018_01_28_WT\IR\FP16_v1.13\frozen_inference_graph.xml -d GPU
[ INFO ] InferenceEngine: 00007FF8259DB740
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     C:\test_clip\wt.mp4
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
        GPU
        clDNNPlugin version ......... 2.1
        Build ........... 37988

[ INFO ] Loading network file:
        C:\openvino_models\public\faster_rcnn_inception_v2_coco_2018_01_28_WT\IR\FP16_v1.13\frozen_inference_graph.xml
[ ERROR ] Can't find output layer named bbox_pred

JAIVIN_J_Intel
Employee

Hi Bryan,

Please refer to the following comment by Shubha on another thread:

Shubha R. (Intel) wrote:

The answer is actually buried in the MO documentation, in this part:

A distinct feature of any SSD topology is a part performing non-maximum suppression of proposed bounding boxes. This part of the topology is implemented with dozens of primitive operations in TensorFlow, while in the Inference Engine it is a single layer called DetectionOutput. Thus, to convert an SSD model from TensorFlow, the Model Optimizer should replace the entire sub-graph of operations that implement the DetectionOutput layer with a single DetectionOutput node.

Somewhere in your Model Optimizer output you probably saw the following message:

The graph output nodes "num_detections", "detection_boxes", "detection_classes", "detection_scores" have been replaced with a single layer of type "Detection Output". Refer to IR catalogue in the documentation for information about this layer.

Well, a DetectionOutput layer immediately screams SSD. So instead, for this Faster R-CNN model, use Object_Detection_Sample_SSD.

If you do use the SSD sample, it will work perfectly. object_detection_demo_faster_rcnn is not meant to be used here, because that demo expects IRs with the 3 original outputs, which is why it fails with "Can't find output layer named bbox_pred".

Thanks,

Shubha
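
For reference, the DetectionOutput replacement described above happens at conversion time. A typical Model Optimizer invocation for this model looks roughly like the following (a sketch only; the exact paths and the transformations config file name depend on your OpenVINO release and the TF version the model was exported with):

python mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --transformations_config extensions\front\tf\faster_rcnn_support.json --data_type FP16

You can also confirm which output layers your converted IR actually has. Here is a minimal check with the Python API, assuming a 2020.x release where IECore.read_network is available (the paths are taken from the command in your post):

from openvino.inference_engine import IECore  # OpenVINO 2020.x Python API (assumed)

ie = IECore()
# The .bin weights file sits next to the .xml produced by the Model Optimizer
net = ie.read_network(
    model=r"C:\openvino_models\public\faster_rcnn_inception_v2_coco_2018_01_28_WT\IR\FP16_v1.13\frozen_inference_graph.xml",
    weights=r"C:\openvino_models\public\faster_rcnn_inception_v2_coco_2018_01_28_WT\IR\FP16_v1.13\frozen_inference_graph.bin",
)
# A converted TF Object Detection API model should report a single
# DetectionOutput-backed output rather than separate raw outputs such as
# bbox_pred (the layer the Faster R-CNN demo fails to find)
print(list(net.outputs.keys()))

If this prints a single output name, the IR is SSD-style as far as the demos are concerned, which is why object_detection_demo_ssd_async works and object_detection_demo_faster_rcnn does not.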

BSung8
New Contributor I

Thanks!!
