ubuntu@ubuntu:~/inference_engine_demos_build/intel64/Release$ ./object_detection_demo_faster_rcnn -i /home/ubuntu/Downloads/5.mp4 -m /home/ubuntu/Desktop/resnet101/frozen_inference_graph.xml -d CPU
[ INFO ] InferenceEngine:
API version ............ 2.0
Build .................. custom_releases/2019/R2_f5827d4773ebbe727c9acac5f007f7d94dd4be4e
Description ....... API
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] /home/ubuntu/Downloads/5.mp4
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
MKLDNNPlugin version ......... 2.0
Build ........... 27579
[ INFO ] Loading network files:
[ ERROR ] Can't find output layer named bbox_pred
As you can see above, the demo fails with this error. How can I solve it, please?
Shubha R. (Intel) wrote:
Dear lee, Quid,
What kind of model is it ? Can you kindly try the SSD Sample instead ?
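For concreteness, trying the SSD demo against the same IR might look roughly like this (a sketch: the binary name is assumed to come from the same demos build directory as in the original command, and the input/model paths are reused from above):

```shell
# Run the SSD demo against the same converted IR. TF Object Detection API
# models converted by Model Optimizer end in a DetectionOutput layer,
# which this demo expects (the Faster R-CNN demo looks for "bbox_pred").
./object_detection_demo_ssd_async \
    -i /home/ubuntu/Downloads/5.mp4 \
    -m /home/ubuntu/Desktop/resnet101/frozen_inference_graph.xml \
    -d CPU
```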
I'm sure I used faster-rcnn-resnet101, and I converted it to IR with faster_rcnn_support_api_v1.14.json. Why should I use the SSD sample? Although, it did succeed.
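For reference, the conversion described above is typically invoked along these lines (a sketch using the OpenVINO 2019 R2 flag names; the frozen-graph and pipeline.config paths are assumptions for illustration):

```shell
# Convert a TF Object Detection API Faster R-CNN model to IR.
# The json must match the TF OD API version used to export the model.
python3 mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support_api_v1.14.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config
```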
Dear lee, Quid,
Glad the SSD sample succeeded for you even though you were using faster-rcnn-resnet101. I know this is a weird and confusing state of affairs, but I had to explain this to another Forum Customer recently. If you have any further questions about this, let me know.
It has to do with the message you get from Model Optimizer upon successful conversion. If you see "DetectionOutput" there, the converted model ends in an SSD-style detection head, so the SSD demo is the one to use.
Hope it helps,
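One way to confirm this locally is to inspect the generated IR .xml for a DetectionOutput layer. A minimal sketch (the sample XML below is an illustrative stand-in, not a real IR file):

```python
# Check whether a converted IR ends in a DetectionOutput layer.
# If it does, use the SSD demo rather than the Faster R-CNN demo,
# which expects a layer named "bbox_pred".
import xml.etree.ElementTree as ET

def output_layer_types(ir_xml_text):
    """Return the set of layer types found in an IR .xml document."""
    root = ET.fromstring(ir_xml_text)
    return {layer.get("type") for layer in root.iter("layer")}

# Minimal stand-in for a real IR file, for illustration only.
sample = """<net name="demo" version="5">
  <layers>
    <layer id="0" name="image_tensor" type="Input"/>
    <layer id="1" name="detection_output" type="DetectionOutput"/>
  </layers>
</net>"""

print("DetectionOutput" in output_layer_types(sample))  # True -> SSD-style model
```

In practice you would read the text from your own frozen_inference_graph.xml instead of the `sample` string.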
I am getting the same error for object_detection_demo_faster_rcnn.exe with the Faster R-CNN Inception v2 model from the model zoo repo. I converted the model using the faster_rcnn_support.json file.
[ ERROR ] Layer bbox_pred not found in network
I am using the latest openvino_2020.2.117 SDK, and I was able to run the installation tests successfully too.
I have already filed 3 bugs for YOLOv3 and Faster R-CNN Inception v2 with custom-trained models, but no response yet.
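For OpenVINO 2020.x, the conversion is usually written with the renamed flag. A sketch (the paths are assumptions, and the json file should match the TF Object Detection API version used to export the model, e.g. faster_rcnn_support_api_v1.14.json for models exported with API 1.14):

```shell
# 2020.x Model Optimizer invocation: --transformations_config replaces
# the older --tensorflow_use_custom_operations_config flag.
python3 mo_tf.py \
    --input_model frozen_inference_graph.pb \
    --transformations_config extensions/front/tf/faster_rcnn_support.json \
    --tensorflow_object_detection_api_pipeline_config pipeline.config
```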