nnain1
New Contributor I

Failed to load Tensorflow-based Faster RCNN object detection model

This code sample shows how to deploy a Caffe-based Faster RCNN object detection model.

Caffe uses a prototxt file in which all layers are defined. Layer names like "bbox_name", "proposal_name" and "prob_name" default to those used in Caffe.

But a Tensorflow-based model has no such layer names. I can print all the layer names like this.

That output is for MobileNetV2; I can print a similar one for MobileNetV1. There are no layers named "bbox_name", "proposal_name" or "prob_name".

When I load the IR (converted from a Tensorflow-based FRCNN checkpoint), the following line throws an error, since the Tensorflow-based model has no output named "bbox_name".

DataPtr bbox_pred_reshapeInPort = ((ICNNNetwork&)recNetwork).getData(FLAGS_bbox_name.c_str());

if (bbox_pred_reshapeInPort == nullptr) {
    throw logic_error(string("Can't find output layer named ") + FLAGS_bbox_name);
}

I used the Tensorflow Object Detection API for my training.

How can I get similar layer names for a Tensorflow-based FRCNN model?

2 Replies
Shubha_R_Intel
Employee

Dear nyan,

It seems your Faster RCNN model has layers that are unsupported by the Inference Engine. I assume you're using OpenVINO 2019 R1?

We have tested many versions of Tensorflow Faster RCNN models that work with both the Model Optimizer and the Inference Engine. You can find them listed (with download URLs) in the following document:

https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_T...

Just search for "faster" in the HTML page and you should find a suitable model. Would these work for you?
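For reference, a TF Object Detection API Faster RCNN frozen graph is typically converted with the faster_rcnn_support.json transformation config shipped with the Model Optimizer. A sketch of the invocation (all paths are placeholders for your own installation and model, and the exact flags may differ between OpenVINO releases):

```shell
# Convert a frozen TF Object Detection API Faster RCNN graph to IR.
python3 mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --tensorflow_use_custom_operations_config extensions/front/tf/faster_rcnn_support.json
```

The pipeline.config here is the one produced alongside your checkpoint during training.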

Thanks,

Shubha

 

nnain1
New Contributor I

Thank you. I used object_detection_sample_ssd, and it works for FRCNN as well.
