This code sample shows how to deploy a Caffe-based Faster R-CNN object detection model.
Caffe uses a prototxt file in which all layers are defined, so the layer-name options ("bbox_name", "proposal_name" and "prob_name") default to the names used in Caffe.
But a TensorFlow-based model has no such layer names. I can print all of its layer names (that was for MobileNetV2, and I can print a similar list for MobileNetV1), and there is no "bbox_name", "proposal_name" or "prob_name" among them.
When I load the IR (converted from a TensorFlow-based Faster R-CNN checkpoint), an error is thrown at the following line, because the TensorFlow-based model has no layer matching "bbox_name":
// From the Faster R-CNN sample: look up the bbox output layer by the name passed via -bbox_name
DataPtr bbox_pred_reshapeInPort = ((ICNNNetwork&)recNetwork).getData(FLAGS_bbox_name.c_str());
if (bbox_pred_reshapeInPort == nullptr) {
    throw logic_error(string("Can't find output layer named ") + FLAGS_bbox_name);
}
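In case it is useful, here is a minimal sketch (assuming the OpenVINO 2019 R1 C++ API; the IR file names are placeholders) of how the input and output names that the converted IR actually exposes could be listed, instead of relying on the Caffe defaults:

#include <inference_engine.hpp>
#include <iostream>

int main() {
    // Placeholder IR paths produced by the Model Optimizer
    InferenceEngine::CNNNetReader reader;
    reader.ReadNetwork("frozen_inference_graph.xml");
    reader.ReadWeights("frozen_inference_graph.bin");
    InferenceEngine::CNNNetwork network = reader.getNetwork();

    // Print every input and output name the IR exposes, so the
    // TensorFlow-converted names can be compared with the Caffe defaults
    for (const auto &input : network.getInputsInfo())
        std::cout << "input:  " << input.first << std::endl;
    for (const auto &output : network.getOutputsInfo())
        std::cout << "output: " << output.first << std::endl;
    return 0;
}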
I used the TensorFlow Object Detection API for my training.
How can I get similar layer names for a TensorFlow-based Faster R-CNN model?
Dear nyan,
It seems like your Faster R-CNN model has layers which are unsupported by the Inference Engine. I assume that you're using OpenVINO 2019 R1?
We have tested many versions of TensorFlow Faster R-CNN models that work with both the Model Optimizer and the Inference Engine. You can find them listed (with download URLs) in the following document:
Just search for "faster" in the HTML page and you should find a suitable model. Would these work for you?
Thanks,
Shubha
Thank you. I used object_detection_sample_ssd, and it can be used for Faster R-CNN as well.
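For reference, a typical invocation might look like this (the file names are placeholders; -m, -i and -d are the sample's standard model, image and device options):

./object_detection_sample_ssd -m faster_rcnn.xml -i input.jpg -d CPU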