I am not able to find any sample that extracts a feature vector from an input image using a pre-trained model. Any help will be appreciated.
Dear JT,
By "feature vector", do you mean the inference results, i.e. the "output blob"? What exactly do you mean by "feature vector"? Pretty much all of the OpenVINO samples parse through output blobs and extract post-inference results. Are you looking for something specific, or for something earlier in the pipeline?
Thanks,
Shubha
Shubha,
We are doing feature vector extraction from nasnet_large, and I noticed that nasnet_large is one of the pre-trained models OpenVINO supports. So perhaps the best way to answer your question is to show you an example of feature vector extraction from nasnet_large. Here is the link: https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py. The bottleneck file in that example is essentially the feature vector file we need. We are trying to figure out how to do this in OpenVINO. I really appreciate your help. Thanks.
JT
Dear JT,
OK, I think I understand what you mean. Please look at object_detection_demo_faster_rcnn. What that sample does is add additional output layers onto its Inference Engine "network" (model).
See this code:

```cpp
network.addOutput(FLAGS_bbox_name, 0);
network.addOutput(FLAGS_prob_name, 0);
network.addOutput(FLAGS_proposal_name, 0);
```
So essentially you can add your "bottleneck" layer as an output using addOutput() in the same way, and then inspect its contents after inference.
Don't worry about FLAGS_bbox_name and the rest; those are just output layer names. You can instead use your "bottleneck_layer" name (or whatever the layer's real name is).
So, to summarize:
1) Run your model through the Model Optimizer (MO) as usual and obtain the IR.
2) Add the extra output layers before calling infer() (see the code above).
3) Load the modified network to the plugin, then perform infer() as usual; the new outputs will be available alongside the original ones.
So after this part:
```cpp
// --------------------------- 4. Loading model to the device ------------------
slog::info << "Loading model to the device" << slog::endl;
ExecutableNetwork executable_network = ie.LoadNetwork(network, FLAGS_d);
// -----------------------------------------------------------------------------

// --------------------------- 5. Create infer request -------------------------
slog::info << "Create infer request" << slog::endl;
InferRequest infer_request = executable_network.CreateInferRequest();
// -----------------------------------------------------------------------------
```
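The steps above could be combined into a minimal sketch, assuming the Inference Engine C++ API of that era. The model filenames, the device string, and the layer name "bottleneck" are placeholders; substitute the real layer name from your converted IR (you can find it by inspecting the .xml file).

```cpp
#include <inference_engine.hpp>
#include <iostream>

using namespace InferenceEngine;

int main() {
    Core ie;

    // 1) Read the IR produced by the Model Optimizer (placeholder filenames).
    CNNNetwork network = ie.ReadNetwork("nasnet_large.xml", "nasnet_large.bin");

    // 2) Expose the intermediate layer as an additional output
    //    ("bottleneck" is a hypothetical layer name).
    network.addOutput("bottleneck", 0);

    // 3) Load the modified network to the plugin and create a request.
    ExecutableNetwork executable_network = ie.LoadNetwork(network, "CPU");
    InferRequest infer_request = executable_network.CreateInferRequest();

    // ... fill the input blob with preprocessed image data here ...
    infer_request.Infer();

    // 4) Read the feature vector out of the new output blob.
    Blob::Ptr feature_blob = infer_request.GetBlob("bottleneck");
    const float* features = feature_blob->buffer().as<float*>();
    std::cout << "Feature vector size: " << feature_blob->size() << std::endl;
    return 0;
}
```

This is a sketch, not a tested program; it requires the OpenVINO Inference Engine headers and libraries to build, and the exact reading API differs slightly between releases.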
Hope it helps,
Thanks,
Shubha
Great, thanks a lot.
Dear JT,
Sure thing.
Shubha