Intel® Distribution of OpenVINO™ Toolkit
Community assistance for the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

OpenVINO feature extraction

JT
Beginner

I am not able to find any sample that extracts a feature vector from an input image using a pre-trained model. Any help would be appreciated.

Shubha_R_Intel
Employee

Dear JT,

By "feature vector," I assume you mean the results, i.e. the "output blob"? What exactly do you mean by "feature vector"? All of the OpenVINO samples pretty much parse through output blobs and extract post-inference results. Are you looking for something specific, perhaps something earlier in the pipeline?

Thanks,

Shubha

 

JT
Beginner

Shubha,

We are doing feature vector extraction from nasnet_large. I noticed that nasnet_large is one of the pre-trained models OpenVINO supports. So I guess the best way to ask my question is to show you an example of feature vector extraction from nasnet_large. Here is the link: https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py. Basically, the bottleneck file in this example is the feature vector file we need. We are trying to figure out how we can do this in OpenVINO. Really appreciate your help. Thanks.

JT

Shubha_R_Intel
Employee

Dear JT,

OK, I think I understand what you mean. Please look at the object_detection_demo_faster_rcnn sample. What that sample does is add additional output layers onto its Inference Engine "network" (i.e., the model).

See this code :

// Expose extra layers as network outputs (layer name, output port index)
network.addOutput(FLAGS_bbox_name, 0);
network.addOutput(FLAGS_prob_name, 0);
network.addOutput(FLAGS_proposal_name, 0);

So essentially you can add your "bottleneck" layer with addOutput the same way, and then you can compare its contents both before and after inference.

Don't worry about FLAGS_bbox_name and the others; those are just output layer names. You can instead use your "bottleneck_layer" name (or whatever the real name is).

So to summarize:

1) Run your model through the Model Optimizer (MO) as usual and obtain the IR (see the example command after this list).

2) Add the output layers before calling IE infer(). (See the code above.)

3) After you attach these extra outputs, load the modified model to the plugin and then perform infer() again.
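For step 1, a Model Optimizer invocation for a TensorFlow model would look something like the line below. This is only a sketch, not something from this thread: the frozen-graph file name is a placeholder, and the shape [1,331,331,3] assumes NASNet-Large's documented 331x331 RGB input.

python3 mo_tf.py --input_model nasnet_large_frozen.pb --input_shape [1,331,331,3]

This produces the .xml (topology) and .bin (weights) files that make up the IR.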

For step 3, infer() comes after this part of the sample:

// --------------------------- 4. Loading model to the device ------------------------------------------
        slog::info << "Loading model to the device" << slog::endl;
        ExecutableNetwork executable_network = ie.LoadNetwork(network, FLAGS_d);
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 5. Create infer request -------------------------------------------------
        slog::info << "Create infer request" << slog::endl;
        InferRequest infer_request = executable_network.CreateInferRequest();
        // -----------------------------------------------------------------------------
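Putting the steps together, here is a minimal sketch (mine, not from the demo) of extracting the bottleneck activations with the same Inference Engine C++ API used in the snippets above. The layer name "bottleneck_layer", the file paths, and the function name are placeholders; substitute the real layer name from your IR .xml, and fill the input blob with your own preprocessing.

#include <inference_engine.hpp>
#include <string>
#include <vector>

using namespace InferenceEngine;

// Read the IR, attach the bottleneck layer as an extra output, run
// inference, and copy that layer's activations into a std::vector.
std::vector<float> extractFeatureVector(const std::string& xml, const std::string& bin) {
    CNNNetReader reader;
    reader.ReadNetwork(xml);   // topology (.xml)
    reader.ReadWeights(bin);   // weights (.bin)
    CNNNetwork network = reader.getNetwork();

    const std::string bottleneck_name = "bottleneck_layer";  // placeholder name
    network.addOutput(bottleneck_name, 0);  // expose the layer as an output

    Core ie;
    ExecutableNetwork executable_network = ie.LoadNetwork(network, "CPU");
    InferRequest infer_request = executable_network.CreateInferRequest();

    // Fill the input blob with a preprocessed image (resized to the
    // network's input shape, NCHW layout) before inferring.
    const std::string input_name = network.getInputsInfo().begin()->first;
    Blob::Ptr input_blob = infer_request.GetBlob(input_name);
    // ... copy pixel data into input_blob->buffer() here ...

    infer_request.Infer();

    // The extra output appears as a blob named after the layer.
    Blob::Ptr feature_blob = infer_request.GetBlob(bottleneck_name);
    const float* data = feature_blob->buffer().as<float*>();
    return std::vector<float>(data, data + feature_blob->size());
}

This mirrors what the faster_rcnn demo does when it reads its extra outputs back after inference: the added output shows up as a blob under the layer's name.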

 

Hope it helps,

Thanks,

Shubha

JT
Beginner

Great, thanks a lot.

Shubha_R_Intel
Employee

Dear JT,

Sure thing.

Shubha
