Intel® Distribution of OpenVINO™ Toolkit
Community assistance about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all aspects of computer vision on Intel® platforms.

Get tensor from intermediate layers

Delgado__Guillem
Beginner

Hello!

I have been looking for a way to get the tensors from the intermediate layers after inference, but I can't find anything in the documentation or any example in the samples. How could I extract these tensors?

7 Replies
Shubha_R_Intel
Employee

Take a look at the code in samples\validation_app\Processor.cpp and samples\calibration_tool\calibrator_processors.cpp. There is a similar forum post here:

https://software.intel.com/en-us/forums/computer-vision/topic/804842

Delgado__Guillem
Beginner

Shubha R. (Intel) wrote:

Take a look at the code in samples\validation_app\Processor.cpp and samples\calibration_tool\calibrator_processors.cpp. There is a similar forum post here:

https://software.intel.com/en-us/forums/computer-vision/topic/804842

I've been reviewing the samples, but I was not able to find what I was looking for. I might not be understanding how to proceed correctly.

I am working with a YOLO net and I want to get the activations values from my ObjectDetection layers, I tried these different options:

  • Convert my PB model to XML with mo_tf.py, selecting the layers I want as outputs. However, I get the error mo.utils.error.Error: Stopped shape/value propagation at "2-maxpool" node.
    python3 mo_tf.py --input_model YOLOGRAPH.pb --batch 1 --input "input" --output "4-leaky,1-leaky" --output_dir MY_OUPUT
  • Convert my PB model to XML with mo_tf.py using yolo_v1_v2.json as the custom-operations configuration. However, that is not really useful in this case.
  • Iterate over my network layers or get them by name, but then I do not really know how to create a blob from these layers and read their values once I infer.

Any suggestions?

Shubha_R_Intel
Employee

Hi Guillem:

So can you clarify what you mean by "after inference"?

CNN Models have an input layer, hidden layers, and usually one output layer.

Thanks for using OpenVINO!

Shubha

Delgado__Guillem
Beginner

Shubha R. (Intel) wrote:

Hi Guillem:

So can you clarify what you mean by "after inference"?

CNN Models have an input layer, hidden layers, and usually one output layer.

Thanks for using OpenVINO!

Shubha

Hi Shubha!

By "after inference" I mean after I feed the network with my image/images. So, when I create the InferRequest() and fill the input tensor using

Blob::Ptr imageInput = infer_request.GetBlob(imageInputName); 
unsigned char* data_buffer = static_cast<unsigned char*>(imageInput->buffer());

I would like to get the data tensors of the hidden layers, but I am unable to do it with the InferRequest() because I do not have any blob for these hidden layers. I have tried creating one for a given hidden layer, but I cannot find out how. I also thought about converting my model to OpenVINO format, specifying my hidden layers as outputs, and then using the network.getOutputsInfo() method to create the blob and get the data_buffer, but I get errors on the conversion, so I have discarded this option.
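To illustrate the problem, here is a rough sketch (the names "input" and "4-leaky" are just the ones from my mo_tf.py command above):

Blob::Ptr imageInput = infer_request.GetBlob("input");   // works: "input" is a declared network input
unsigned char* data_buffer = static_cast<unsigned char*>(imageInput->buffer());
// ... fill data_buffer with the image data, then call infer_request.Infer() ...

// This is the part that does not work for a hidden layer such as "4-leaky":
// the request has no blob for layers that are not network inputs or outputs.
// Blob::Ptr hidden = infer_request.GetBlob("4-leaky");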

Thanks for the help!

Shubha_R_Intel
Employee

OK, I think I understand what you want now. Please look at inference_engine\samples\object_detection_demo, where the method of adding your own outputs is demonstrated. You will need to add your own output to the list of output layers. Then you will be able to read that layer's output data after inference (as output data on the InferRequest()).

Here are the steps:

1) Run your model through MO as usual and obtain the IR

2) Add output layers before IE infer(). Look at the following code in the object detection demo:

 // FLAGS_bbox_name, FLAGS_prob_name and FLAGS_proposal_name are layer names taken from the demo's
 // command-line flags; substitute the names of the layers whose activations you want to expose.
 network.addOutput(FLAGS_bbox_name, 0);
 network.addOutput(FLAGS_prob_name, 0);
 network.addOutput(FLAGS_proposal_name, 0);

3) Once you have attached these extra outputs, load the modified network to the plugin and then perform infer() again.

        // --------------------------- 4. Loading model to the plugin ------------------------------------------
        slog::info << "Loading model to the plugin" << slog::endl;

        ExecutableNetwork executable_network = plugin.LoadNetwork(network, {});
        // -----------------------------------------------------------------------------------------------------

        // --------------------------- 5. Create infer request -------------------------------------------------
        InferRequest infer_request = executable_network.CreateInferRequest();
        // ------------------------------------------------------------------------------
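To make the last step concrete, here is a rough sketch (not from the demo itself) of reading back the data for a layer registered with addOutput(); I am using "4-leaky" from your mo_tf.py command as a stand-in for whatever layer name you actually add:

        // --------------------------- Sketch: read back an added intermediate output -------------------------
        // Assumes network.addOutput("4-leaky", 0) was called before LoadNetwork()
        // and the input blob has already been filled.
        infer_request.Infer();
        Blob::Ptr hiddenBlob = infer_request.GetBlob("4-leaky");
        float* hiddenData = hiddenBlob->buffer().as<float*>();   // raw activation values
        size_t hiddenSize = hiddenBlob->size();                  // number of elements in the tensor
        // -----------------------------------------------------------------------------------------------------

From there you can copy or dump the activation values however you need.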
Delgado__Guillem
Beginner

That worked perfectly! Thank you very much for the help!

Ritwika_C_Intel
Employee

 

I want to change the output of intermediate layers before feeding them as inputs to the next layer. How can I do that?
