Intel® Distribution of OpenVINO™ Toolkit
Community support and discussions about the Intel® Distribution of OpenVINO™ toolkit, OpenCV, and all things computer vision-related on Intel® platforms.

Blob names in OpenVINO inference

Sengodan__Nathiyaa
Hi, I am trying to run inference for a custom-trained YOLO-v2 model using OpenVINO. The model converts successfully to IR format with the Model Optimizer, but the results from FP32 inference are incorrect. I am debugging the issue by trying to inspect the output of each layer with the following code snippet:

std::string out_name = "input";
Blob::Ptr blob = infer_request.GetBlob(out_name);

The code above works for the input and output layers of the network. However, for all other layers I get the error "Failed to find input or output with name : " (e.g. "Failed to find input or output with name : '0-convolutional'" for the first convolution layer in the network). How do I get the correct blob names for the rest of the layers? Appreciate your help here.
Severine_H_Intel
Employee

Hi, 

In order to visualize the values of an intermediate layer, you need to add it as an output of your model. Then your code snippet will also work for that layer.

To do so, here is a code snippet:

CNNNetwork network = networkReader.getNetwork();
// Register the layer as an extra output before loading the network into the plugin
network.addOutput("layer_name");
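Putting the pieces together, here is a minimal sketch of the full flow. The layer name "0-convolutional" is taken from your error message; the IR file paths and the "CPU" device are assumptions you would replace with your own, and the exact reader/plugin classes may differ slightly between OpenVINO releases:

```cpp
#include <inference_engine.hpp>
#include <string>

using namespace InferenceEngine;

int main() {
    // Read the IR produced by the Model Optimizer (paths are assumptions)
    CNNNetReader networkReader;
    networkReader.ReadNetwork("yolo-v2.xml");
    networkReader.ReadWeights("yolo-v2.bin");
    CNNNetwork network = networkReader.getNetwork();

    // Register the intermediate layer as an extra output.
    // This must happen BEFORE the network is loaded into the plugin.
    network.addOutput("0-convolutional");

    // Load the modified network onto the target device ("CPU" here)
    Core ie;
    ExecutableNetwork execNetwork = ie.LoadNetwork(network, "CPU");
    InferRequest infer_request = execNetwork.CreateInferRequest();

    // ... fill the input blob and run inference ...
    infer_request.Infer();

    // GetBlob now also succeeds for the intermediate layer
    Blob::Ptr blob = infer_request.GetBlob("0-convolutional");
    return 0;
}
```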

Best, 

Severine
